id
stringlengths
10
10
title
stringlengths
7
231
abstract
stringlengths
3
2.43k
authors
stringlengths
5
21.5k
published_date
stringlengths
20
20
link
stringlengths
33
34
markdown
stringlengths
133
1.92M
2309.05564
Performance of Commercial Quantum Annealing Solvers for the Capacitated Vehicle Routing Problem
Quantum annealing (QA) is a heuristic search algorithm that can run on Adiabatic Quantum Computation (AQC) processors to solve combinatorial optimization problems. Although theoretical studies and simulations on classic hardware have shown encouraging results, these analyses often assume that the computation occurs in adiabatically closed systems without environmental interference. This is not a realistic assumption for real systems; therefore, without extensive empirical measurements on real quantum platforms, theory-based predictions, simulations on classical hardware or limited tests do not accurately assess the current commercial capabilities. This study has assessed the quality of the solution provided by a commercial quantum annealing platform compared to known solutions for the Capacitated Vehicle Routing Problem (CVRP). The study has conducted extensive analysis over more than 30 hours of access to QA commercial platforms to investigate how the size of the problem and its complexity impact the solution accuracy and the time used to find a solution. Our results have found that the absolute error is between 0.12 and 0.55, and the quantum processor unit (QPU) time is between 30 and 46 micro seconds. Our results show that as the constraint density increases, the quality of the solution degrades. Therefore, more than the problem size, the model complexity plays a critical role, and practical applications should select formulations that minimize the constraint density.
Salvatore Sinno, Thomas Groß, Alan Mott, Arati Sahoo, Deepak Honnalli, Shruthi Thuravakkath, Bhavika Bhalgamiya
2023-09-11T15:51:22Z
http://arxiv.org/abs/2309.05564v1
# Performance of Commercial Quantum Annealing Solvers for the Capacitated Vehicle Routing Problem ###### Abstract Quantum annealing (QA) is a heuristic search algorithm that can run on Adiabatic Quantum Computation (AQC) processors to solve combinatorial optimisation problems. Although theoretical studies and simulations on classic hardware have shown encouraging results, these analyses often assume that the computation occurs in adiabatically closed systems without environmental interference. This is not a realistic assumption for real systems; therefore, without extensive empirical measurements on real quantum platforms, theory-based predictions, simulations on classical hardware or limited tests do not accurately assess the current commercial capabilities. This study has assessed the quality of the solution provided by a commercial quantum annealing platform compared to known solutions for the Capacitated Vehicle Routing Problem (CVRP). The study has conducted extensive analysis over more than 30 hours of access to QA commercial platforms to investigate how the size of the problem and its complexity impact the solution accuracy and the time used to find a solution. Our results have found that the absolute error is between 0.12 and 0.55, and the quantum processor unit (QPU) time is between 30 and 46 \(\mu\)s. Our results show that as the constraint density increases, the quality of the solution degrades. Therefore, more than the problem size, the model complexity plays a critical role, and practical applications should select formulations that minimise the constraint density. quantum annealing, CVRP, quantum optimisation, D-Wave, CQM, QUBO ## I Introduction Quantum annealing (QA) [1] is a heuristic search algorithm that can run on an Adiabatic Quantum Computation (AQC) platform using quantum properties such as tunnelling, entanglement and superposition to solve combinatorial optimisation problems. Like Simulated Annealing (SA), QA uses a parameter called tunnelling coefficient (\(\Gamma\)) to control the transversability of the solution landscape and the probability of taking an uphill step at each iteration. QA is emerging as a promising generic approach for tackling complex optimisation problems as recent advancements have made this technology commercially available. Although theoretical studies and simulations on classic hardware (e.g. Path Integral Monte Carlo (PMIC)) have shown encouraging results, these analyses often assume that the computation occurs in adiabatically closed systems with no environmental interference. With this assumption, the algorithm is probabilistic and the Quantum Adiabatic Theorem may bind the trade-off between computation time and the probability that the system found an optimal solution. However, all real-world quantum computations occur in open systems, vulnerable to environmental noise that reduces the probability of finding a good solution and increases computation time. For these reasons, without extensive empirical measurements on real quantum platforms, theory-based predictions, simulations on classical hardware or limited tests do not provide a reliable assessment of the current commercial capabilities. This study assesses the quality of the solution provided by a commercial quantum annealing platform for the CVRP problem, a well-known logistic combinatorial optimisation problem. A well-known limitation of quantum computing is the problem size: it is, therefore, important to investigate how the complexity of the problem (size and constraint density) impacts the quality of the solution. This study considers many simulations (100 for each instance) over more than 30 hours of access to QA commercial platforms. The rest of the paper is organised as follows: Section II analyses previous studies using quantum annealing to solve the CVRP problem. Section III describes the aims of this analysis and the measures used. Section IV describes the method followed, outlining the business problem, the sample used, and the mathematical and QA models. Section V presents the results, and Section VI discusses them. Finally, Section VII concludes the paper with the final conclusions and suggestions for future work. ## II Related Work Syrichas and Crispin [10] propose a simulated Quantum Annealing solver using Quantum Monte Carlo simulation and apply the model to large-scale benchmark datasets. They obtain optimal results by empirically manipulating the hyper-parameters of their model. This work simulates a quantum system through statistical computation, therefore, does not provide an assessment of the real system and the impact of quantum errors. Borcinova [3] describes a flow-based formulation and a hybrid quantum annealing algorithm for solving the Vehicle Routing Problem (VRP) using quantum annealing. The model focuses on designing directly applicable algorithms to solve routing problems in actual companies. The author suggests that the flow base is superior to other approaches as it reduces the number of binary variables. Borowski et al. [4] introduce a hybrid algorithm, DBCAN Solver and Solution Partitioning Solver (SPS), which uses quantum annealing to solve the VRP and the CVRP variant. Their experiments indicate that the hybrid method gives promising results and can find solutions of similar or even better quality than the tested classical algorithms. However, their results are limited to a few nodes. Jain [5] shows how to solve the Travel Salesman Problem (TSP) problem by using an Ising Hamiltonian-based quantum annealer and transforming it in a quadratic unconstrained binary optimisation (QUBO) problem. They suggest that QA can only handle a minor problem (8 or fewer nodes), and even in these cases, the performance in terms of time and accuracy is subpar compared to the classical solver. Salehi et al. [6] provide a detailed analysis of the Travelling Salesman Problem with Time Windows (TSPTW) in the context of solving it on a quantum computer. They introduce unconstrained quadratic binary optimisation and higher-order binary optimisation formulations of this problem. They demonstrate the advantage of edge-based and node-based formulation of the TSPTW problem. Feld et al. [7] investigate different quantum-classic hybrid approaches to solve the CVRP and expose the difficulties in finding feasible solutions. They propose a hybrid method based on a 2-Phase-Heuristic to address these limitations. After running their simulations, they concluded that the critical step was to find an effective way to map the optimisation into the QUBO formulation. The analysis of this literature suggests that: 1. QA simulated on classical hardware can solve large CVRP instances and find optimal solutions by empirically evolving the model hyper-parameters. 2. Previous studies using real quantum processors solve only small problems (\(\leq\) 8 nodes), and even in this case, the results are subpar compared to the classical solver. 3. The quality of the solution depends on the effective way of mapping the problem to a suitable formulation for the quantum solver (QUBO formulation). ## III Aim This empirical work is organised around 4 questions: 1. What is the QA solver's accuracy for the CVRP problem over benchmark data sets? 2. How does the size (number of nodes and routes) impact the QA solver's accuracy? 3. How does the constraint density impact the accuracy? 4. How does the problem complexity impact the time the quantum processor unit uses? To answer question 1, for each problem instance, we run 100 simulations, and we calculate the QA results accuracy using the Mean Absolute Percentage Error (MAPE) \(R\) calculated as follows: \[R_{n}=\frac{1}{n}\sum_{k=0}^{n}\frac{|E_{QA}^{k}-E_{best}|}{E_{best}} \tag{1}\] where: \(E_{\text{best}}\) is the best-known solution for the instance, \(E_{\text{QA}}^{k}\) is the QA result for the iteration \(k\) and \(R_{n}\) is the MAPE after \(n\) iterations. With respect to question 2, we express the size of the problem expressed as the number of binary variables in the QUBO formulation and use this measure to investigate question 4 and analyse how it impacts the Quantum Process Unit (QPU) time. Regarding question 3, we express the constraint density as the tightness of the sample expressed as the total demand divided by the vehicle capacity: \[\tau=\frac{\sum_{i=0}^{n}d_{i}}{k\times Q} \tag{2}\] where: \(d_{i}\) is the demand for the node \(i\), \(k\) is the number of trucks and \(Q\) is the capacity for each truck (we assume that each truck has the same capacity). We use \(\tau\) to measure the model complexity. ## IV Method In Fig. 1 we provide an overview of the process of applying quantum annealing optimisation to the selected problem. We followed these steps: * Define the problem * Collect the Data * Define the mathematical model * Translate the mathematical model into a QA model * Run the QA code on the chosen platform * Analyse the results The following sections will provide details of these steps. ### _The Business Problem_ The CVRP is a variation of the well-known vehicle routing problem (VRP), one of the most studied NP-hard optimisation problems, and it is usually used as a benchmark for new algorithms and computing capabilities. In the CVRP problem, the objective is to minimise the cost of deliveries across all customers and all routes, given that (constraints): 1. A node is visited only once 2. Each vehicle can leave the depot only once 3. Each vehicle starts and ends its route at the depot 4. Each customer's demand is indivisible, and each vehicle shall not exceed its maximum load capacity 5. No route is disconnected from the depot (i.e. sub-routing elimination) We made the following simplifying assumptions: 1. All vehicles have the same capacity 2. All vehicles have the same cost per unit distance travelled 3. Demands, distances between nodes, and delivery costs are known. ### _Sample_ We have used the data set produced by Augera et al. [11], known as the A-series. The A-series is de facto benchmark dataset to assess solutions for the CVRP. From Table I, the A-n32-k5 is the smallest instance, while the A-n80-k10 is the largest. The A-n60-k9 seats in the middle of these two instances. We have, therefore, intensively investigated the following three instances: * A-n32-k5: 32 nodes and 5 trucks. * A-n60-k9: 60 nodes and 9 trucks * A-n80-k10: 80 nodes and 10 trucks ### _Mathematical Model_ Following [3], we express the objective function as follows: \[Minimize\ \sum_{r=1}^{p}\sum_{i=0}^{n}\sum_{j=0,i\neq j}^{n}C_{\text{ij}}x_{ \text{rij}} \tag{3}\] where \(C_{\text{ij}}\) is the distance (cost) between node \(i\) and node \(j\) and the binary variable \(x_{\text{rij}}\) is: \[x_{rij}=\begin{cases}1\text{ if truck r travels from i to j}\\ \hskip 28.452756pt0\text{ otherwise}\end{cases} \tag{4}\] and \(p\) is the number of trucks, \(n\) is the number of cities (nodes) (including the depot (0)). The following equation ensures that each node is visited only once by any truck: \[\sum_{r=1}^{p}\sum_{i=0,i\neq j}^{n}x_{\text{rij}}\ =1\ \forall j\in\{1,..,n\} \tag{5}\] The following equation ensures that each vehicle visits the depot: \[\sum_{j=1}^{n}x_{r0j}=1\ \forall r\in\{1,..,p\} \tag{6}\] \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Problem Name** & **Total Nodes** & **Vehicles** & \(\tau\) & **Best Solution** \\ \hline **A-n32-k5** & **31** & **5** & **0.820** & **784** \\ \hline A-n33-k5 & 32 & 5 & 0.948 & 661 \\ \hline A-n33-k6 & 32 & 6 & 0.901 & 742 \\ \hline A-n34-k5 & 33 & 5 & 0.920 & 778 \\ \hline A-n36-k5 & 35 & 5 & 0.884 & 799 \\ \hline A-n37-k5 & 36 & 5 & 0.814 & 669 \\ \hline A-n37-k6 & 36 & 6 & 0.950 & 949 \\ \hline A-n38-k5 & 37 & 5 & 0.967 & 730 \\ \hline A-n39-k5 & 38 & 5 & 0.950 & 822 \\ \hline A-n39-k6 & 38 & 6 & 0.876 & 831 \\ \hline A-n44-k6 & 43 & 6 & 0.950 & 937 \\ \hline A-n45-k6 & 44 & 6 & 1.050 & 944 \\ \hline A-n45-k7 & 44 & 7 & 0.950 & 1146 \\ \hline A-n46-k7 & 45 & 7 & 0.861 & 914 \\ \hline A-n48-k7 & 47 & 7 & 0.892 & 1073 \\ \hline A-n53-k7 & 52 & 7 & 0.948 & 1010 \\ \hline A-n54-k7 & 53 & 7 & 0.955 & 1167 \\ \hline A-n55-k9 & 54 & 9 & 0.932 & 1073 \\ \hline **A-n60-k9** & **59** & **9** & **0.921** & **1354** \\ \hline A-n61-k9 & 60 & 9 & 0.983 & 1034 \\ \hline A-n62-k8 & 61 & 8 & 0.916 & 1288 \\ \hline A-n63-k9 & 62 & 9 & 0.970 & 1616 \\ \hline A-n63-k10 & 62 & 10 & 0.932 & 1314 \\ \hline A-n64-k9 & 63 & 9 & 0.942 & 1401 \\ \hline A-n65-k9 & 64 & 9 & 0.974 & 1174 \\ \hline A-n69-k9 & 68 & 9 & 0.938 & 1159 \\ \hline **A-n80-k10** & **79** & **10** & **0.948** & **1763** \\ \hline \end{tabular} \end{table} TABLE I: A-CVRP Benchmark Dataset for CVRP (in bold the three instances investigated in this paper) Fig. 1: Quantum Annealing Optimisation Process. Each vehicle has to start and end its route at the depot: \[\sum_{i=0,i\neq j}^{n}x_{\text{rij}}=\sum_{i=0}^{n}x_{\text{rij}}\,\ \forall j\in\left\{0,\ldots,n\right\},\ \ \text{r}\in\left\{1,\ldots,p\right\} \tag{7}\] The load carried by each vehicle should not exceed its capacity: \[\sum_{i=0}^{n}\sum_{j=1,i\neq j}^{n}d_{j}x_{\text{rij}}\leq Q,\ \forall r\in \left\{1,\ldots,p\right\} \tag{8}\] where \(Q\) is the vehicle capacity. The routes must be interconnected (i.e. no isolated loops): this constraint is called the "sub-routing elimination constraint" (SEC). Many SECs have been proposed. One of the best-known is the Dantzig, Fulkerson and Johnson (DFJ) SEC formulation given by equation (9): \[\sum_{r=1}^{p}\sum_{i\in S}^{n}\sum_{j\in S,i\neq j}^{n}x_{\text{rij}}\ \leq|S|-1, \tag{9}\] This formulation introduces \(2^{n}+2n-2\) equations and \(n(n-1)\) ancillary variables [13]. Other authors have developed alternative SEC formulations to reduce the number of constraints and ancillary variables. For an exhaustive analysis, see [17]. One of the most used is the Miller-Tucker-Zemlin (MTZ) SECs [13]. The MTZ SEC uses an extra variable \(u_{i}\) that gets a value for each node, except for the depot. If a vehicle drives from node \(i\) to node \(j\), the value of \(u_{j}\) has to be bigger than the value of \(u_{i}\). The mathematical formulation of the MTZ SEC is as follows: \[u_{j}-u_{i}\ \geq\ q_{j}-Q\left(1-x_{\text{ijk}}\right)\forall i,j\ \in V\backslash\left\{1\right\}\ i\neq j \tag{10}\] \[q_{i}\leq u_{i}\leq Q\forall i\ \in V\backslash\left\{1\right\} \tag{11}\] If vehicle \(k\) drives from node \(i\) to node \(j\), \(x_{\text{ijk}}=0\), and constraint 10 can be written to \(u_{j}\geq u_{i}+q_{j}\). This ensures that the value of \(u_{j}\) is at least \(q_{j}\) more than \(u_{i}\). So, the value of \(u_{j}\) is greater than the value of \(u_{i}\). The MTZ SEC introduces \(n^{2}-n+2\) constraints, \(n(n-1)\) 0-1 variables, and \((n-1)\) continuous variables. Table II compares the MTZ and DFJ SEC formulations and shows that MTZ's approach adds a polynomial number of constraints, while DFJ's approach introduces an exponential number of constraints [13]. For this reason, the MTZ SEC formulation has been adopted. ### _QA Model_ Any given NP-hard problem instance can be translated into an Ising Model (IM) instance with no more than a polynomial expansion in problem size. Any Ising Model instance of \(n\) variables can be minor-embedded onto a quantum annealing graph using \(O(n^{2})\) qubits in the worst case. In essence, quantum annealing processors are designed to solve objective functions expressed as Ising Model (IM). It is easy to prove that the Ising formulation is equivalent to a Quadratic Unconstrained Binary Optimisation (QUBO) formulation, given a simple variable substitution {-1,1} (IM) into {0,1} (QUBO) [8]. For this association with the Ising problem, the QUBO model has emerged as an underpinning of quantum annealing and lies at the heart of experimentation with quantum computers. This section briefly introduces the generic QUBO formulation and how it is adapted to include constraints. Let's consider the optimisation problem: \[\text{Minimise}\ y=-3x_{1}-5x_{2}+2x_{1}x_{2} \tag{12}\] where the variables \(x_{i}\) are binary 0,1. The above function is quadratic in binary variables with a linear part -3\(x_{1}\) -5\(x_{2}\) and a quadratic part \(2x_{1}x_{2}\). As \(x_{i}\) are binary, \(x_{i}\)=\(x_{i}^{2}\), the linear part can be written as -3\(x_{1}^{2}\) -5\(x_{2}^{2}\). We can rewrite the model using matrices. Let's consider the optimisation problem: \[\text{Minimise}\ y=\left(x_{1}\ x_{2}\right)\begin{bmatrix}-3&2\\ 0&-5\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix} \tag{13}\] The matrix notation of this can be written as \(Minimise\ y=\mathbf{x}^{T}Q\mathbf{x}\), where \(\mathbf{x}\) is a column vector of binary variables. The coefficients of the original linear terms appear on the main diagonal of the \(Q\) matrix. From a mathematical point of view, the problem's constraints are inequalities that the solution must respect. By default, the QUBO formulation does not allow constraints. To include constraints, we need to rewrite them as quadratic equations and introduce them as penalties that influence the value of the objective function. Penalties are formulated to have zero value for feasible solutions and a positive amount for invalid solutions. For inequalities, slack variables transform them into equalities. Each penalty should be multiplied by a positive constant to have a comparable magnitude with the objective function; we denote these constants with \(P\) (called the Lagrange multiplier). For example, assuming that one constraint is: \[x_{1}+x_{2}=1 \tag{14}\] Such constraints can be formulated as follows: \[P*\left(x_{1}+x_{2}-1\right)^{2} \tag{15}\] The objective function becomes: \[\text{Minimise}\ y=-3x_{1}-5x_{2}+2x_{1}x_{2}+P*(x_{1}+x_{2}-1)^{2} \tag{16}\] \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Formulation** & **Variables** & **Constraints** & **Constraints for n=20** \\ \hline DFJ & O(n\({}^{2}\)) & O(2\({}^{n}\)) & 2\({}^{n}\)=1,048,576 \\ \hline MTZ & O(n\({}^{2}\)) & O(n\({}^{2}\)) & n\({}^{2}\)=400 \\ \hline \end{tabular} \end{table} TABLE II: Size comparisons for DFJ and MTZ sub-tour elimination formulations. Expanding the constraint and considering \(x_{i}\)=\(x_{i}^{2}\) \[Q=\begin{bmatrix}-3&2\\ 0&-5\end{bmatrix}+P\begin{bmatrix}-1&2\\ 0&-1\end{bmatrix} \tag{17}\] Constraints expressed as inequalities need slack variables. For example: \[2x_{1}+3~{}x_{2}~{}\leq C \tag{18}\] where \(C\) is a positive quantity. Can be expressed as: \[P*\left(2~{}x_{1}+3x_{2}-C+~{}\sum_{k=0}^{r_{u}}2^{k}v_{k}\right)^{2} \tag{19}\] Where \(v_{k}\in[0,1]\) is the slack variable and \(r_{u}\) is such that: \[C\approx\sum_{k=0}^{r_{u}}2^{k}v_{k} \tag{20}\] From this analysis, we conclude that: 1) The number of model parameters (also called the Lagrange multipliers) increases as the number of constraints increases 2) The introduction of inequalities introduced additional slack variables that consume physical resources on the quantum platform As we don't know the value of the Lagrange variable before running the model, we have to run multiple simulations to find the suitable range for these coefficients empirically. ### _Run on Quantum Platform_ D-Wave has developed and commercialised quantum annealing processor units designed to solve Ising Models. Their hardware can be programmed via low-level Quantum Machine Instruction or a standard set of Internet APIs based on RESTFuI service. Client libraries are available in many programming languages, including Python SDK [2]. We have used the Python SDK to access the system as a cloud resource over the Internet. In our simulations, we have used the D-Wave Leap hybrid quantum solver: this service can be used to submit an arbitrary quadratic model. In November 2022, D-Wave introduced a new model called Constrained Quadratic Model (CQM) [2]. CQM can be used for problems with binary and integer variables and one or more constraints. In contrast to previous hybrid solvers, which required that any problem constraints be modelled as a penalty model in the objective function, the CQM solver natively supports equality and inequality constraints through symbolic maths. As the CQM is a new solver, no empirical studies have analysed its performance to the best of our knowledge. ## V Results We consider solution statistics aggregated over 100 distinct run per instance, and we calculate the quantum solution accuracy as MAPE index as described in section III. Fig. 2 shows the MAPE for the three instances. We evaluate the absolute error, defined as: \[AE_{n}=\frac{\left|E_{QA}^{n}-E_{best}\right|}{E_{best}} \tag{21}\] where: \(E_{best}\) is the best-known cost for the instance reported in the CVRP library [http://vrp.atd-lab.inf.puc-rio.brlib](http://vrp.atd-lab.inf.puc-rio.brlib), \(E_{QA}^{n}\) is the QA result for the iteration \(n\). Fig. 3 shows the \(AE_{n}\) for the three instances. Table III summarises the QPU time in \(\mu\)s and shows that the average execution time on the QPU is similar for the three problems. For each problem, the standard deviation for the QPU time is small: the QPU time ranges from \(\mu\)s 32 for the A-n32-k5 to \(\mu\)s 48 for the A-n80-k10 sample. variables) does not impact the QPU time, and the variation between simulations is small. ### _Limitations and Future Work_ Although the new D-Wave CQM solver removes the need for the manual definition of slack variables while allowing the modelling of complex quadratic constraints (e.g., the DTM sub-routing approach defines constraints that are not linear), it remains a black box approach, and the user has less control on the hyperparameters to control the execution. It is worth noting that the QA does not guarantee the best solution. The solution's quality deteriorates as the problem's complexity increases. A way to address this limitation for future work is to split the problem into two phases, a clustering phase and a TSP solution for each cluster. ## VII Conclusions In this paper, we used the D-Wave CQM solver to run multiple simulations for the CVRP problem and compared the results against known optimal solutions for three datasets. The CQM solver offers some benefits as it allows for expressing objective functions using a combination of different types of variables. It also helps express constraints using intuitive symbolic algebraic notation. The direct QUBO formulation could not find any feasible solution as the number of nodes moved above 15 nodes and 3 trucks, this is consistent with other studies. As the constraints add penalty terms to the Hamiltonian of the problem, the model consumes increasing resources while also restricting the dynamic range of the interactions between nodes. We conducted extensive simulations for each problem (more than 100 per problem) and calculated the aggregated MAPE and time to execute. The MAPE index estimates how good the actual quantum solution is compared to the "optimal" solution. Our results show that we must be careful about formulating the problem and especially the constraints equations: the formulation with the least number of constraints has to be preferred to other models. Fig. 3: Absolute Error Fig. 2: MAPE
2309.12018
Viscous fluid dynamics with decaying vacuum energy density
In this work, we investigate the dynamics of bulk viscous models with decaying vacuum energy density (VED) in a spatially homogeneous and isotropic flat Friedmann-Lema\^{i}tre- Robertson-walker (FLRW) spacetime. We particularly are interested to study the viscous model which considers first order deviation from equilibrium, i.e., the Eckart theory. In the first part, using the different forms of the bulk viscous coefficient, we find the main cosmological parameters, like Hubble parameter, scale factor, deceleration parameter and equation of state parameter analytically. We discuss some cosmological consequences of the evolutions and dynamics of the different viscous models with decaying VED. We examine the linear perturbation growth in the context of the bulk viscous model with decaying VED to see if it survives this further level of scrutiny. The second part of the work is devoted to constrain the viscous model of the form $\zeta \propto H$, where $\zeta$ is the bulk viscous coefficient and $H$ is the Hubble parameter, using three different combinations of data from type Ia supernovae (Pantheon), $H(z)$ (cosmic chronometers), Baryon Acoustic Oscillation and $f(z)\sigma_8(z)$ measurements with Markov Chain Monte Carlo (MCMC) method. We show that the considered model is compatible with the cosmological probes, and the $\Lambda$CDM recovered in late-time of the evolution of the Universe. Finally, we obtain selection information criteria (AIC and BIC) to study the stability of the models.
C. P. Singh, Vinita Khatri
2023-09-21T12:37:42Z
http://arxiv.org/abs/2309.12018v2
# Viscous fluid dynamics with decaying vacuum energy density ###### Abstract In this work, we investigate the dynamics of bulk viscous models with decaying vacuum energy density (VED) in a spatially homogeneous and isotropic flat Friedmann-Lemaitre- Robertson-walker (FLRW) spacetime. We particularly are interested to study the viscous model which considers first order deviation from equilibrium, i.e., the Eckart theory. In the first part, using the different forms of the bulk viscous coefficient, we find the main cosmological parameters, like Hubble parameter, scale factor, deceleration parameter and equation of state parameter analytically. We discuss some cosmological consequences of the evolutions and dynamics of the different viscous models with decaying VED. We examine the linear perturbation growth in the context of the bulk viscous model with decaying VED to see if it survives this further level of scrutiny. The second part of the work is devoted to constrain the viscous model of the form \(\zeta\propto H\), where \(\zeta\) is the bulk viscous coefficient and \(H\) is the Hubble parameter, using three different combinations of data from type Ia supernovae (Pantheon), \(H(z)\) (cosmic chronometers), Baryon Acoustic Oscillation and \(f(z)\sigma_{8}(z)\) measurements with Markov Chain Monte Carlo (MCMC) method. We show that the considered model is compatible with the cosmological probes, and the \(\Lambda\)CDM recovered in late-time of the evolution of the Universe. Finally, we obtain selection information criteria (AIC and BIC) to study the stability of the models. ## I Introduction The different observations such as luminosity distances of type Ia supernova, measurements of anisotropy of cosmic microwave background and gravitational lensing have confirmed that our Universe is spatially flat and expanding with an accelerated rate. It has been observed that the Universe contains a mysterious dominant component, called dark energy (DE) with large negative pressure, which leads to this cosmic acceleration [1; 2; 3; 4; 5; 6; 7]. In literature, several models have been proposed to explain the current accelerated expansion of the Universe. The two most accepted DE models are that of a cosmological constant and a slowly varying rolling scalar field (quintessence models)[8; 9; 10; 11]. The cosmological constant \(\Lambda\)(CC for short), initially introduced by Einstein to get the static Universe, is a natural candidate for explaining DE phenomena with equation of state parameter equal to \(-1\). The natural interpretation of CC arises as an effect of quantum vacuum energy. Thus, the cold dark matter based cosmology together with a CC, called the \(\Lambda\)CDM cosmology, is preferred as the standard model for describing the current dynamics of the Universe. It is mostly consistent with the current cosmological observations. However, despite of its success, the \(\Lambda\)CDM model has several strong problems due to its inability to renormalize the energy density of quantum vacuum, obtaining a discrepancy of \(\sim 120\) orders of magnitude between its predicted and observed value, so-called CC or fine-tuning problem [12; 13; 14]. It also has the coincidence problem, i.e., why the Universe transition, from decelerated to an accelerated phase, is produced at late times [15]. Many models have been proposed to tackle these issues. One of the possible proposal is to incorporate energy transfer among the cosmic components. In this respect, the models with time-varying vacuum energy density (VED), also known as 'decaying vacuum cosmology' seems to be promising. The idea of a time-varying VED models (\(\rho_{\Lambda}=\Lambda(t)/8\pi G\)) is physically more viable than the constant \(\Lambda\)[16; 17; 18; 19]. Although no fundamental theory exists to describe a time-varying vacuum, a phenomenological technique has been suggested to parametrize \(\Lambda(t)\). In literature, many authors [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40] have carried out analysis on decaying vacuum energy in which the time-varying vacuum has been phenomenologically modeled as a function of time in various possible ways, as a function of the Hubble parameter. Such attempts suggest that decaying VED model provides the possibility of explaining the acceleration of the Universe as well as it solves both cosmological constant and coincidence problems. Shapiro and Sol\(\grave{a}\)[41], and Sol\(\grave{a}\)[42] proposed a possible connection between cosmology and quantum field theory on the basis of renormalization group (RG) which gives the idea of running vacuum models (RVM), characterized by VED \(\rho_{\Lambda}\), see Refs.[32; 35; 39] for a review. The RVM has been introduced to solve the coincidence problem where the term \(\Lambda\) is assumed to be varying with the Hubble parameter \(H\). Carnerio et al.[27] proposed that the vacuum term is proportional to the Hubble parameter, \(\Lambda(a)\propto H(a)\). However, this model fails to fit the current CMB data. It is interesting to note that RG in quantum field theory (QFT) provides a time-varying vacuum, in which \(\Lambda(t)\) evolves as \(\Lambda\propto H^{2}\)[43]. Basilakos [28] proposed a parametrization of the functional form of \(\Lambda(t)\) by applying a power series expansion in \(H\) up to the second order. Recently, a large class of cosmologies has been discussed where VED evolves like a truncated power-series in the Hubble parameter \(H\), see Refs.[44; 45] and references therein. On the other hand, in recent years, the observations suggest that the Universe is permeated by dissipative fluids. Based on the thermodynamics point of view, phenomenological exotic fluids are supposed to play the role for an alternative DE models. It has been known since long time ago that a dissipative fluid can produce acceleration during the expansion of the Universe [46; 47]. The bulk and shear viscosity are most relevant parts of dissipative fluid. The bulk viscosity characterizes a change in volume of the fluid which is relevant only for the compressed fluids. The shear viscosity characterizes a change in shape of a fixed volume of the fluid which represents the ability of particles to transport momentum. In general, shear viscosity is usually used in connection with the spacetime anisotropy where as bulk viscosity plays the role in an isotropic cosmological models. The dynamics of homogeneous cosmological models has been studied in the presence of viscous fluid and has application in studying the evolution of the Universe. Eckart [48] extended a classical irreversible thermodynamics from Newtonian to relativistic fluids. He proposed the simplest non-causal theory of relativistic dissipative phenomena of first order which was later modified by Landau and Lifshitz [49]. The Eckart theory has some important limitations. It has been found that all the equilibrium states are unstable [50] and the signals can propagate through the fluids faster than the speed of light [51]. Therefore, to resolve these issues, Israel and Stewart [52] proposed a full causal theory of second order. When the relaxation time goes to zero, the causal theory reduces to the Eckart's first order theory. Thus, taking the advantage of this limit of vanishing relaxation time at late time, it has been used widely to describe the recent accelerated expansion of the Universe. An exhaustive reviews on non-causal and causal theories of viscous fluids can be found in Refs.[53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66]. In recent years, the direct observations indicate for viscosity dominated late epoch of accelerating expansion of the Universe. In this respect, many authors have explored the viability of a bulk viscous Universe to explain the present accelerated expansion of the Universe cf.[67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78]. In Eckart theory, the effective pressure of the cosmic fluid is modeled as \(\Pi=-3\zeta H\), where \(\zeta\) is bulk viscous coefficient and \(H\) the Hubble parameter. Bulk viscous coefficient can be assumed as a constant or function of Hubble parameter. It allows to explore the presence of interacting terms in the viscous fluid. Since the imperfect fluid should satisfy the equilibrium condition of thermodynamics, the pressure of the fluid must be greater than the one produced by the viscous term. To resolve this condition, it is useful to add an extra fluid such as cosmological constant. Many authors [89; 90; 91; 92; 93] have studied viscous cosmological models with constant or with time-dependent cosmological constant. Hu and Hu [92] have investigated a bulk viscous model with cosmological constant by assuming bulk viscous proportional to the Hubble parameter. Herrera-Zamorano et al. [93] have studied a cosmological model filled with two fluids under Eckart formalism, a perfect fluid as DE mimicking the dynamics of the CC, while a non-perfect fluid as dark matter with viscosity term. In this paper, we focus on discussing the dynamics of viscous Universe which consider the first order deviation from equilibrium, i.e., Eckart formalism with decaying VED. Using different versions of bulk viscous coefficient \(\zeta\), we find analytically the main cosmological functions such as the scale factor, Hubble parameter, and deceleration and equation of state parameters. We discuss the effect of viscous model with varying VED in perturbation level. We implement the perturbation equation to obtain the growth of matter fluctuations in order to study the contribution of this model in structure formation. We perform a Bayesian Markov Chain Monte Carlo (MCMC) analysis to constrain the parameter spaces of the model using three different combinations involving observational data from type Ia supernovae (Pantheon), Hubble data (cosmic chronometers), Baryon acoustic oscillations and \(f(z)\sigma_{8}(z)\) measurements. We compare our model and concordance \(\Lambda\)CDM to understand the effects of viscosity with decaying vacuum by plotting the evolutions of the deceleration parameter, equation of state parameter and Hubble parameter. We also study the selection information criterion such as AIC and BIC to analyze the stability of the model. The work of the paper is organized as follows. In Section II, we present the basic cosmological equations of Friedmann-Lemaitre-Robertson-Walker (FLRW) geometry with bulk viscosity and decaying VED. In Section III, we find the solution of the field equations by assuming the various forms of bulk viscous coefficient. We discuss the growth rate equations that govern the perturbation in Section IV. Section V presents the observational data and method to be used to constrain the proposed model. The results and discussion on the evolution of the various parameters are presented in Section VI. In Section VII, we present the selection information criterion to distinguish the presented model with concordance \(\Lambda\)CDM. Finally, we conclude our finding in Section VIII. ## II Viscous model with varying-\(\Lambda\) Let us start with the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric in the flat space geometry as the case favoured by observational data \[ds^{2}=-dt^{2}+a^{2}(t)\left[dr^{2}+r^{2}(d\theta^{2}+sin^{2}\theta d\phi^{2 })\right], \tag{1}\] where \((r,\theta,\phi)\) are the co-moving coordinates and \(a(t)\) is the scale factor of the Universe. The large scale dynamics of (1) is described by the Einstein field equations, which include the cosmological constant \(\Lambda\) and is given by \[G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi G(T_{\mu\nu}+g_{\mu\nu}\rho_{ \Lambda}), \tag{2}\] where \(G_{\mu\nu}\) is the Einstein tensor, \(\rho_{\Lambda}=\Lambda/8\pi G\) is the vacuum energy density (the energy density associated to CC vacuum term) and \(T_{\mu\nu}\) is the energy-momentum tensor of matter. It is to be noted that for simplicity we use geometrical units \(8\pi G=c=1\). We introduce a bulk viscous fluid through the energy-momentum tensor which is given by [94] \[T_{\mu\nu}=(\rho_{m}+P)u_{\mu}u_{\nu}+g_{\mu\nu}P, \tag{3}\] where \(u^{\mu}\) is the fluid four-velocity, \(\rho_{m}\) is the density of matter and \(P\) is the pressure which is composed of the barotropic pressure \(p_{m}\) of the matter fluid plus the viscous pressure \(\Pi\), i.e., \(P=p_{m}+\Pi\). The origin of bulk viscosity is assumed as a deviation of any system from the local thermodynamic equilibrium. According to the second law of thermodynamics, the re-establishment to the thermal equilibrium is a dissipative processes which generates entropy. Due to generation of entropy, there is an expansion in the system through a bulk viscous term. In homogeneous and isotropic cosmological models, the viscous fluid is characterized by a bulk viscosity. It is mostly based on the Eckart's formalism [48] which can be obtained from the second order theory of non-equilibrium thermodynamics proposed by Israel and Stewart [52] in the limit of vanishing relaxation time. The viscous effect can be defined by the viscous pressure \(\Pi=-3\zeta H\), where \(\zeta\) is the bulk viscous coefficient and \(H\) is the Hubble parameter. The bulk viscous coefficient \(\zeta\) is assumed to be positive on thermodynamical grounds. Therefore, it makes the effective pressure as a negative value which leads to modification in energy-momentum tensor of perfect fluid. If we denote the total energy-momentum tensor \(T_{\mu\nu}+g_{\mu\nu}\rho_{\Lambda}\) as modified \(\tilde{T}_{\mu\nu}\) on right hand side of field equations (2), then the modified \(\tilde{T}_{\mu\nu}\) can be assumed the same form as \(T_{\mu\nu}\), that is, \(\tilde{T}_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+g_{\mu\nu}p\), where \(\rho=\rho_{m}+\rho_{\Lambda}\) and \(p=p_{m}-3\zeta H+p_{\Lambda}\) are the total energy density and pressure, respectively. Further, we assume that the bulk viscous fluid is the non-relativistic matter with \(p_{m}=0\). Thus, the contribution to the total pressure is only due to the sum of negative viscous pressure, \(-3\zeta H\) and vacuum energy pressure, \(p_{\Lambda}=-\rho_{\Lambda}\). Using the modified energy-momentum tensor as discussed above, the Einstein field equations (2) describing the evolution of FLRW Universe dominated by bulk viscous matter and vacuum energy yield \[3H^{2}=\rho=\rho_{m}+\rho_{\Lambda}, \tag{4}\] \[2\dot{H}+3H^{2}=-p=3\zeta H+\rho_{\Lambda}. \tag{5}\] where \(H=\dot{a}/a\) is the Hubble parameter and an over dot represents the derivative with respect to cosmic time \(t\). In this paper, we propose the evolution of the Universe based on decaying vacuum models, i.e., vacuum energy density as a function of the cosmic time. From (2), the Bianchi identity \(\nabla^{\mu}G_{\mu\nu}=0\) gives \[\nabla^{\mu}\tilde{T}_{\mu\nu}=0, \tag{6}\] or, equivalently, \[\dot{\rho}_{m}+3H(\rho_{m}+p_{m}-3\zeta H+\rho_{\Lambda}+p_{\Lambda})=-\dot{ \rho}_{\Lambda}, \tag{7}\] which imply that the there is a coupling between a dynamical \(\Lambda\) term and viscous CDM. Therefore, there is some energy exchange between the viscous CDM fluid and vacuum. Using the equation of state of the vacuum energy \(p_{\Lambda}=-\rho_{\Lambda}\) and \(p_{m}=0\), Eq. (7) leads to \[\dot{\rho}_{m}+3H(\rho_{m}-3\zeta H)=-\dot{\rho}_{\Lambda}. \tag{8}\] Now, combining Eqs.(4) and (8), we get \[\dot{H}+\frac{3}{2}H^{2}=\frac{1}{2}\rho_{\Lambda}+\frac{3}{2}\zeta H. \tag{9}\] The dynamics of the Universe depends on the specific forms of \(\rho_{\Lambda}\) and \(\zeta\). ## III Solution of field equations The evolution equation (9) has three independent unknown quantities, namely, \(H\), \(\zeta\) and \(\rho_{\Lambda}\). We get the solution only if \(\zeta\) and \(\rho_{\Lambda}\) are specified. In this paper, we parameterize the functional form of \(\rho_{\Lambda}\) as a function of Hubble parameter. The motivation for a function \(\rho_{\Lambda}=\rho_{\Lambda}(H)\) can be assumed from different points of view. Although the correct functional form of \(\rho_{\Lambda}\) is not known, a quantum field theory (QFT) approach within the context of the renormalization group (RG) was proposed in Refs.[95; 96] and further studied by many authors [29; 32; 35; 42; 97; 98]. In Ref. [36], the following ratio has been defined between the two fluid components: \[\gamma=\frac{\rho_{\Lambda}-\rho_{\Lambda_{0}}}{\rho_{m}+\rho_{\Lambda}}, \tag{10}\] where \(\rho_{\Lambda_{0}}\) is a constant vacuum density. If \(\rho_{\Lambda}=\rho_{\Lambda_{0}}\), then \(\gamma=0\), and we get \(\Lambda\)CDM model. On the other hand, if \(\rho_{\Lambda_{0}}\neq 0\), then we get \[\rho_{\Lambda}=\rho_{\Lambda 0}+\gamma(\rho_{m}+\rho_{\Lambda})=\rho_{\Lambda 0} +3\gamma H^{2}. \tag{11}\] The above proposal was first considered by Shapiro and Sola [41] in context of RG. Many authors have studied the evolution of the Universe by assuming this form [33; 34; 40]. Hereafter, we shall focus on the simplest form of \(\rho_{\Lambda}\) which evolves with the Hubble rate. Specifically, in this paper we consider \[\rho_{\Lambda}=c_{0}+3\nu H^{2}, \tag{12}\] where \(c_{0}=3H_{0}^{2}(\Omega_{\Lambda 0}-\nu)\) is fixed by the boundary condition \(\rho_{\Lambda}(H_{0})=\rho_{\Lambda 0}\). The suffix '0' denotes the present value of the parameter. The dimensionless coefficient \(\nu\) is the vacuum parameter and is expected to be very small value \(|\nu|\ll 1\). A non-zero value of it makes possible the cosmic evolution of the vacuum. The choice of \(\zeta\) generates different viscous models and in literature there are different approaches to assume the evolution of bulk viscosity. In this paper, we consider the most general form of the bulk viscous term \(\zeta\), which is assumed to be the sum of three terms: the first term is a constant, \(\zeta_{0}\), the second term is proportional to the Hubble parameter \(H=\dot{a}/a\) which is related to the expansion and the third term is proportional to the acceleration, \(\ddot{a}/\dot{a}\). Thus, we assume the parametrization of bulk viscous coefficient in the form[72; 78; 99; 100] \[\zeta=\zeta_{0}+\zeta_{1}\frac{\dot{a}}{a}+\zeta_{2}\frac{\ddot{a}}{\dot{a}}, \tag{13}\] where \(\zeta_{0}\), \(\zeta_{1}\) and \(\zeta_{2}\) are constants to be determined by the observations. The term \(\ddot{a}/\dot{a}\) in Eq. (13) can be written as \(\ddot{a}/aH\). The basic idea about the assumption of \(\zeta\) in Eq.(13) is that the dynamic state of the fluid influences its viscosity in which the transport viscosity is related to the velocity and acceleration. In what follows, we study the decaying vacuum model defined in (12) with different forms of bulk viscous coefficient as defined in Eq.(13). ### Cosmology with \(\zeta=\zeta_{0}\)=const. This is the simplest parametrization of Eckart's bulk viscosity model. Many authors [69; 70; 77; 84; 87; 88; 91; 101; 102; 103] have studied the viscous cosmological models with constant bulk viscous coefficient. Using the decaying vacuum form (12) and taking \(\zeta=\zeta_{0}=const.\), where \(\zeta_{1}=\zeta_{2}=0\) in Eq.(13), the evolution equation (9) reduces to \[\dot{H}+\frac{3}{2}(1-\nu)H^{2}-\frac{3}{2}\zeta_{0}H=\frac{1}{2}c_{0}. \tag{14}\] Solving (14) for \(\nu<1\), we get \[H=\frac{\zeta_{0}}{2(1-\nu)}+\sigma\left(\frac{1+e^{-3(1-\nu)\sigma t}}{1-e^{- 3(1-\nu)\sigma t}}\right), \tag{15}\] where \(\sigma=\sqrt{(\frac{\zeta_{0}}{2(1-\nu)})^{2}+\frac{H_{0}^{2}(\Omega_{\Lambda 0 }-\nu)}{(1-\nu)}}\). Here, we have used \(c_{0}=3H_{0}^{2}(\Omega_{\Lambda 0}-\nu)\). The above equation simplifies to give \[H=\frac{\zeta_{0}}{2(1-\nu)}+\sigma\coth\left(\frac{3}{2}(1-\nu)\sigma t\right). \tag{16}\] It can be observed that the solution reduces to the standard \(\Lambda\) for \(\zeta_{0}=0\) and \(\nu=0\), whereas for \(\zeta_{0}=0\) and \(\nu\neq 0\) it gives the solution for \(\Lambda(t)\) model from quantum field theory[29]. Using the Hubble parameter \(\dot{H}=\dot{a}/a\), the scale factor of the model \(a(t)\) with the condition \(a(t_{0})=1\) is given by \[a(t)=e^{\frac{\zeta_{0}}{2(1-\nu)}t}\left(\sinh(\frac{3}{2}(1-\nu)\sigma t) \right)^{\frac{2}{2(1-\nu)}}, \tag{17}\] which shows that the scale factor increases exponentially as \(t\) increases. From (17), one can observe that, in general, it is not possible to express cosmic time \(t\) in terms of the scale factor \(a\). It is possible only if \(\zeta_{0}=0\). In the absence of bulk viscosity, we obtain the result of decaying vacuum model as discussed in Ref.[29]. Further, for constant \(\Lambda\), the solution reduced to the \(\Lambda\)CDM model with no viscosity. To discuss the decelerated and accelerated phases and its transition during the evolution of the Universe, we study a cosmological parameter, known as 'deceleration parameter', \(q\), which is defined as \[q=-\frac{\ddot{a}}{a}\frac{1}{H^{2}}=-\left(1+\frac{\dot{H}}{H^{2}}\right). \tag{18}\] In cosmology, \(q\) is a dimensionless measure of the cosmic acceleration. The expansion of the Universe decelerates if \(q>0\), whereas it accelerates for \(q<0\) and \(q=0\) gives the marginal inflation. The time-dependent \(q\) may describe the transition from one phase to another phase. Using (16), the deceleration parameter is calculated as \[q=-1+\frac{3}{2}\frac{(1-\nu)\sigma^{2}\csc^{2}h(\frac{3}{2}(1-\nu)\sigma t)}{ \left(\frac{\zeta_{0}}{2(1-\nu)}+\sigma\coth(\frac{3}{2}(1-\nu)\sigma t)\right) ^{2}}. \tag{19}\] For sake of completeness, we discuss another important cosmological parameter, known as effective equation of state (EoS) parameter, which is defined as \[w_{eff}=-1-\frac{2}{3}\frac{\dot{H}}{H^{2}}. \tag{20}\] Using (16), we get \[w_{eff}=-1+\frac{(1-\nu)\sigma^{2}\csc^{2}h(\frac{3}{2}(1-\nu)\sigma t)}{\left( \frac{\zeta_{0}}{2(1-\nu)}+\sigma\coth(\frac{3}{2}(1-\nu)\sigma t)\right)^{2}}. \tag{21}\] ### Cosmology with \(\zeta=\zeta_{1}\)\(H\) Let us consider the case where bulk viscous coefficient is proportional to the Hubble parameter, i.e., \(\zeta=\zeta_{1}H\). Such a form of \(\zeta\) has been studied by many authors [53; 59; 72; 80; 104; 105]. This type of bulk viscous coefficient can be obtained by assuming \(\zeta_{0}=\zeta_{2}=0\) in Eq.(13). Thus, using \(\zeta=\zeta_{1}H\) and Eq.(12) into Eq.(9), we get the evolution equation for Hubble parameter as \[\dot{H}+\frac{3}{2}(1-\zeta_{1}-\nu)H^{2}-\frac{1}{2}c_{0}=0. \tag{22}\] The above equation with change of a variable from \(t\) to \(x=\ln a\) can be written as \[\frac{dh^{2}}{dx}+3(1-\zeta_{1}-\nu)h^{2}=3(\Omega_{\Lambda 0}-\nu), \tag{23}\] where \(h=H/H_{0}\) is the dimensionless Hubble parameter and \(\Omega_{\Lambda 0}=\rho_{\Lambda 0}/3H_{0}^{2}\). Assuming \((\zeta_{1}+\nu)<1\) and using the normalized scale factor -redshift relation, \(a=(1+z)^{-1}\), we can express the normalized Hubble function \(E(z)\equiv H(z)/H_{0}\) as \[\begin{split} E(z)&=\frac{1}{(1-\zeta_{1}-\nu)^{1/ 2}}\\ &\times\left[(1-\zeta_{1}-\Omega_{\Lambda 0})(1+z)^{3(1-\zeta_{1}- \nu)}+\Omega_{\Lambda 0}-\nu\right]^{1/2}.\end{split} \tag{24}\] From the above equation, it is clear that for \(\nu=0\) and \(\zeta_{1}=0\), we recover exactly the \(\Lambda\)CDM expansion model whereas only \(\zeta_{1}=0\) gives the solution obtained in Ref.[40]. It is observed that at very late time we get an cosmological constant dominated era, \(H\approx H_{0}\sqrt{\frac{\Omega_{\Lambda 0}-\nu}{(1-\zeta_{1}-\nu)}}\), which implies a de Sitter phase of the scale factor. Using \(H=\dot{a}/a\), the solution for the scale factor in terms of cosmic time \(t\) is given by \[\begin{split} a&=\left(\frac{(1-\zeta_{1}-\Omega_ {\Lambda 0})}{\Omega_{\Lambda 0}-\nu}\right)^{\frac{1}{3(1-\zeta_{1}-\nu)}}\\ &\times\left[\sinh(\frac{3}{2}\sqrt{(1-\zeta_{1}-\nu)(\Omega_{ \Lambda 0}-\nu)}\;H_{0}\;t)\right]^{\frac{2}{3(1-\zeta_{1}-\nu)}}\end{split} \tag{25}\] It can be observed that the scale factor evolves as power-law expansion, i.e., \(a\propto t^{2/3(1-\zeta_{1}-\nu)}\) for small values of \(t\) whereas it expands exponentially, i.e., \(a\propto\exp\sqrt{\frac{(\Omega_{\Lambda 0}-\nu)}{3(1-\zeta_{1}-\nu)}}H_{0}t\) for large values of time \(t\). In other words, the model expands with decelerated rate in early time of its evolution and expands with accelerated rate in late time of its evolution. From Eq. (25), we can find the cosmic time in terms of the scale factor, which is given by \[t(a)=\frac{2}{3H_{0}\sqrt{(1-\zeta_{1}-\nu)(\Omega_{\Lambda 0}-\nu)}}\sinh^{-1} \left[\left(\frac{a}{a_{I}}\right)^{\frac{3(1-\zeta_{1}-\nu)}{2}}\right] \tag{26}\] where \(a_{I}=\left(\frac{(1-\zeta_{1}-\Omega_{\Lambda 0})}{(\Omega_{\Lambda 0}-\nu)} \right)^{1/3(1-\zeta_{1}-\nu)}\). Using (24), the value of \(q\) in terms of redshift is calculated as \[q(z)=-1+\frac{3}{2}\frac{(1-\zeta_{1}-\Omega_{\Lambda 0})(1+z)^{3(1-\zeta_{1}- \nu)}}{\left[\frac{(\Omega_{\Lambda 0}-\nu)}{(1-\zeta_{1}-\nu)}+\left(1- \frac{(\Omega_{\Lambda 0}-\nu)}{(1-\zeta_{1}-\nu)}\right)(1+z)^{3(1-\zeta_{1}- \nu)}\right]} \tag{27}\] The above equation shows that the dynamics of \(q\) depends on the redshift which describes the transition of the Universe from decelerated to accelerated phase. We observe that as \(z\rightarrow-1\), \(q(z)\) approaches to \(-1\). However, the model decelerates or accelerates if \(\Omega_{\Lambda 0}=\nu\), which gives \(q=-1+1.5(1-\zeta_{1}-\nu)\). Thus, a cosmological constant is required for a transition phase. Also, for \(z=0\), we find the present value of \(q\) which is given by \[q_{0}=-1+1.5(1-\zeta_{1}-\Omega_{\Lambda 0}). \tag{28}\] The transition redshift, \(z_{tr}\) of the Universe, which is defined as a zero point of the deceleration parameter, \(q=0\), can be calculated as \[z_{tr}=-1+\left(\frac{2(\Omega_{\Lambda 0}-\nu)}{(3(1-\zeta_{1}-\nu)-2)(1- \zeta_{1}-\Omega_{\Lambda 0})}\right)^{\frac{1}{3(1-\zeta_{1}-\nu)}}. \tag{29}\] In this case, the effective EoS parameter is defined by \(w_{eff}=-1-\frac{1}{3}\frac{d\ln h^{2}}{dx}\), where \(x=\ln a\) and \(h=H/H_{0}\). Using Eq. (24), we get \[w_{eff}(z)=-1+\frac{(1-\zeta_{1}-\Omega_{\Lambda 0})(1+z)^{3(1-\zeta_{1}-\nu)}}{ \left[\frac{(\Omega_{\Lambda 0}-\nu)}{(1-\zeta_{1}-\nu)}+\left(1-\frac{(\Omega_{ \Lambda 0}-\nu)}{(1-\zeta_{1}-\nu)}\right)(1+z)^{3(1-\zeta_{1}-\nu)}\right]}. \tag{30}\] The present value of \(w_{eff}\) at \(z=0\) is given by \[w_{eff}(z=0)=-1+(1-\zeta_{1}-\Omega_{\Lambda 0}). \tag{31}\] We can observe that the model will accelerate provided \(3w_{eff}(z=0)+1=-2+3(1-\zeta_{1}-\Omega_{\Lambda 0})<0\). In Section IV, we will perform the observational analysis to estimate the parameters of the model and analyse the evolution and dynamics of the model in detail. ### Cosmology with \(\zeta=\zeta_{0}+\zeta_{1}H\) In this subsection, we assume that the bulk viscous coefficient is a linear combination of two terms: \(\zeta_{0}\) and \(\zeta_{1}H\), i.e., \(\zeta=\zeta_{0}+\zeta_{1}H\). In literature, many authors [72; 78; 79] have assumed such a form of \(\zeta\) to study the dynamics of Universe. Using (12), Eq. (9) takes the form \[\dot{H}+\frac{3}{2}(1-\zeta_{1}-\nu)H^{2}-\frac{3}{2}\zeta_{0}H=\frac{1}{2}c_{ 0}. \tag{32}\] Assuming \((\zeta_{1}+\nu)<1\), we integrate (32) to obtain the solution for Hubble parameter which is given by \[H=\frac{\zeta_{0}}{2(1-\zeta_{1}-\nu)}+\sigma_{1}\left(\frac{1+e^{-3(1-\zeta_{1} -\nu)\sigma_{1}t}}{1-e^{-3(1-\zeta_{1}-\nu)\sigma_{1}t}}\right), \tag{33}\] where \(\sigma_{1}=\sqrt{\left(\frac{\zeta_{0}}{2(1-\zeta_{1}-\nu)}\right)^{2}+\frac{H_ {0}^{2}(\Omega_{\Lambda 0}-\nu)}{(1-\zeta_{1}-\nu)}}\). On simplification, the above equation can be written as \[H=\frac{\zeta_{0}}{2(1-\zeta_{1}-\nu)}+\sigma_{1}\;\coth\left(\frac{3}{2}(1- \zeta_{1}-\nu)\sigma_{1}t\right). \tag{34}\] The corresponding expression for the scale factor in normalized unit has the form \[a=e^{\frac{\zeta_{0}}{2(1-\zeta_{1}-\nu)}t}\left[\sinh\left(\frac{3}{2}(1-\zeta_{ 1}-\nu)\sigma_{1}t\right)\right]^{\frac{2}{3(2-\zeta_{1}-\nu)}}. \tag{35}\] The respective deceleration parameter and effective EoS parameter are calculated as \[q=-1+\frac{3(1-\zeta_{1}-\nu)\sigma_{1}^{2}\csc^{2}h(\frac{3}{2}(1-\zeta_{1}- \nu)\sigma_{1}t)}{2\left(\frac{\zeta_{0}}{2(1-\zeta_{1}-\nu)}+\sigma_{1}\coth (\frac{3}{2}(1-\zeta_{1}-\nu)\sigma_{1}t)\right)^{2}} \tag{36}\] and \[w_{eff}=-1+\frac{(1-\zeta_{1}-\nu)\sigma_{1}^{2}\csc^{2}h(\frac{3}{2}(1-\zeta_ {1}-\nu)\sigma_{1}t)}{\left(\frac{\zeta_{0}}{2(1-\zeta_{1}-\nu)}+\sigma_{1} \coth(\frac{3}{2}(1-\zeta_{1}-\nu)\sigma_{1}t)\right)^{2}} \tag{37}\] ### Cosmology with \(\zeta=\zeta_{0}+\zeta_{1}H+\zeta_{2}(\ddot{a}/aH)\) Lastly, we assume a more general form of bulk viscous coefficient which is a combination of three terms: \(\zeta_{0}\), \(\zeta_{1}H\) and \(\zeta_{2}\ddot{a}/aH\). This generalized form of \(\zeta\) is well motivated as discussed earlier and has been studied by many authors [66; 71; 72; 81; 99; 100]. This form of \(\zeta\) can be rewritten as \[\zeta=\zeta_{0}+\zeta_{1}H+\zeta_{2}(\frac{\dot{H}}{H}+H). \tag{38}\] Using Eqs.(38) and (12), Eq.(9) reduces to \[(1-\frac{3}{2}\zeta_{2})\dot{H}+\frac{3}{2}(1-\zeta_{1}-\zeta_{2}-\nu)H^{2}- \frac{3}{2}\zeta_{0}H-\frac{1}{2}c_{0}=0, \tag{39}\] which on integration, it gives \[H=\frac{\zeta_{0}}{2(1-\zeta_{1}-\zeta_{2}-\nu)}+\sigma_{2}\coth\left(\frac{3 }{2}\frac{(1-\zeta_{1}-\zeta_{2}-\nu)\sigma_{2}}{(1-\frac{3}{2}\zeta_{2})}t \right), \tag{40}\] where \(\sigma_{2}=\sqrt{\left(\frac{\zeta_{0}}{2(1-\zeta_{1}-\zeta_{2}-\nu)}\right)^ {2}+\frac{(1-\frac{3}{2}\zeta_{1})H_{0}^{2}(\Omega\alpha_{0}-\nu)}{(1-\zeta_{ 1}-\zeta_{2}-\nu)}}\). The solution for the scale factor can be obtained as \[a=e^{\frac{\zeta_{0}}{2(1-\zeta_{1}-\zeta_{2}-\nu)}t}\left[\sinh\left(\frac{ 3}{2}\frac{(1-\zeta_{1}-\zeta_{2}-\nu)\sigma_{2}}{(1-\frac{3}{2}\zeta_{2})}t \right)\right]^{\frac{2(1-\frac{3}{2}\zeta_{2})}{5(1-\zeta_{1}-\zeta_{2}-\nu )}}. \tag{41}\] The deceleration parameter and effective EoS parameter are calculated as \[q=-1+\frac{3}{2}\frac{\frac{(1-\zeta_{1}-\zeta_{2}-\nu)}{(1-\frac{3}{2}\zeta_ {2})}\sigma_{2}^{2}\csc^{2}h(\frac{3}{2}(\frac{1-\zeta_{1}-\zeta_{2}-\nu)}{( 1-\frac{3}{2}\zeta_{2})}\sigma_{2}t)}{\left(\frac{\zeta_{0}}{2(1-\zeta_{1}- \zeta_{2}-\nu)}+\sigma_{2}\coth(\frac{3}{2}(\frac{1-\zeta_{1}-\zeta_{2}-\nu )}{(1-\frac{3}{2}\zeta_{2})}\sigma_{2}t)\right)^{2}}, \tag{42}\] and \[w_{eff}=-1+\frac{\frac{(1-\zeta_{1}-\zeta_{2}-\nu)}{(1-\frac{3}{2}\zeta_{2})} \sigma_{2}^{2}\csc^{2}h(\frac{3}{2}(\frac{1-\zeta_{1}-\zeta_{2}-\nu)}{(1-\frac {3}{2}\zeta_{2})}\sigma_{2}t)}{\left(\frac{\zeta_{0}}{2(1-\zeta_{1}-\zeta_{2}- \nu)}+\sigma_{2}\coth(\frac{3}{2}\frac{(1-\zeta_{1}-\zeta_{2}-\nu)}{(1-\frac {3}{2}\zeta_{2})}\sigma_{2}t)\right)^{2}}. \tag{43}\] ## IV Growth of perturbations In cosmic structure formation it is assumed that the present abundant structure of the Universe is developed through gravitational amplification of small density perturbations generated in its early evolution. In this section, we briefly discuss the linear perturbation within the framework of viscous fluid with varying \(\Lambda(t)\). We refer the reader to Refs. [106; 107] for the detailed perturbation equations since here we have discussed some basic equations only. The differential equation for the matter density contrast \(\delta_{m}\equiv\delta\rho_{m}/\rho_{m}\) for our model considered here can be approximated as follows [108]: \[\delta^{\prime\prime}_{m}+\left(\frac{3}{a}+\frac{H^{\prime}(a)}{H(a)}\right) \delta^{\prime}_{m}-\frac{4\pi G\rho_{m}}{H^{2}(a)}\frac{\delta_{m}}{a^{2}}=0 \tag{44}\] where prime represents derivative with respect to the scale factor \(a\). The above second-order differential equation turns out to be accurate since the main effects come from the different expression of the Hubble function. We consider the Hubble function as obtained in Part B of Sect. III. Equation (44) describes the smoothness of the matter perturbation in extended viscous \(\Lambda(t)\) model. The linear growth rate of the density contrast, \(f\), which is related to the peculiar velocity in the linear theory [109] is defined as \[f(a)=\frac{d\ln D_{m}(a)}{d\ln a}, \tag{45}\] where \(D_{m}(a)=\delta_{m}(a)/\delta_{m}(a=1)\) is the linear growth function. The weighted linear growth rate, denoted by \(f\sigma_{8}\), is the product of the growth rate \(f(z)\), defined in (45), and \(\sigma_{8}(z)\). Here, \(\sigma_{8}\) is the root-mean-square fluctuation in spheres with radius \(8h^{-1}\) Mpc scales [110; 111], and it is given by [112] \[\sigma_{8}(z)=\frac{\delta_{m}(z)}{\delta_{m}(z=0)}\sigma_{8}(z=0). \tag{46}\] Using (45) and (46), the weighted linear growth rate is given by \[f\sigma_{8}(z)=-(1+z)\frac{\sigma_{8}(z=0)}{\delta_{m}(z=0)}\frac{d\delta_{m}}{ dz}. \tag{47}\] ## V Data and methodology In this section, we present the data and methodology used in this work. We constrain the parameters of the \(GR-\Lambda\)CDM and \(\zeta=\zeta_{1}H\) with varying \(\Lambda\) models using a large, robust and latest set of observational data which involve observations from: (i) distant type Ia supernovae (SNe Ia); (ii) a compilation of cosmic chronometer measurements of Hubble parameter \(H(z)\) at different redshifts; (iii) baryonic acoustic oscillations (BAO); and (iv) \(f(z)\sigma_{8}(z)\) data. A brief description of each of datasets are as follows: ### Pantheon SNe Ia sample The most known and frequently used cosmological probe are distant type Ia supernovae (SNe Ia) which are used to understand the actual evolution of the Universe. A supernova explosion is an extremely luminous event, with its brightness being comparable with the brightness of its host galaxy [113]. We use the recent SNe Ia data points, the so-called Pantheon sample which includes 1048 data points of luminosity distance in the redshift range \(0.01<z<2.26\). Specifically, one could use the observed distance modulo, \(\mu_{obs}\), to constrain cosmological models. The Chi-squared function for SNe Ia is given by \[\chi^{2}_{SNe\;Ia}=\sum_{i=1}^{1048}\Delta\mu^{T}C^{-1}\Delta\mu, \tag{48}\] where \(\Delta\mu=\mu_{obs}-\mu_{th}\). Here, \(\mu_{obs}\) is the observational distance modulus of SNe Ia and is given as \(\mu_{obs}=m_{B}-\mathcal{M}\), where \(m_{B}\) is the observed peak magnitude in the rest frame of the \(B\) band, \(\mathcal{M}\) is the absolute B-band magnitude of a fiducial SNe Ia, which is taken as \(-19.38\). The theoretical distance modulus \(\mu_{th}\) is defined by \[\mu_{th}(z,\mathbf{p})=5\log_{10}\left(\frac{D_{L}(z_{hel},z_{cmb})}{1Mpc} \right)+25, \tag{49}\] where \(\mathbf{p}\) is the parameter space and \(D_{L}\) is the luminosity distance, which is given as \(D_{L}(z_{hel},z_{cmb})=(1+z_{hel})r(z_{cmb})\). Here, \(r(z_{cmb})\) is given by \[r(z)=cH_{0}^{-1}\int_{0}^{z}\frac{dz^{\prime}}{E(z^{\prime},\mathbf{p})}, \tag{50}\] where \(c\) is the speed of light, \(E(z)\equiv H(z)/H_{0}\) is the dimensionless Hubble parameter, \(z_{hel}\) and \(z_{cmb}\) are heliocentric and CMB frame redshifts, respectively. Here, \(C\) is the total covariance matrix which takes the form \(C=D_{stat}+C_{sys}\), where the diagonal matrix \(D_{stat}\) and covariant matrix \(C_{sys}\) denote the statistical uncertainties and the systematic uncertainties. ### BAO measurements In this work, we have used six points of BAO datasets from several surveys, which includes the Six Degree Field Galaxy Survey (6dFGS),the Sloan Digital Sky Survey (SDSS), and the LOWZ samples of the Baryon Oscillation Spectroscopic Survey(BOSS)[114; 115; 116]. The dilation scale \(D_{v}(z)\) introduced in [117] is given by \[D_{v}(z)=\left(\frac{d_{A}^{2}(z)z}{H(z)}\right)^{1/3} \tag{51}\] Here, \(d_{A}(z)\) is the comoving angular diameter distance and is defined as \[d_{A}(z)=\int_{0}^{z}\frac{dy}{H(y)^{\prime}}, \tag{52}\] Now, the corresponding Chi-squared function for the BAO analysis is given by \[\chi^{2}_{BAO}=A^{T}C^{-1}_{BAO}A, \tag{53}\] where \(A\) depend on the considered survey and \(C^{-1}_{BAO}\) is the inverse of the covariance matrix [116]. ### \(H(z)\) data The cosmic chronometer (CC) data, which is determined by using the most massive and passively evolving galaxies based on the 'galaxy differential age' method, are model independent (see, Ref.[118] for detail). In our analysis, we use 32 CC data points of the Hubble parameter measured by differential age technique [118] between the redshift range \(0.07\leq z\leq 1.965\). The Chi-squared function for \(H(z)\) is given by \[\chi^{2}_{H(z)}=\sum_{i=1}^{32}\frac{[H(z_{i},\mathbf{p})-H_{obs}(z_{i})]^{2}} {\sigma^{2}_{H(z_{i})}} \tag{54}\] where \(H(z_{i},\mathbf{p})\) represents the theoretical values of Hubble parameter with model parameters, \(H_{obs}(z_{i})\) is the observed values of Hubble parameter and \(\sigma_{i}\) represents the standard deviation measurement uncertainty in \(H_{obs}(z_{i})\). ### \(f(z)\sigma_{8}(z)\) data In Section IV, we have mainly discussed the background evolution of the growth perturbations and defined the weighted linear growth rate by Eq. (47). To make more complete discussion on viscous \(\Lambda(t)\) model in perturbation evolution, we focus on an observable quantity of \(f(z)\sigma_{8}(z)\). We use 18 data points of "Gold -17" compilation of robust and independent measurements of weighted linear growth \(f(z)\sigma_{8}(z)\) obtained by various galaxy surveys as compiled in Table III of Ref. [119]. In order to compare the observational data set with that of the predicted by our model, we define the Chi-square function as \[\chi^{2}_{(fe\mathfrak{s})}=\sum_{i=1}^{18}\frac{[f\sigma_{8}^{the}(z_{i},\mathbf{ p})-f\sigma_{8}^{obs}(z_{i})]^{2}}{\sigma_{f\sigma_{8}(z_{i})}^{2}}, \tag{55}\] where \(f\sigma_{8}^{the}(z_{i},\mathbf{p})\) is the theoretical value computed by Eq.(47) and \(f\sigma_{8}^{obs}(z_{i})\) is the observed data [119]. Using the observational data as discussed above, we use the Markov Chain Monte Carlo (MCMC) method by employing EMCEE python package [120] to explore the parameter spaces of viscous model with decaying vacuum density as discussed in part B of Sect.III by utilizing different combinations of data sets. The combinations are as follows: * BASE: The combination of two datasets \(SNe\;Ia+BAO\) is termed as "BASE", whose the joint \(\chi^{2}\) function is defined as \(\chi^{2}_{tot}=\chi^{2}_{SNe\;Ia}+\chi^{2}_{BAO}\). * \(+CC\): We combine \(CC\) data to the BASE, where \(\chi^{2}_{tot}=\chi^{2}_{SNe\;Ia}+\chi^{2}_{BAO}+\chi^{2}_{H(z)}\) * \(+f\sigma_{8}(z)\): The BASE data is complemented with \(CC\) and \(f\sigma_{8}\), where \(\chi^{2}_{tot}=\chi^{2}_{SNe\;Ia}+\chi^{2}_{BAO}+\chi^{2}_{H(z)}+\chi^{2}_{f \sigma_{8}}\). We consider the \(\Lambda\)CDM model as a reference model and its parameters are also constrained with the above sets of data. ## VI Results and Discussion In this section, we present the main results obtained through the observational data on the viscous \(\Lambda(t)\) model of the form \(\zeta=\zeta_{1}H\) with \(\Lambda=c_{0}+3\nu H^{2}\) (Refers to part B of Sect.III). We also present the cosmological observation for \(\Lambda\)CDM model using the three combination of datasets. The viscous \(\Lambda(t)\) model has 4 free parameter spaces \(\{H_{0},\Omega_{\Lambda},\zeta_{1},\nu\}\), where as \(\Lambda\)CDM has 2 free parameters \(\{H_{0},\Omega_{\Lambda}\}\). We calculate the best-fit values by minimizing the combination of \(\chi^{2}\) function for above defined data sets. We also provide the fitting values of the \(\Lambda\)CDM for comparison with the viscous \(\Lambda(t)\) model. The constraints of the statistical study are presented in Tables 1 and 2. Figures 1-3 show the \(1\sigma(68.3\%)\) and \(2\sigma(95.4\%)\) confidence level (CL) contours with marginalized likelihood distributions for the cosmological parameters of \(\Lambda\)CDM and viscous \(\Lambda(t)\) models considering combination of different datasets, respectively. It is observed from Tables 1 and 2 that the constraints on the parameter spaces of \(\Lambda\)CDM and viscous with \(\Lambda(t)\) are nearly the same. Figure 1: Two-dimensional confidence contours of the \(H_{0}-\Omega_{\Lambda}\) and one dimensional posterior distributions of \(H_{0}\), \(\Omega_{\Lambda}\) for the \(\Lambda\)CDM and viscous \(\Lambda(t)\) models using “\(BASE\)” data. The green and black dot on the contour represents the best fit value of \(\Lambda\)CDM and viscous \(\Lambda(t)\) models respectively. Figure 2: Two-dimensional confidence contours of the \(H_{0}-\Omega_{\Lambda}\) and one dimensional posterior distributions of \(H_{0}\), \(\Omega_{\Lambda}\) for the \(\Lambda\)CDM and viscous \(\Lambda(t)\) models using “\(+\)\(CC\)” data. The green and black dot on the contour represents the best fit value of \(\Lambda\)CDM and viscous \(\Lambda(t)\) models respectively. Figure 3: Two-dimensional confidence contours of \(H_{0}-\Omega_{\Lambda}\), \(\Omega_{\Lambda}-S_{8}\) and \(H_{0}-S_{8}\) and one-dimensional posterior distributions of \(H_{0}\), \(\Omega_{\Lambda}\) and \(S_{8}\) for the \(\Lambda\)CDM and viscous \(\Lambda(t)\) models using “\(+\)\(f\sigma\)s” data. The green and black dot on the contour represents the best fit value of \(\Lambda\)CDM and viscous \(\Lambda(t)\) models respectively. the \(\Lambda\)CDM model with the same datasets (cf.Table 1). Using the best-fit values of parameters in Eq. (30), the evolutions of the effective EoS parameter \(w_{eff}\) are shown in Figs.10-12. We conclude that for large redshifts, \(w_{eff}\) has small negative value \(w_{eff}>-1/3\) and in future the model asymptotically approaches to \(w_{eff}=-1\). The trajectory of \(w_{eff}\) for \(BASE\) and \(+CC\) datasets coincides with the evolution of \(\Lambda\)CDM model. However, it slightly varies with the best-fit values obtained through \(+f\sigma_{8}(z)\) data points. It can be observed that the viscous \(\Lambda\)(t) model behaves like a quintessence in early time and cosmological constant in late-time. The present values of \(w_{eff}\) are found to be \(-0.689^{+0.017}_{-0.013}\), \(-0.690^{+0.015}_{-0.013}\) and \(-0.677^{+0.014}_{-0.011}\) with \(BASE\), \(+CC\) and \(+f\sigma_{8}\) datasets respectively, which are very close to the current value of \(\Lambda\)CDM model as presented in Table 1. Figure 4: The redshift evolution of the deceleration parameter for viscous \(\Lambda(t)\) using “\(BASE\)” dataset. The evolution of deceleration parameter in the standard \(\Lambda\)CDM model is also shown as the dashed curve. A dot denotes the current value of \(q\) (hence \(q_{0}\)). Figure 5: The redshift evolution of the deceleration parameter for viscous \(\Lambda(t)\) using “\(+CC\)” dataset. The evolution of deceleration parameter in the standard \(\Lambda\)CDM model is also shown as the dashed curve. A dot denotes the current value of \(q\) (hence \(q_{0}\)). Figure 6: The redshift evolution of the deceleration parameter for viscous \(\Lambda(t)\) using “\(+f\sigma_{8}\)” dataset. The evolution of deceleration parameter in the standard \(\Lambda\)CDM model is also shown as the dashed curve. A dot denotes the current value of \(q\) (hence \(q_{0}\)). Figure 7: Best fits using “\(BASE\)” data set over \(H(z)\) data for viscous \(\Lambda(t)\) (green dot-dashed line) and \(\Lambda\)CDM (black solid line) are shown. The grey points with uncertainty bars correspond to the \(32\)\(CC\) sample. From Tables 1 and 2, let us discuss the present value \(H_{0}\) of Hubble parameter in case of viscous \(\Lambda\)(t) and \(\Lambda\)CDM models. The viscous \(\Lambda(t)\) model gives \(H_{0}=68.843^{+0.274}_{-0.238}\) km/s/Mpc with \(BASE\) data, the \(+CC\) data gives \(H_{0}=68.913^{+0.262}_{-0.261}\) km/s/Mpc and, finally, the \(+f\sigma_{8}\) renders the present value: \(H_{0}=68.684^{+0.259}_{-0.241}\) km/s/Mpc. Recently, the local measurement \(H_{0}=73.04\pm 1.04\) km/s/Mpc from Riess et al.[121] exhibits a strong tension with the Planck 2018 release \(H_{0}=67.4\pm 0.5\) km/s/Mpc [7] at the \(4.89\sigma\) confidence level. The residual tensions of our fitting results with respect to the latest local measurement \(H_{0}=73.04\pm 1.04\) km/s/Mpc [121] are \(3.92\sigma\), \(3.85\sigma\) and \(4.07\sigma\) respectively. Let us focus on \(\sigma_{8}\) and \(S_{8}\) which play very relevant role in structure formation. The best-fit values of these parameters for \(\Lambda\)CDM and viscous \(\Lambda(t)\) models using \(BASE+CC+f\sigma_{8}\) data are reported in Tables 1 and 2, respectively. We can read off \(\sigma_{8}=0.794^{+0.014}_{-0.015}\) for \(\Lambda\)CDM model (cf.Table 1), whereas the viscous \(\Lambda(t)\) model prediction is \(\sigma_{8}=0.790^{+0.008}_{-0.010}\) (cf. Table 2). This is a very good result, which can be rephrased in terms of the fitting value of the related LSS observable \(S_{8}=\sigma_{8}\sqrt{(1-\Omega_{\Lambda})/0.3}\) quoted in the Tables 1 and 2: \(S_{8}=0.811\pm 0.022\) for \(\Lambda\)CDM and \(S_{8}=0.822\pm 0.019\) for viscous \(\Lambda\)(t) model. The values of \(\sigma_{8}\) and \(S_{8}\) for viscous \(\Lambda(t)\) model is compatible for \(1\sigma\) confidence level with \(\Lambda\)CDM. Our result predicts that the tensions in \(\sigma_{8}\) and \(S_{8}\) are reduced to \(0.23\sigma\) and \(-0.38\sigma\), respectively. The behavior of \(f(z)\sigma_{8}(z)\) as a function of redshift is plotted in Fig.14. We can see that the evolution of \(f\sigma_{8}\) for both viscous \(\Lambda(t)\) and \(\Lambda\)CDM models are consistent with the observational data points. Table 3 presents the \(\chi^{2}\) and reduced \(\chi^{2}\) of \(\Lambda\)CDM and viscous \(\Lambda(t)\) models, respectively for the used datasets. To compute reduced \(\chi^{2}\), denoted as \(\chi^{2}_{red}\), we use \(\chi^{2}_{red}\) = \(\chi^{2}_{min}/(N-d)\), where \(N\) is the total number of data points and \(d\) is the total number of fitted parameters, which differs for the various models. It should be noted that when a model is fitted to data, a value of \(\chi^{2}_{red}<1\) is regarded as the best fit, whereas a value of \(\chi^{2}_{red}>1\) is regarded as a poor fit. In our observations, we have used \(N=1054\) data points for BASE (SNIa and BAO), \(N=1086\) data points for BASE+CC and \(N=1104\) data points for BASE+CC+\(f\sigma_{8}\). The number of free parameters of viscous \(\Lambda(t)\) is \(d=4\) where as for \(\Lambda\)CDM it is \(d=2\). Using these information, the \(\chi^{2}_{red}\) for both the models are given in Table 3. It can be observed that the value of \(\chi^{2}_{red}\) is less than unity with every data sets for both the models which show that the both models are in a very good fit with these observational data sets and the observed data are consistent with the considered models. Using the three combination of data sets, we are also interested in investigating the cosmographical aspects of the models, such as jerk parameter, which is defined as \[j=\frac{\overset{\cdot\cdot}{a(t)}}{aH^{3}}=q(2q+1)+(1+z)\frac{dq}{dz}. \tag{56}\] The jerk parameter which is a dimensionless third derivative of the scale factor, can provide us the simplest approach to search for departures from the \(\Lambda\)CDM model. It is noted that for \(\Lambda\)CDM model, \(j=1\)(const.) always. Thus, any deviation from \(j=1\) would favor a non-\(\Lambda\)CDM model. In contrast to deceleration parameter which has negative values indicating accelerating Universe, the positive values of the jerk parameter show an accelerating rate of expansion. In Fig. 13, the evolutions of jerk parameter are shown for \(\Lambda\)CDM and viscous \(\Lambda(t)\) models using the best-fit values of parameters obtained from three combination of datasets. It is obvious from Figure 8: Best fits using “\(+CC\)” data set over \(H(z)\) data for viscous \(\Lambda(t)\) (blue dot-dashed line) and \(\Lambda\)CDM (black solid line) are shown. The grey points with uncertainty bars correspond to the \(32\)\(CC\) sample. Figure 9: Best fits using “\(+f\sigma\)s” data set over \(H(z)\) data for viscous \(\Lambda(t)\) (red dot-dashed line) and \(\Lambda\)CDM (black solid line) are shown. The grey points with uncertainty bars correspond to the \(32\)\(CC\) sample. the figure that this parameter remains positive and less than unity in past, and eventually tends to unity in late-time. Thus, the jerk parameter deviates in early time but it attains the same value as \(\Lambda\)CDM in late-time. ## VII Selection Criterion There are two widely used selection criterion, namely, Akaike information criteria (AIC) and Bayesian information criteria (BIC) to measure the goodness of the fitted models compared to a base model. AIC is an essentially selection criteria based on the information theory where as the BIC is based on the bayesian evidence valid for large sample size. In cosmology, AIC and BIC are used to discriminate cosmological models based on the penalization associated with the number of free parameters of the considered models. The AIC parameter is defined through the relation [122] \[AIC=\chi_{min}^{2}+\frac{2dN}{N-d-1}, \tag{57}\] where \(d\) is the free parameters in a model, \(N\) the observational data points and \(\chi_{min}^{2}\) is the minimum value of the \(\chi^{2}\) function. AIC penalizes according to the number of free parameters of that model. To discriminate the proposed model \(m_{1}\) with the reference model \(m_{2}\), we calculate \(\Delta AIC_{m_{1}m_{2}}=AIC_{m_{1}}-AIC_{m_{2}}\), which can be explained as "evidence in favor" of model \(m_{1}\) as compared to model \(m_{2}\). In this paper, we consider \(\Lambda\)CDM model as a reference model (\(m_{2}\)). The value \(0\leq\Delta AIC_{m_{1}m_{2}}<2\) refers to "strong evidence in favor" of the model \(m_{1}\), for \(2\leq\Delta AIC_{m_{1}m_{2}}\leq 4\), there is "average strong evidence in favor" of the model \(m_{1}\), for \(4<\Delta AIC_{m_{1}m_{2}}\leq 7\), there is "little evidence in favor" of the model \(m_{1}\), and for \(\Delta AIC_{m_{1}m_{2}}>8\) there is "no evidence in favor" of the model \(m_{1}\). On the other hand, the Bayesian information criteria (BIC) can be defined as [123] \[BIC=\chi_{min}^{2}+d\ln N. \tag{58}\] Similar to \(\Delta AIC\), \(\Delta BIC_{m_{1}m_{2}}=BIC_{m_{1}}-BIC_{m_{2}}\) gives as "evidence against" the model \(m_{1}\) with reference to model \(m_{2}\). For \(0\leq\Delta BIC_{m_{1}m_{2}}<2\) gives "not enough evidence" of the model \(m_{1}\), for \(2\leq\Delta BIC_{m_{1}m_{2}}<6\), we have "evidence against" the model \(m_{1}\), and for \(6\leq\Delta BIC_{m_{1}m_{2}}<10\), there is "strong evidence against" the model \(m_{1}\). Finally, if \(\Delta BIC>10\) then there is strong evidence against the model and it is probably not the best model. The values of \(\Delta\)AIC and \(\Delta\)BIC with respect to \(\Lambda\)CDM as the referring model are shown in Table 3. According to our results, \(\Delta AIC(\Delta BIC)=1.026(10.977)\) with respect to the \(BASE\) dataset, \(\Delta AIC(\Delta BIC)=0.959(10.913)\) with \(+CC\) dataset, and for \(+f\sigma_{8}\) dataset, we have \(\Delta AIC(\Delta BIC)=-7.492(2.416)\). Thus, under AIC there is "strong evidence in favor" of the viscous \(\Lambda(t)\) model where as under BIC, there is "strong evidence against" the viscous \(\Lambda(t)\) model with \(BASE\) and \(+CC\) dataset and "positive evidence against" the model with \(+f\sigma_{8}\) dataset. ## VIII Conclusion In this work, we have studied the analytical and observational consequences of cosmology inspired by dissipative phenomena in fluids according to Eckart theory with varying VED scenarios for spatially flat homogeneous and isotropic FLRW geometry. We have assumed the interaction of two components: viscous dark matter and vacuum energy density satisfying the conservation equation (8). To solve the field equations (9), we have considered various functional forms of bulk viscous coefficient, in particular (1) \(\zeta=\zeta_{0}\); (2) \(\zeta=\zeta_{1}H\); (3) \(\zeta=\zeta_{0}+\zeta_{1}H\); and \(\zeta=\zeta_{0}+\zeta_{1}H+\zeta_{2}(\tilde{a}/aH)\). These viscous models have different theoretical motivations, but not all of them are able to constraint observationally. We have constrained only the viscous model \(\zeta=\zeta_{1}H\) with varying VED. The motivation of the present work is to study the dynamics and evolutions of a wide class of viscous models with time varying vacuum energy density in the light of the most recent observational data. Current observations do not rule out the possibility of varying DE. It has been observed that the dynamical \(\Lambda\) could be useful to solve the coincidence problem. Although the functional form of \(\Lambda(t)\) is still unknown, a quantum field theory (QFT) approach has been proposed within the context of the renormalization group (RG). Thus, we have used the varying VED of the functional form \(\Lambda=c_{0}+3\nu H^{2}\) in all of viscous models presented in this paper. The motivation for this functional form stems from the general covariance of the effective action in QFT in curved geometry. It has been shown that the \(\Lambda(t)\) provides either a particle production processes or increasing the mass of the viscous dark matter particles. In what follows, we summarize the main results of the four different viscous \(\Lambda(t)\) models. In case of the viscous \(\Lambda(t)\) models with \(\zeta=\zeta_{0}\), \(\zeta=\zeta_{0}+\zeta_{1}H\) and \(\zeta=\zeta_{0}+\zeta_{1}H+\zeta_{2}(\tilde{a}/aH)\), we have found the analytical solution of the cosmological parameters, like \(H(t)\), \(a(t)\), \(q(t)\) and \(w_{eff}(t)\). It has been observed that these viscous \(\Lambda(t)\) models expand exponentially with cosmic time \(t\). The models show the transition from decelerated phase to accelerated phase in late time. It is important to note that it is \(H(z)\) that is actually the observable quantity in cosmology which can be examined with current observations. However, assuming suitable choice of model parameters, the evolutions and dynamics of these models can be interpreted. In case of viscous \(\Lambda(t)\) model with \(\zeta=\zeta_{1}H\), we have obtained the various cosmological parameters. We have performed a joint likelihood analysis in order to put the constrain on the main parameters by using the three different combinations of observational data: BASE, \(+CC\) and \(+f\sigma_{8}\). To discriminate our model with the concordance \(\Lambda\)CDM model, we have also performed the statistical analysis for \(\Lambda\)CDM by using the same observational datasets. Our finding shows that this viscous \(\Lambda(t)\) model can accommodate a late time accelerated expansion. It has been observed that we can improve significantly the performance of the model by using \(BASE+CC+f\sigma_{8}\). From observational consistency points of view, we have examined the evolution of the viscous \(\Lambda(t)\) model on Hubble parameter, deceleration parameter and equation of state parameter by using the best-fit values of parameters. It has been observed that the model depicts transition from an early decelerated phase to late-time accelerated phase and the transition takes place at \(z_{tr}=0.664^{+0.031}_{-0.042}\) with \(BASE\) data, \(z_{tr}=0.665^{+0.031}_{-0.037}\) Figure 11: Effective EoS parameter as a function of redshift \(z\) for viscous \(\Lambda(t)\) using “\(+CC\)” dataset. The evolution of EoS parameter in the standard \(\Lambda\)CDM model is also represented as the dashed curve. A dot denotes the present value of the EoS parameter. Figure 12: Effective EoS parameter as a function of redshift \(z\) for viscous \(\Lambda(t)\) using “\(+f\sigma_{8}\)” dataset. The evolution of EoS parameter in the standard \(\Lambda\)CDM model is also represented as the dashed curve. A dot denotes the present value of the EoS parameter. Figure 10: Effective EoS parameter as a function of redshift \(z\) for viscous \(\Lambda(t)\) using “\(BASE\)” dataset. The evolution of EoS parameter in the standard \(\Lambda\)CDM model is also represented as the dashed curve. A dot denotes the present value of the EoS parameter. Figure 13: Jerk parameter \(j(z)\) with redshift \(z\) using best-fit values of parameters for viscous \(\Lambda(t)\) model. The horizontal line represents the \(\Lambda\)CDM model. with \(+CC\) data and \(z_{tr}=0.626^{+0.028}_{-0.037}\) with \(+f\sigma_{8}\) data. The present viscous \(\Lambda(t)\) model has \(q_{0}=-0.533^{+0.025}_{-0.020}\), \(q_{0}=-0.535^{+0.023}_{-0.020}\) and \(q_{0}=-0.516^{+0.022}_{-0.017}\) respectively. Thus, both \(z_{tr}\) and \(q_{0}\) values are in good agreement with that of \(\Lambda\)CDM model. The ages of the Universe obtained for this model with each dataset are very much compatible with the \(\Lambda\)CDM model. The proposed model has small negative value of EoS parameter for large redshifts and asymptotically approaches to cosmological constant for small redshifts. Thus, the viscous \(\Lambda(t)\) model behaves like quintessence in early time and cosmological constant in late-time. The residual tensions of our fitting results with respect to the latest local measurement \(H_{0}=73.04\pm 1.04\) km/s/Mpc [121] are \(3.92\sigma\), \(3.85\sigma\) and \(4.07\sigma\), respectively. In Ref. [124], the authors found \(H_{0}=69.13\pm 2.34\) km/s/Mpc assuming the \(\Lambda\)CDM. Such result almost coincides with \(H_{0}\) that we obtained in Tables 1 and 2 for \(\Lambda\)CDM and viscous \(\Lambda(t)\) models. We have explored the \(\sigma_{8}\) and \(S_{8}\) parameters using the combined datasets of \(BASE+CC+f\sigma_{8}\). The constraints on \(\sigma_{8}\) and \(S_{8}\) from this combined analysis are \(\sigma_{8}=0.790^{+0.008}_{-0.010}\) and \(S_{8}=0.822^{+0.019}_{-0.019}\), respectively which are very close to the values of \(\Lambda\)CDM. The tension of our fitting results in \(\sigma_{8}\) and \(S_{8}\) for viscous \(\Lambda(t)\) model with respect to respective \(\sigma_{8}\) and \(S_{8}\) of \(\Lambda\)CDM are \(0.23\sigma\) and \(-0.38\sigma\), respectively. The evolution of \(f\sigma_{8}\) as displayed in Fig.14 shows that the behaviour of \(f\sigma_{8}\) is consistent with the observational data points. It has been noticed that the best-fit results are consistent in the vicinity of Planck data [7]. It has been observed that the value of \(\chi^{2}_{red}\) is less than unity with every data sets which show that the model is in a very good fit with these observational data sets and the observed data are consistent with the considered model. The jerk parameter remains positive and less than unity in past, and eventually tends to unity in late-time. Thus, the jerk parameter deviates in early time but it attains the same value as \(\Lambda\)CDM in late-time. To discriminate the viscous \(\Lambda(t)\) with the \(\Lambda\)CDM, we have examined the selection criterion, namely, AIC and BIC. According to the selection criteria \(\Delta\)AIC, we have found that the viscous \(\Lambda(t)\) model is "positively favored" over the \(\Lambda\)CDM model for \(BASE\), \(+CC\) and \(+f\sigma_{8}\) datasets. Similarly, with respect to \(\Delta\)BIC our model has a "very strong evidence against" the model for \(BASE\) and \(+CC\) datasets whereas when we add \(+f\sigma_{8}\) dataset, there is "no significant evidence against" the model. As a concluding remark we must point out that the viscous models with decaying VED may be preferred as potential models to examine the dark energy models beyond the concordance cosmological constant. The viscous effects with decaying VED can drive an accelerated expansion of the Universe. Thus, a viable cosmology can be constructed with viscous fluids and decaying VED. With new and more accurate observations, and with more detailed analyses, it would be possible to conclusively answer the compatibility of viscous model with dynamical vacuum energy. Figure 14: Theoretical curves for the \(f(z)\sigma_{8}(z)\) corresponding to \(\Lambda\)CDM and viscous \(\Lambda(t)\) model along with some of the data points employed in our analysis. To generate this plot we have used the best-fit values of the cosmological parameters listed in Tables 1 and 2 for “\(+f\sigma_{8}\)” data. \begin{table} \begin{tabular}{l|c|c|c|c|c|c} Values & \multicolumn{2}{c|}{BASE} & \multicolumn{2}{c|}{\(+\)CC} & \multicolumn{2}{c}{\(+f\sigma_{8}\)} \\ \cline{2-7} & \(\Lambda\)CDM & viscous \(\Lambda\)(t) & \(\Lambda\)CDM & viscous \(\Lambda\)(t) & \(\Lambda\)CDM & viscous \(\Lambda\)(t) \\ \hline \(\chi^{2}\) & 518.017 & 515.074 & 525.457 & 522.390 & 842.630 & 831.112 \\ \(d\) & 2 & 4 & 2 & 4 & 2 & 4 \\ \(N\) & 1054 & 1054 & 1086 & 1086 & 1104 & 1104 \\ \(\chi^{2}_{red}\) & 0.492 & 0.498 & 0.484 & 0.481 & 0.764 & 0.755 \\ \(AIC\) & 522.028 & 523.055 & 529.468 & 530.427 & 846.641 & 839.112 \\ \(BIC\) & 531.938 & 542.915 & 539.438 & 550.351 & 856.643 & 859.139 \\ \(\Delta\)AIC & \(-\) & 1.026 & \(-\) & 0.959 & \(-\) & \(-7.492\) \\ \(\Delta\)BIC & \(-\) & 10.977 & \(-\) & 10.913 & \(-\) & 2.496 \\ \end{tabular} \end{table} Table 3: Values of Chi-squared, reduced Chi-squared, AIC and BIC of \(\Lambda\)CDM and viscous \(\Lambda(t)\) models. The \(\Lambda\)CDM model is considered as reference model to calculate the \(\Delta\)AIC and \(\Delta\)BIC. ## Acknowledgments One of the author, VK would like to thank Delhi Technological University, India for providing Research Fellowship to carry out this work.
2301.01134
Ring That Bell: A Corpus and Method for Multimodal Metaphor Detection in Videos
We present the first openly available multimodal metaphor annotated corpus. The corpus consists of videos including audio and subtitles that have been annotated by experts. Furthermore, we present a method for detecting metaphors in the new dataset based on the textual content of the videos. The method achieves a high F1-score (62\%) for metaphorical labels. We also experiment with other modalities and multimodal methods; however, these methods did not out-perform the text-based model. In our error analysis, we do identify that there are cases where video could help in disambiguating metaphors, however, the visual cues are too subtle for our model to capture. The data is available on Zenodo.
Khalid Alnajjar, Mika Hämäläinen, Shuo Zhang
2022-12-15T17:11:35Z
http://arxiv.org/abs/2301.01134v1
# Ring That Bell: A Corpus and Method for Multimodal Metaphor Detection in Videos ###### Abstract We present the first openly available multimodal metaphor annotated corpus. The corpus consists of videos including audio and subtitles that have been annotated by experts. Furthermore, we present a method for detecting metaphors in the new dataset based on the textual content of the videos. The method achieves a high F1-score (62%) for metaphorical labels. We also experiment with other modalities and multimodal methods; however, these methods did not out-perform the text-based model. In our error analysis, we do identify that there are cases where video could help in disambiguating metaphors, however, the visual cues are too subtle for our model to capture. The data is available on Zenodo. ## 1 Introduction Figurative language is a challenging topic for computational modeling as the meaning of a figurative expression is non-compositional and typically very context dependent (see Roberts and Kreuz 1994). Metaphor is one of the most important figures of language; it is constantly used in every day language (Steen et al., 2010) to draw comparisons or to express something difficult and foreign in more familiar terms. Metaphors can be conventional (Traugott, 1985) and they are often found in idioms, but at the same time metaphors are used to create something new (see Kantokorpi et al. 1990). Given its ubiquitous presence, understanding metaphors is integral in achieving true natural language understanding (NLU) in the real world. Without their successful interpretation, our models are bound to make mistakes whenever anything is expressed in an indirect or creative fashion. Metaphors are often very contextual and their successful detection and interpretation requires a wide range of contextual cues that would be captured in audio (e.g., prosody) and video (e.g., gestures and actions). Therefore, we believe a multimodal dataset is a great contribution to metaphor research within and outside of the field of NLP. Two important parts of a metaphor are a tenor and a vehicle (see Richards 1936). For example, in the metaphor _life is a journey_, _life_ is the tenor and _journey_ is the vehicle. How metaphors essentially operate is that a vehicle is used to give some of its attributes to the tenor. In the case above, _journeys_ are long and full of adventure, which means that these properties are attributed to _life_ in an indirect fashion. The meaning of a metaphor is never literal nor compositional, but rather calls for interpretation on the level of pragmatics (see Rosales Sequeiros 2016). Meanwhile, multimodality is becoming increasingly important for many tasks (see Castellucci et al. 2020; Mogadala et al. 2020; Declerk et al. 2020). We believe the availability of multimodal datasets for a variety of NLP tasks is lacking, and we hope to contribute to the community with our multimodal metaphor dataset. In this paper, we present the first fully open expert annotated multimodal dataset for metaphor detection1. In addition, we experiment with unimodal and multimodal methods for metaphor detection. Our results indicate that the text-based model achieved the best performance. We discuss the results of our experiments and conduct an extensive error analysis to shed light on what was learned successfully by the model and its shortcomings. Footnote 1: [https://doi.org/10.5281/zenodo.7217991](https://doi.org/10.5281/zenodo.7217991) Using CC BY licensed videos in our corpus has been the primary design principle of our data collection so that we can release our corpus without any restrictions in its entirety. This, we believe, is more useful for research purposes than a corpus consisting of short video clips to compile with copyright laws such as the fair use law in the US. Related Work Metaphors have, thus far, been computationally detected using only text. In this section, we describe some of the recent approaches for textual metaphor detection, the corpora used to achieve that and some of the multimodal research conducted on NLP tasks other than metaphor detection. There are several takes on metaphor interpretation Xiao et al. (2016); Rai et al. (2019); Bar et al. (2020) and generation Hamalainen (2018); Terai and Sugyo (2019); Zheng et al. (2019), but we do not describe them in detail as interpretation is a very different problem. There are two corpora currently used for metaphor detection, VU Amsterdam (VUA) Metaphor Corpus Steen et al. (2010) and Corpus of Non-Native Written English Annotated for Metaphor Beigman Klebanov et al. (2018). Unlike our corpus, both of these datasets contain textual modality only. For textual metaphor detection, Gao et al. (2018) has used a bi-directional LSTM (long short-term memory) based model with ELMo embeddings. Similarly, Liu et al. (2020) have used a bi-LSTM model with BERT and XLNet for the same task. Not unlike the previous approaches, Dankers et al. (2020) has also applied bi-LSTM models comparing ELMo and GloVe embeddings to BERT embeddings with global and hierarchical attention models. Traditional machine learning methods, Logistic Regression, Linear SVC (Support Vector Classification) and Random Forest Classifier, have been used recently with feature engineering to detect metaphors Wan et al. (2020). In DeepMet, proposed by Su et al. (2020), a siamese neural network have been utilized, where textual RoBERTa Liu et al. (2019) embeddings are computed from the context, the token in question and its part-of-speech and fine-grained part-of-speech. DeepMet was the best performing solution for detecting textual metaphors in the VUA dataset, based on a recent shared task Leong et al. (2020). There are several recent works on multimodal detection of a variety of linguistic phenomena. For example, SVMs (Support Vector Machines) with word embeddings and feature extraction have been used for multimodal sarcasm detection Castro et al. (2019); Alnajjar and Hamalainen (2021). Mittal et al. (2020) uses GloVe embeddings, features extracted from audio and facial recognition system output to predict emotion in a multimodal dataset. These multimodal features are fused using a memory fusion network (MFN) Zadeh et al. (2018). Similarly, Li et al. (2021) detect emotion in a multimodal dataset by modeling the problem from the point of view of the quantum theory. While the field has seen increasing research on multimodal NLP Tsai et al. (2019); Mai et al. (2020); Sahu and Vechtomova (2021), no data or model has been proposed for multimodal metaphor detection. ## 3 Our Metaphor Corpus In this section, we present our video, audio and textual corpus of manually annotated metaphorical language. Our selection of the video clips includes only CC-BY licensed videos on YouTube that have human authored closed captions in English. The content of the videos presents mainly real people talking, which rules out animations and video game streams. The availability of human authored closed captioning is important as it speeds up our annotation time and provides us with subtitles that are already aligned with video and audio. The CC-BY license was an important selection criterion because it makes it possible for us to release the dataset openly. We used the filters provided by YouTube to limit our search to videos that were marked as CC-BY and had closed captioning. However, the YouTube filter does not distinguish between automatically generated closed captioning and a human authored one. Fortunately, it is relatively easy to tell these two apart from each other. Automated closed captioning tends to appear one word at a time, whereas human authored closed captioning is visualized more like traditional subtitles. These criteria greatly reduced the number of eligible videos to include in our corpus. Apart from these criteria, we also filtered videos with sensitive and offensive languages. No further restrictions have been explicitly placed on the genres or types of videos, as we do not want to introduce biases for which types of contents are more likely to contain metaphors. Therefore, the availability of the metaphors naturally occurring in the corpus is the result of the ubiquity of the metaphor in everyday language use. All Youtube queries were conducted in incognito mode to avoid biased YouTube suggestions based on our viewing habits. Figure 1 shows real examples from our corpus where video can be useful in detecting metaphors. On the left, the woman wearing a gray shirt is talking about _sprinkling keywords_ and showing a sprinkling gesture. On the right, the woman wearing the wine red shirt says _ring that bell_ and shows a bell ringing gesture. Our corpus consists of 27 YouTube videos with a total duration of 3 hours, 53 minutes and 47 seconds of video. For comparison, a recently released multimodal dataset for sarcasm detection [1] has the duration of 3 hours, 40 minutes and 47 seconds. The videos belong mostly to a start-up domain and many of them deal with issues of online visibility for a start-up company. This domain was a consequence of our selection criteria for videos. It turns out that YouTube has plenty of high-quality human close-captioned videos released under the CC-BY license that relate to this particular domain. Our corpus provides linguistics researchers with the ability to study the use of metaphor in a multimodal setting, something that has gained attention in their field of science as well [14]. This can, indeed, foster a wider interdisciplinary collaboration leading to a deeper understanding of the phenomenon. ### Annotation Two expert annotators went through the video files and annotated metaphors by surrounding them with \(v\) tags for vehicles and \(t\) tags for tenors. The use of experts is motivated by the fact that previous research has found that non-expert annotators struggle with metaphors [1]. The annotators followed a simple procedure in annotating the data: * Is the meaning literal? * If the meaning of the word is abstract, is it a dictionary meaning? * Does the potential metaphor express pragmatic insincerity? * If the answer to all of the questions is no, annotate it as a metaphor. In other words, if the meaning of a word or a phrase is not literal, it is annotated as a metaphor. However, just the mere fact of a word being used in an abstract way is not enough to mark it as metaphorical. For example, in the sentence _it is tied to revenue_, "tied" is not tagged as a metaphor just because it is used in a more abstract sense than the typical concrete sense of tying one's shoes, for example. If the abstract meaning of a word appears in a dictionary, the word is not considered metaphorical. However, conventional metaphors that consist of multiple words, and are thus idioms, are tagged as metaphors. We do not make a distinction between metaphors and similes. Pragmatic insincerity [see] is a phenomenon related to sarcasm as one of its preconditions [see]. There is a certain overlap between metaphors and sarcastic expressions in the sense that both use words in their non-literal meaning. In order to ensure that we do not mix these two notions with each other, it is important to avoid annotating pragmatically insincere expressions as metaphorical. Table 1 shows an example of annotations. The annotations were done directly in the subtitles The utterances are time stamped and aligned with the video. In the table, tenors are indicated with \(<\)\(t\)\(>\) and vehicles with \(<\)\(v\)\(>\). For deictic tenors, an \(r\) attribute is provided to resolve the deixis by indicating the actual tenor that has appeared earlier in the conversation. In the examples, _game_ is used metaphorically to talk about marketing, _quick fix_ is \begin{table} \begin{tabular}{|l|} \hline **sentence** \\ \hline that you can use to really up your \(<\)\(v\)\(>\)game\(<\)/\(v\)\(>\) \\ \hline because while a \(<\)\(t\)\(>\)quick fix\(<\)/\(t\)\(>\) can be \(<\)\(v\)\(>\)appetizing\(<\)/\(v\)\(>\) and appealing \\ \hline \(<\)t r=“domain name”\(>\)That\(<\)/\(c\)’s \(<\)\(v\)\(>\)the street address\(<\)/\(v\)\(>\) for your website \\ \hline you’re ready to \(<\)\(v\)\(>\)give it a shot\(<\)/\(v\)\(>\) \\ \hline \end{tabular} \end{table} Table 1: Example of the annotations for the metaphor detection corpus. Figure 1: Metaphors made visible in the video through gestures. called _appetizing_ as though it was something edible and _domain name_ is contrasted to a physical _street address_ by direct comparison. _Give it a shot_ is a conventional metaphor. All in all, after multiple annotation iterations, the dataset consists of 304 vehicles and 67 tenors. This totals to 371 metaphorical expressions. They vary in length: the shortest tenor is one word, such as _it_, while the longest tenor is several words _the discovery of those five noble gases to illuminate like that_. The same goes for vehicles where their length varies form one word such as _dive_ to multiple words: _the history of the internet itself_. On a token level, we have 672 vehicle tokens and 113 tenor tokens, so altogether 785 metaphorical tokens. In total, 6% of the expressions in the corpus are metaphorical. While this percentage might appear low, it is natural and more representative of the real usage of metaphors in typical conversations which makes this corpus suitable for building metaphor detection models applicable for real-world scenarios. Around 55% of the vehicles are conventional metaphors and 45% are novel metaphors. However, it is fairly common that same words appear in the corpus in a metaphorical and non-metaphorical sense. In our corpus, there are two videos that deal with actual cooking, in which many food-related metaphors appear non-metaphorically, such as _sprinkle those in_, said metaphorically about keywords and _a little sprinkle_, said non-metaphorically about sugar. Another example is the use of _house_ non-metaphorically as in _come pick it up at my house_ and metaphorically as in _hink of hosting as your house_, where a metaphorical connection is drawn between _hosting_ and a _house_. ### Data preparation As YouTube serves files in several different formats such as _webm_, _mkv_ and _mp4_ the first step is to use FFmpeg2 to convert all videos into mp4 format. We also use the same tool to clip the video files into sentence-length clips based on the time stamps in the subtitles and extract their audio into wav files. This process yielded 6,565 video and audio clips that are aligned with text. Footnote 2: [https://ffmpeg.org/](https://ffmpeg.org/) We split the datset randomly so that 70% of sentences that contain metaphors and 70% of sentences that don't contain any metaphors are used for training, 15 % of both types of sentences for validation and 15% of both for testing. This way we ensure that both metaphorical and non-metaphorical sentences are divided proportionally with the same ratios. These splits are used for all the models. ## 4 Metaphor Detection We experiment with uni- and multi-modal models for metaphor detection. In this section, we describe the preprocessing steps applied and the experimental setups conducted. ### Preprocessing For each modality, we make use of the latest advances in neural network models to capture important features that have achieved state-of-the-art results in various NLP tasks. As metaphor detection has been conducted solely based on text, we follow the DeepMet approach by Su et al. (2020) and process the entire textual content using spaCy Honnibal et al. (2020) to tokenize it and acquire Universal Dependencies style syntactic trees Nivre et al. (2020) and Penn Treebank parts-of-speech tags Santorini (1990). Similarly to the original approach, all of our textual models predict metaphors at the token level given the context surrounding it and its POS tags as input. We resample the audio to 16kHz. Audio features are extracted using _Wav2Vec2FeatureExtractor_ provided by the Transoformers Python library Wolf et al. (2020). Video features are obtained by taking equally-distributed 16 frames from a clip and then resize them into 128x171, followed by normalization and center cropping to 112x112. ### Textual model We train two text-only models, both follow the architecture and approach of DeepMet where we obtain textual embeddings using RoBERTa Liu et al. (2019) and feed them into two transformer encoding layers which are then combined by applying global average pooling and concatenation. A dense fully-connected layer takes in the combined output of both encoders and predicts whether the token is metaphorical (c.f., Su et al. 2020 for more details). In our first textual model, we train the model using our corpus, whereas in the second one we train it using VUA corpus (with a learning rate of 0.00001, akin the original paper) and later fine-tune it using our corpus. ### Audio model We extend and fine-tune Facebook's pretrained multilingual XLSR-Wav2Vec2 large model (Baevski et al., 2020). The model is trained on Multilingual LibriSpeech (Pratap et al., 2020), CommonVoice (Ardila et al., 2020) and Babel (Roach et al., 1996) for speech recognition. We employ this model to encode speech into vector representations from raw audio. We replace the classification layer of the original model with a dense fully-connected layer that produces two outputs, one for each label. Unlike the textual model, here we classify whether the entire spoken expression contains a metaphor or not (i.e., not on a word level). ### Video model For our video unimodal model, we incorporate a pretrained model for human action detection. The model is based on the 18 layer deep R(2+1)D network (Tran et al., 2018) and it is trained on the Kinetics-400 (Zisserman et al., 2017) dataset. The intuition behind using this model is that it was able to detect actions (e.g., playing organ), gestures (e.g., pointing) and movements (e.g., waving). Realizing such information is crucial in understanding the context, and would provide further cues for detecting metaphors. Similar to the audio model, we substitute the original classification layer with a fully connected layer and fine-tune the pretrained model to predict whether a scene is metaphorical or not. ### Multimodal metaphor detection We test out three multimodal metaphor detection models; 1) text and audio, 2) text and video and 3) text, audio and video. The textual model is the fine-tuned model using the VUA corpus and our textual corpus. In all of the models, the final classification layer of their sub-models are removed. Unimodal models are combined by concatenating the weights of their last layer, which are then fed to a classification layer. ### Common configuration All of the models described above share common configurations, unless we explicitly indicate otherwise. Prior to the last classification layer of all of our mono- and multimodal models, we introduce a dropout layer (Srivastava et al., 2014) (with a probability of 20%) to accelerate training, and reduce internal covariate shift and overfitting. We use the cross entropy loss function along with Adam optimizer (Kingma and Ba, 2014; Loshchilov and Hutter, 2019) to update the weights and train the models. All the fine-tuned models are trained with a learning rate of 0.0001 and for 3 full epochs. ## 5 Results In this section, we follow the evaluation metrics commonly used for the metaphor detection task by reporting the precision, recall and F1 scores for the metaphorical label. Regarding the textual models, we report three sets of results, which are for the models trained on: 1) VUA corpus, 2) our corpus and 3) both the VUA and our corpus. All the models predict metaphoricity on the token level. To ensure that our implementation of the DeepMet approach is correct, we tested the first model on the VUA test dataset of the metaphor detection shared task and achieved an F1-score of 0.68 and 0.73 on all POS and verb subsets of the data, respectively. These results are relatively close to the results reported by the authors. Table 2 shows the classification results of all three models on the test set. The test set contained 90 metaphorical tokens and 6,961 non-metaphorical tokens. The results indicate that the textual model trained solely on the VUA dataset performed poorly on our test set. In comparison, training the model using our metaphor corpus only resulted in a great increase of correct predictions. Nonetheless, combining both corpora by fine-tuning the first model with our corpus produced the winning model, which managed to spot 76% of the metaphorical tokens correctly. We believe that the huge differences between the first and second textual models, despite the larger size of VUA's training dataset, are due to the differences in domains. The VUA corpus contains academic texts, conversation, fiction, and news texts, whereas our corpus is dominated by conversations on the web and start-ups. It is evident that by exposing the model to general domains (i.e., VUA's corpus) and, thereafter, concentrating it on the start-up domain, the model was able to identify the highest number of metaphorical usages. Results from the other models (unimodal or multimodal) that involving audio and video showed that adding these modalities actually did not help improving the model - rather, they are detrimental to the model performance on metaphor detection. We extend two possible explanations for this failure. First, it is possible that because the visual and audio cues of metaphor are subtle, these models failed to learn from such a small amount of annotated data. Second, it is unclear that the specific models we are using for audio and video modalities encode the information relevant for the metaphor detection task. For instance, whereas it is impossible to completely disentangle what exactly the Wav2Vec model is encoding, we can conjecture that it encodes information about phoneme identity considering it is optimized for the speech recognition task. Therefore, it may not be entirely surprising that the Wav2Vec encoding is not useful for the metaphor detection task because it is adding redundant or irrelevant information to the model. It is our future work (or the future work for the community who utilizes this dataset) to refine our understanding of the multimodal encoding for the metaphor detection task (for instance, employing a model that more directly encodes information about speech prosody from the audio). ### Error analysis When looking at the results of the text only model, we can see that the model identifies metaphors correctly as metaphors more often than not. There are some metaphorical tokens in metaphors consisting of multiple words that get classified wrong, for example, in _You could think of hosting as your house_, the tenor _hosting_ and the determinant _your_ of the metaphorical word _house_ are not identified as metaphorical, while _house_ is correctly identified. Another example is the conventional metaphor _toot their own horn_, where all other words except for _own_ are correctly identified as metaphorical. There are also a fewer number of cases where all words get identified wrongly as non-metaphorical, for example, the model did not predict any metaphorical tokens in _It's where you live_, while in reality _it_ is the tenor and _where you live_ is the vehicle. Also, individual tenors where the vehicle comes later get often not recognized such as in _Yes, malware you could think of like_, where _malware_ is the tenor for a vehicle that appears later in the dialog. When the tenor and the vehicle co-exist nearby, the model can get all metaphorical tokens right such as in _It's kinda like real estate right?_ where both the tenor _it_ and the vehicle _real estate_ are correctly identified. Also many tenorless expressions are fully recognized correctly as metaphorical, such as _Spreadin' the love_. There were plenty of cases (61) where the model predicted a metaphor tag for a token while there was no metaphor. Curiously, prepositions were often tagged metaphorical, such as _to_ in _ring that bell to see these episodes first_. The actual metaphorical part _ring that bell_ ends before the preposition _to_ that has a non-metaphorical meaning _in order to_. We can also see that the model was indeed fooled by cooking terms that were used both metaphorically and non-metaphorically. In _Yeah a little sprinkle_, both \(a\) and _sprinkle_ were classified as metaphors, while the context was about sprinkling sugar. Another similar case was _there's five noble gases that illuminate_, where _noble gases_ and _illuminate_ were erroneously classified to be metaphorical. This was clearly due to the tenor in the corpus: _the discovery of those five noble gases to illuminate like that_ contained similar words. It is evident that the model relies on word similarities more than reaching to a higher pragmatic representation of the phenomenon, however, this is not an unexpected behavior from a machine learning model. There are also cases where the model detects a metaphor, that could theoretically be a metaphor, but is not because of the way it was used in the corpus. For example, the model predicts _Give it a go_ as metaphorical in the expression _button, "Give it a go."_, where people are talking about a button with a particular text rather than using the expression metaphorically. Another such an example is _flying_ in _(money flying)_. Such an expression might be used metaphorically, but in this case this was a note for the hearing impaired as money was actually flying on the video. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Trained on** & **Precision** & **Recall** & **F1-score** \\ \hline VUA & 0.04 & 0.33 & 0.07 \\ \hline Ours & 0.38 & 0.63 & 0.47 \\ \hline VUA + Ours & **0.53** & **0.76** & **0.62** \\ \hline \end{tabular} \end{table} Table 2: Classification results of the textual monomodal models on the test set of our corpus, for the metaphorical label. Discussion and Conclusions In this work, we have only focused on metaphor as a strictly linguistic phenomenon and we have built a multimodal dataset where these linguistic metaphors have been tagged in terms of tenors and vehicles. However, it is apparent that metaphor is a phenomenon that occurs on a higher level of our cognitive capacities than mere language. There are several cases in our corpus, where we can evidence the existence of a metaphor but it is never expressed verbally. For example in Figure 2, _money flying_ cannot be a metaphor when inspected purely from the point of view of language and its relation to the video when money is actually flying in the scene. However, it is a metaphor on a higher level in the sense that the entire scene where money was flying was to indicate someone becoming rich. In other words, stating a fact that is happening is not metaphorical if the fact is literally taking place, however the fact itself might be metaphorical. At the same time, as evidenced by our error analysis, there are certainly cases where video modality could help in disambiguating whether something is said metaphorically or not. For instance, talking about _sprinkling_ in a kitchen environment (see Figure 3) is a very strong sign that the word is potentially non-metaphorical. Integrating these weak cues into a multimodal system is, however, not an easy task given that the current methods for video processing are limited in their coverage. Therefore, in the future, it would be useful to annotate metaphors also in the other modalities. Money flying can be a visual metaphor, and so can a sound effect, and they can exist independently from each other in different modalities. Perhaps the reason why our multimodal attempts failed was that metaphor can be independent of the other modalities. Producing such a dataset where these modal specific metaphors are also annotated for video and audio is definitely a huge undertaking that requires research in its own right. It is clear that our model can detect metaphors correctly, but also the mistakes it makes highlight that despite using a large RoBERTa model, the meaning representation the model has cannot reach to such a nuanced level as to confidently detect metaphors. Metaphor is a figurative device that cannot be explained by semantics, but rather requires pragmatic inspection. It is not clear based on our research and other contemporary approaches whether the current word or sentence embedding models are sufficient to navigate in the depths of pragmatics and subjective interpretation in any other way than learning some irrelevant co-occurring phenomena from a biased corpus. At the same time there is no such thing as an unbiased corpus, either, given that bias (and mostly heuristics causing it) is a fundamental part of our cognition as human beings. In this paper, we have presented a new open and multimodal dataset for metaphor detection. Because we have focused strictly on CC-BY licensed videos, we can make the entire dataset available on Zenodo. In our current work, we have not taken the context widely into account when predicting metaphoricity, but rather resorted to a very local context. The fact that the videos can be published in full length makes it possible for any future work to explore different ways of including contextual cues freely. ## Acknowledgments This work was partially financed by the Society of Swedish Literature in Finland with funding from Enhancing Conversational AI with Computational Creativity, and by the Ella and Georg Ehrnrooth Foundation for Modelling Conversational Artificial Intelligence with Intent and Creativity. Figure 3: _Sprinkling_ used in a kitchen in reference to sugar. Figure 2: Money actually flying on the video.
2301.13530
Domain-Generalizable Multiple-Domain Clustering
This work generalizes the problem of unsupervised domain generalization to the case in which no labeled samples are available (completely unsupervised). We are given unlabeled samples from multiple source domains, and we aim to learn a shared predictor that assigns examples to semantically related clusters. Evaluation is done by predicting cluster assignments in previously unseen domains. Towards this goal, we propose a two-stage training framework: (1) self-supervised pre-training for extracting domain invariant semantic features. (2) multi-head cluster prediction with pseudo labels, which rely on both the feature space and cluster head prediction, further leveraging a novel prediction-based label smoothing scheme. We demonstrate empirically that our model is more accurate than baselines that require fine-tuning using samples from the target domain or some level of supervision. Our code is available at https://github.com/AmitRozner/domain-generalizable-multiple-domain-clustering.
Amit Rozner, Barak Battash, Lior Wolf, Ofir Lindenbaum
2023-01-31T10:24:50Z
http://arxiv.org/abs/2301.13530v2
# Domain-Generalizable Multiple-Domain Clustering ###### Abstract Accurately clustering high-dimensional measurements is vital for adequately analyzing scientific data. Deep learning machinery has remarkably improved clustering capabilities in recent years due to its ability to extract meaningful representations. In this work, we are given unlabeled samples from multiple source domains, and we aim to learn a shared classifier that assigns the examples to various clusters. Evaluation is done by using the classifier for predicting cluster assignments in a previously unseen domain. This setting generalizes the problem of unsupervised domain generalization to the case in which no supervised learning samples are given (completely unsupervised). Towards this goal, we present an end-to-end model and evaluate its capabilities on several multi-domain image datasets. Specifically, we demonstrate that our model is more accurate than schemes that require fine-tuning using samples from the target domain or some level of supervision. Machine Learning, ICML ## 1 Introduction Clustering is a fundamental machine-learning task that categorizes unlabeled vectors into homogeneous groups. Clustering high dimensional measurements is a difficult task, and classical methods such as \(K\)-means (Lloyd, 1982), Spectral Clustering (Ng et al., 2001), or density-based methods (Ester et al., 1996) often fail to group semantically related examples. Several recent works have developed deep learning-based frameworks to overcome this limitation by extracting meaningful features and automatically clustering images. However, extracting informative features from real-world images is challenging since, often, the images are collected from multiple domains with diverse properties. The problem becomes even more challenging if we want to generalize the cluster assignments to unseen target domains that may deviate significantly from the source distribution. Multi-domain clustering methods simultaneously group samples in analog domains such that a sample in one domain is associated with semantically related samples in another (Menapace et al., 2020). Such methods or others that rely on labeled observations benefit from the shared knowledge between the domains (Cheng et al., 2013; Menapace et al., 2020; Zhang et al., 2021; Harary et al., 2022). For example, if one domain contains grayscale images and the other domain contains low-resolution images of the same objects, one can learn to separate between the images of the different objects in a way that utilizes the fact that it is easier to separate between some objects based on color and between other objects based on fine details. We do not assume knowledge about corresponding images between domains, only that all target classes appear in each domain. In this work, we study the ability to learn a classifier \(f\) that, given a sample, regardless of its domain, can categorize it to the matching cluster. The requirement is that \(f\) is evaluated (one sample at a time) in an unseen target domain for which no sample is seen during training. This task combines the problem of multiple domain clustering with the aspect of domain generalization. Both of these are of high value, and their combination is extremely powerful: multiple source clustering is core to scientific discovery, e.g., in biology, where every organism can be a domain, and the application to new domains is the ultimate test. Formally, the problem statement is as follows. We are given a dataset \(S\) with \(N\) samples from \(d\) different domains, where each domain has \(N_{i},\forall i\in\{1,...,d\}\) samples, and we know which sample belongs to each domain. A classifier \(f\) is trained to map every sample in \(S\) to one of \(K\) groups. For evaluation purposes only, a set of labeled samples is then used, and the \(K\) groups are assigned to a set of ground truth labels using a best-matching method (The Hungarian method (Kuhn, 1955)) on an unseen single-domain dataset \(T\). Specifically, \(f\) is applied, without further training, to each sample of \(T\) separately, and the assigned labels \(f\) outputs are compared to the ground truth labels of the samples in \(T\). Figure 1 illustrates our setting. To solve this challenge, we propose a two-stage learning scheme: (i) training a backbone in a self-supervised domain invariant fashion. This step leads to a feature extractor that focuses on semantic features and attenuates the influ ence of style. (ii) training multiple clustering heads using pseudo labels generated using the semantic information. Our method was tested on various domain generalization datasets and has shown superior results over several baselines. Our framework is at par or better than methods that rely on some supervision or adaptation to the target domain. ## 2 Background Self-supervised learning (SSL)is an emerging field in machine learning (ML) that enables extracting useful representation from unlabelled data. SSL relies on creating a pretext task with fictitious labels that can be generated "for free". If the pretext task is correlated with a downstream task of interest, SSL techniques can be compelling for extracting features useful for many applications. A seminal work (He et al., 2020) encodes a query image and matches it to a dictionary of image keys using a contrastive loss. To stabilize the embedding of keys, they use a momentum encoder to update the representation of keys at a slow pace. By using queues, the architecture can scale to large datasets. Another important work was presented by Chen et al. (2020). They utilized pairs of data augmentations, creating feature representations that are then projected and trained to maximize agreement. After training, the projection heads are discarded, and the resulting features generate state-of-the-art self-supervised results. Bootstrap your own latent (BOYL) (Grill et al., 2020), utilized two neural networks for their training. Each model is updated at a different rate making the interaction beneficial for the learning process. Further progress was made by Chen et al. (2020), improving contrastive SSL efficiency. Deep clusteringaims to exploit the strength of neural networks as feature extractors to identify a representation that better preserves cluster structures in unlabelled data. In (Xie et al., 2016), the authors use an autoencoder to extract features while learning soft cluster assignments by minimizing the KL-divergence between the latent space and a prior distribution. Chang et al. (2017) propose an algorithm that iterates between feature extraction and performing a pairwise classification to predict whether pairs of images belong to the same cluster. DeepCluster (Caron et al., 2018), extends this idea by iterating between applying \(k\)-means to a deep feature mapping and training a NN to predict the cluster assignments. Ji et al. (2018) apply content-preserving image transformations to create pairs of samples with shared information. Then, they train a NN to maximize the mutual information between the image pairs in a cluster probabilistic data representation. Recently, Semantic Pseudo-Labeling for Image Clustering (SPICE) (Niu et al., 2022) obtained state-of-the-art results on several clustering benchmarks. SPICE is an iterative deep clustering method that relies on self-supervision and pseudo-labeling. First, self-supervision is performed using a contrastive loss to learn informative features. Then, they create prototype pseudo labeling to avoid miss annotations common to pseudo labeling techniques. Unsupervised domain generalization (UDG)was recently presented by Zhang et al. (2021); Harary et al. (2022). UDG is related to our goal but requires some amount of supervision. Specifically, UDG involves: unsupervised training on a set of source domains, then fitting a classifier using a small number of labeled images. Finally, evaluating the model on a set of target domains unseen during training. Toward this goal Zhang et al. (2021) suggests a method to ignore domain-related features by selecting negative samples from a queue based on their similarity to the positive sample domain. Recently, Harary et al. (2021) presented BrAD, in which self-supervised pre-training on multiple source domains is performed in a shared sketch-like domain. To fine-tune a classifier, they used various amounts of source domain-labeled samples. In contrast to these works, in our research, no class labels for either source or target domains are used for training the model. Unsupervised Clustering under Domain Shift (UCDS) was presented by (Menapace et al., 2020). The goal is to cluster samples from multiple source domains and then adapt the model to the target domain using multiple unlabeled samples from that domain. Menapace et al. (2020) suggested optimizing an information-theoretic loss coupled with domain-alignment layers. This setting is similar to ours; however, we aim to design a model that can predict cluster assignments on multiple source domains and generalize to new unseen domains without any further tuning or adaptation. To the best of our knowledge, this is the first work that solves this task without using any labels or samples from the target domain. This is a big advantage since, in real-world settings; we often don't know that a domain shift occurred, or we can not access a pool of samples from the test domain. ## 3 Method High-level overviewOur method consists of two phases; in the first, a feature extraction model \(f_{1}\) is trained in a self-supervised fashion on data from multiple source domains. This phase bridges the gap between different domains by extracting semantically related features \(u_{i}\in\mathbb{R}^{e}\), where \(e\) is the embedding dimension. The second phase focuses on training a clustering head \(f_{2}\) while the weights of \(f_{1}\) are frozen. Then our cluster predictions are based on \(f=f_{2}\circ f_{1}\). We leverage a basic common domain (BCD) to mitigate the gap between several domains. The BCD is designed to maintain the sample's content while removing domain related information. Conceptually, a sketch-like domain can be considered a suitable BCD for image data with varying color and texture domains. Transforming an image to a sketch domain keeps high-level features such as object identity while decreasing the bias that features such as colors and image style can induce. ### Pre-training Our pre-training phase is inspired by MoCoV2 (Chen et al., 2020). This self-supervised method applies a contrastive loss to learn a representation invariant to a set of strong augmentations defined in the appendix A.1. Each training example \(x_{i}\) is strongly augmented, then compared using a contrastive loss to a positive \(x_{i}^{+}\) (another strongly augmented) example and a negative example \(x_{j}^{-}\), where \(i\neq j\). The negative example \(x_{j}^{-}\) is a strong augmentation of \(x_{j}\), which is stored in a queue as explained in section 3.1. The pre-training process aims to generate valuable features for the downstream tasks. We adapt MoCoV2 for the problem of domain-generalizable multiple-domain clustering by presenting the four components described below and illustrated in figure 2. 1. _Adversarial domain classifier:_ Using adversarial training for domain adaptation was presented in the seminal work of Ganin and Lempitsky (2015) and subsequently used by many (Liu et al., 2022; Zhang, 2019; Wilson and Cook, 2018; Zhao et al., 2020). Here, we leverage this idea and introduce a domain classifier to the features extracted from our backbone \(f_{1}\) to learn features invariant to the domain identity. The weights of the domain classifier \(f_{d}\) are denoted as \(\theta_{d}\), and the gradient of \(f_{d}\) is: \(\frac{\partial E_{f_{d}}}{\partial\theta_{d}}\). We train \(f_{1}\) in an adversarial fashion by updating its weights using a constant \(\lambda_{d}\geq 0\) as in \(-\lambda_{d}\frac{\partial E_{f_{d}}}{\partial\theta_{1}}\), which is known as gradient reversal layer. 2. _Domain balancing:_ Since we are given samples from several source domains, quite often, the population of samples from each domain is different. Such domain imbalance can cause poor generalization since the model may learn only from the highly populated domain. We use a simple approach to mitigate this phenomenon by choosing a domain uniformly and then picking a sample from the chosen domain. 3. _Multi queue:_ Multiple-domain self-supervised feature extraction is prone to focus on domain-related features resulting in poor performance. Specifically, a single negative sample queue has shown inferior results compared to multiple domain-specific queues (Harary et al., 2022). Following this work, we employ a domain-specific queue \(Q=[Q_{1},Q_{2},...,Q_{d}]\) each with size \(N_{q}\) for each of the \(d\) domains. The negative examples \(u_{i}^{-}\) are drawn from the same domain as the positive sample, which makes the discrimination more challenging and encourages the model to focus on the content rather than the domain. For efficiency, positive samples \(u_{i}^{+}\) are stored in the relevant negative domain queue for later use. 4. _Style transfer augmentation:_ We want our backbone to learn semantic features that are invariant to the "style" of the input image. To encourage this property, we use a style transfer neural network model (Huang and Belongie, 2017). We perform a style transfer augmentation \(\mathcal{ST}(x_{i})\) with probability \(p_{st}\) as a replacement for the strong augmentation used in MoCoV2. The style of \(x_{i}\) is replaced by the style of an image from a different domain \(x_{j},s.t.\ d_{x_{i}}\neq d_{x_{j}}\). By augmenting with varying styles of domains, we further enhance the ability of our backbone to ignore domain specific features. Algorithmic overviewGiven an input batch of images \(x=[x_{1},x_{2},..,x_{B}]^{T}\in\mathbb{R}^{B\times C\times W\times H}\), each image \(x_{i}\) is transformed twice. First, it is augmented using a strong augmentation \(x_{i}^{s}=\mathcal{S}(x_{i})\). Second, the image is transformed using the following: \[x_{i}^{st}=\begin{cases}\mathcal{ST}(x_{i}),&w.p\ \ \ p_{st},\\ \mathcal{S}(x_{i})\ \,&w.p\ \ 1-p_{st}.\end{cases} \tag{1}\] Where \(\mathcal{ST}(x_{i})\) replaces the style of \(x_{i}\) with another domain's style with probability \(p_{st}\). Otherwise, a strong augmentation \(\mathcal{S}(x_{i})\) is applied to generate the positive Figure 1: Problem statement - given unlabeled samples from multiple source domains, our goal is to learn a deep clustering model that can accurately assign samples to their cluster in each source domain. We aim to design a model that, at inference, can predict cluster assignments on new unseen domains without any labels or model tuning. sample. Both \(x_{i}^{s}\) and \(x_{i}^{st}\) are passed through the backbone to create the embeddings \(u_{i}^{s}=\mathcal{P}(f_{1}(x_{i}^{s},\theta_{1}),\theta_{p})\), \(u_{i}^{st}=\mathcal{P}(f_{1}(x_{i}^{st},\theta_{1}^{\prime}),\theta_{p}^{\prime})\), where \(\mathcal{P}\) and \(\theta_{p}\) are the projection head and its weights respectively. \(\theta_{1}^{\prime}=\mu\theta_{1}^{\prime}+(1-\mu)\theta_{1}\) and \(\theta_{p}^{\prime}=\mu\theta_{p}^{\prime}+(1-\mu)\theta_{p}\) are the moving average versions of \(\theta_{1}\) and \(\theta_{p}\) respectively. Finally, negative samples \(u_{i}^{-},\forall i\in[0,N_{q}]\) are sampled from a domain queue of the same domain \(d_{x_{i}}\) as \(x_{i}\). The contrastive loss is used: \[\mathcal{L}_{f_{proj}}=-log\frac{exp((u^{s})^{T}u^{st})}{\sum_{i=1}^{N_{q}} exp((u^{s})^{T}u_{i}^{-})+exp((u^{s})^{T}u^{st})}, \tag{2}\] where \(u^{s}=[u_{1}^{s},u_{2}^{s},..,u_{B}^{s}]^{T}\in\mathbb{R}^{Bxe}\), \(u^{st}=[u_{1}^{st},u_{2}^{st},..,u_{B}^{st}]^{T}\in\mathbb{R}^{Bxe}\), are the embeddings for the transformed input batch \(x^{s}\), \(x^{st}\). To further remove domain-specific information, we use an additional domain loss term using cross-entropy (Kullback and Leibler, 1951) loss: \(\mathcal{L}_{f_{d}}=-\mathcal{L}_{ce}(f_{d}(u^{s}),\theta_{d})\). Hence our final loss objective: \[\mathcal{L}_{f_{1}}=\mathcal{L}_{f_{proj}}+\mathcal{L}_{f_{d}}. \tag{3}\] The contrastive and domain-adversarial loss terms complement one another in finding content-related, domain-invariant features. ### Clustering head The pre-training phase provides a solid and robust backbone \(f_{1}\) on which a clustering head \(f_{2}\) can be trained. Then we apply a clustering head on top of the backbone designed to predict the assignments for each sample. The clustering head is trained in a two-step iterative manner. Each iteration begins by assigning pseudo labels based on the clustering head's predictions (logits) \(l(y|x)=f_{2}\circ f_{1}(x)\), and the semantic features (embeddings) \(u=f_{1}(x)\). In the second step, the clustering heads are trained using the pseudo labels with cross-entropy loss. Since biases are much more likely to arise in multi-domain tasks, using both embeddings and logits in domain generalization proves crucial, as demonstrated in the ablation study 4. An illustration of the training process is available in figure 3. Training of the clustering head is initiated by sampling a batch of images \(x\) with size \(B\) from all source domains. Each sample \(x_{i}\) passes through the backbone in three different versions. Based on the original image, and using two transformations: a strong augmentation \(x_{i}^{s}=\mathcal{E}(x_{i})\), and style transfer to our BCD \(x_{i}^{bcd}=\mathcal{C}(x_{i})\). Where \(\mathcal{C}()\) represents a style transfer of the input image \(x_{i}\) to an image with a sketch-like style. \(\mathcal{E}()\) is defined as: \[x_{i}^{st}=\begin{cases}\mathcal{C}(x_{i})&,\ w.p\ \ \ p_{st}p_{bcd},\\ \mathcal{ST}(x_{i})&,\ w.p\ \ \ p_{st}(1-p_{bcd}),\\ \mathcal{S}(x_{i})&,\ w.p\ \ 1-p_{st}.\end{cases} \tag{4}\] The BCD transformed, and the original images are used to define the pseudo labels while the strong augmentations are used to train the clustering head \(f_{2}\). First, features are extracted from the original image: \[u=f_{1}(x,\theta_{1}). \tag{5}\] Then, logits are extracted in the BCD based on: \[l(y|x^{bcd})=f(x^{bcd})=f_{2}\circ f_{1}(x^{bcd}). \tag{6}\] The top \(\gamma:=\frac{B}{2K}\) samples are chosen from \(l(y|x_{i}^{bcd})\) as the set of most confident samples of class \(k\) based on the clustering head's score on samples from the BCD. Thus, the selected samples are denoted as follows: \[\mathcal{M}_{k}=\{u_{i}|i\in argtop\gamma(l(k|x^{bcd})),\forall i\in\{1,...,B \}\}, \tag{7}\] where \(argtop\gamma(l(k|x^{bcd}))\in\mathbb{N}^{\gamma}\) is a vector of indexes that chooses the \(\gamma\) most confident samples, based on their corresponding BCD score \(l(k|x^{bcd})\). \(\mathcal{M}_{k}\) is a set of \(\gamma\) embedding vectors. Using \(M_{k}\), the center of class \(k\) is determined by: \[G_{k}=\frac{1}{\gamma}\sum_{u_{i}\in\mathcal{M}_{k}}u_{i}. \tag{8}\] One can calculate the similarity between each sample and each of the centers: \[sim_{k}=\langle\bar{G}_{k},\bar{u}\rangle. \tag{9}\] Where \(\bar{G}_{k}=\frac{G_{k}}{\|G_{k}\|}\), and \(\bar{u}=\frac{u}{\|u\|}\) are the normalized feature and center vectors, respectively. In plain words, \(sim_{k}\in\mathbb{R}^{B}\) holds the information of how close each sample in the batch is to the center of cluster \(k\). Samples that are closest to the center are used as pseudo labels for cluster \(k\); thus, the set of strongly augmented data samples with pseudo labels is formulated as follows: \[\hat{\mathcal{Z}}_{k}=\{x^{s}[argtop\gamma(sim_{k})],k\} \tag{10}\] Where \(x^{s}[argtop\gamma(sim_{k})]\) means indexing \(x^{s}\) using \(argtop\gamma(sim_{k})\), i.e., choosing the samples that will be used as pseudo-labels for class \(k\) using the similarity in the embedding domain. While \(M_{k}\) are chosen based on the heads predictions, which are in the logits space, \(\hat{\mathcal{Z}}_{k}\) are pseudo labels chosen based on information from both the semantic space Eqs. 8,10 and in the logits space Eq. 7. Using only the logit values to infer the pseudo labels results in poor cluster assignments, as examined in section 4. The entire set of representatives and pseudo-label pairs will be denoted as \(\tilde{\mathcal{Z}}\). In the second phase, the clustering head \(f_{2}\) is trained using \(\hat{\mathcal{Z}}\) to minimize cross-entropy loss using the pseudo labels. A batch of samples and pseudo labels \((x^{s}_{pl},y_{pl})\in\hat{\mathcal{Z}}\), are propagated through the backbone and heads, and the objective can be formulated as: \[\mathcal{L}=\frac{1}{B}\mathcal{L}_{ce}(f_{2}\circ f_{1}(x_{pl}),y_{pl}). \tag{11}\] During this training phase, domain balancing is used as detailed in section 3.1. Multiple clustering headsClustering is inherently unstable, especially when dealing with many classes or high-dimensional datasets. Several authors have proposed using feature selection (Solorio-Fernandez et al., 2020; Shaham et al., 2022; Lindenbaum et al., 2021) to improve clustering capabilities by removing nuisance features in tabular data. We are interested in stabilizing clustering performance on diverse high-dimensional image data. Therefore, we propose training multiple clustering heads simultaneously and selecting a reliable head based on an unsupervised criterion. This allows us to handle many categories and overcome the instability that stems from the clustering heads weights' initialization. For more details about the source of randomization between heads, please see appendix A.2. The number of clustering heads is denoted as \(h\), hence the objective in Eq. 11, \(\mathcal{L}\) can now be formulated as the average of the \(h\) head specific losses: \[\mathcal{L}=\frac{1}{Bh}\sum_{i=1}^{h}\mathcal{L}_{ce}^{i}(f_{2}\circ f_{1}(x _{pl}),y_{pl}). \tag{12}\] Next, we define the diversification of head \(j\) as: \[dv_{j}=\operatorname*{unique}_{B}\operatorname*{argmax}_{k\in K}l_{j}(y|x)/K. \tag{13}\] First, \(argmax\) reduces the prediction \(l_{j}(y|x)\) of the \(j\)-th head to a cluster index. Next, we use the \(unique\) operator to extract the number of clusters that head \(j\) predicts; this process is done using the entire dataset in parallel, i.e., there is no parameter update during this evaluation. Due to high variability in the training procedure between heads, some are better than others; we leverage this variability by keeping only the most diversified heads (MDH). Two MDH are chosen out of \(h\) clustering heads based on higher \(dv_{j}\) values compared to the other heads. The heads with lower \(dv_{j}\) are discarded, and we replace the weights of the non-MDH with a linear combination of the two MDH weights. Mathematically speaking, let us define \(\theta_{2_{i}}\) as the weights of the \(i\)-th head, and let us assume that \(j,k\) are the MDH indices; hence the weights of the non-MDH heads are overridden in the following manner: \[\theta_{2_{i}}=r_{k}\frac{\theta_{2_{k}}}{\|\theta_{2_{k}}\|}+r_{j}\frac{ \theta_{2_{j}}}{\|\theta_{2_{j}}\|},\forall i\neq k,j. \tag{14}\] Where \(r_{k}\sim\mathcal{U}(0,1)\) and \(r_{j}=1-r_{k}\). This removes the influence of non-diverse heads and maintains some degree of variability for the following optimization steps. In cases where there is equality in \(dv_{j}\) between several heads, which results in more than two MDHs, we limit the Figure 2: The proposed pre-training procedure. Each image is transformed using strong augmentations and/or style transfer augmentation. The features \(u^{s}\) (strong) and \(u^{st}\) (style) are extracted using the backbone \(f_{1}\). Then we use a domain head \(f_{d}\) to classify the domain identity of each sample, minimizing the domain loss \(\mathcal{L}_{f_{d}}\); we use gradient reversal to update the backbone to fool the domain head in an adversarial fashion. The contrastive loss \(\mathcal{L}_{f_{proj}}\) is minimized by the projection head’s output of \(u^{s}\), \(u^{st}\), and \(u^{-}\) (negative samples). The losses complement each other in training the backbone. number of MDHs to five. The rationale behind this limitation can be elucidated through the following illustrative scenario, w.l.o.g., assume that the first head does not predict one class, and the other heads do not predict five classes; if the number of MDH kept is not limited, the advantage of the first head is not utilized. Since all heads will perform poorly in the early training phase, MDH selection is initiated after a few epochs. Furthermore, to allow the heads to make gradual learning, the process repeats every \(n\) epochs. ## 4 Experiments Experiments are conducted using three datasets commonly used for evaluation of domain generalization methods. Representative images from several datasets and domains appear in appendix A.3. The **Office31** dataset (Saenko et al., 2010) consists of images collected from three domains: Amazon, Webcam, and DSLR, with \(2817\), \(795\), and \(498\) images, respectively. The dataset includes \(31\) different classes shared across all domains. The samples consist of objects commonly encountered in an office setting. The **PACS** dataset (Li et al., 2017) consists of four domains: Sketch, Cartoon, Photo, and Artpainting with 3929, 2344, 1670, and 2048 images, respectively. It includes seven different classes, which are shared across all domains. The **Officehome** dataset (Venkateswara et al., 2017) contains four domains: Art, Product, Realworld, and Clipart, with 2427, 4439, 4357, and 4365 images, respectively. It includes 65 different classes, which are shared across all domains. The large number of domains, and classes, make the task challenging. In particular, since we aim to cluster the data without access to labeled observations. Existing state-of-the-art results on this data (Menapace et al., 2020) corroborate this claim. Implementation detailsOur work is implemented using PyTorch (Paszke et al., 2019). We use Resnet18 (He et al., 2016) as our backbone for a fair comparison with the results of Menapace et al. (2020); Harary et al. (2022); Wang et al. (2022). The models in the pre-training were trained using SGD with momentum \(0.9\) and weight decay \(1e-4\). We use a batch size of 8 and train the model for 500 epochs. To train the clustering head, we use the same optimizer with batches of size 256 for 100 epochs for Office31 and Officehome datasets and 50 epochs for the PACS dataset. The reason for this difference is the small number of classes in the PACS dataset, which enables the model to converge much faster. To create style transfer augmentations, we use a pre-trained AdaIN model (Huang and Belongie, 2017). The most diversified head selection mechanism initiates at epoch 30 and is repeated every \(n=10\) epochs. For more information on the head selection mechanism, see section 3.2. An important regularization for diversified training is label smoothing (Szegedy et al., 2016). By using pseudo-labels, we assume that there is a high ratio of mislabeled samples; label smoothing helps preventing the model from predicting the training samples too confidently. Empiric evidence of label smoothing importance in our task can be seen in the ablation study. Comparison with other methodsTo evaluate the capabilities of our model, we focus on the following scheme: train the model using \(d\) unlabelled source domains, then evaluate our model on the unseen and unlabelled target domain. We compare our approach to several recently proposed deep learning-based models for multi-domain clustering. When evaluating Office31 and Officehome datasets, we compare with the baselines from (Menapace et al., 2020): popu Figure 3: Clustering head training scheme. The image is passed through the backbone in its original, strongly augmented, and BCD form. The weights of the backbone are frozen and used to produce the features. Representatives are selected from the original image features based on the clustering head’s predictions over the BCD images. The class representatives are used as pseudo labels for the CE loss. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Method & Target fine-tuned & Supervision & \(C,P,S\to A\) & \(A,P,S\to C\) & \(A,C,S\to P\) & \(A,C,P\to S\) & Avg \\ \hline DeepClusterCaron et al. (2018) & ✓ & - & 22.2 & 24.4 & 27.9 & 27.1 & 25.4 \\ IICJi et al. (2018) & ✓ & - & 39.8 & 39.6 & 70.6 & 46.6 & 49.1 \\ IIC-MergeJi et al. (2018) & ✓ & - & 32.2 & 33.2 & 56.4 & 30.4 & 38.1 \\ IIC + DIALJi et al. (2018) & ✓ & - & 30.2 & 30.5 & 50.7 & 30.7 & 35.3 \\ Continuous DA Mancini et al. (2019) & ✓ & - & 35.2 & 34.0 & 44.2 & 42.9 & 39.1 \\ ACIDS Menapace et al. (2020) & ✓ & - & 42.1 & 44.5 & \(64.4\) & **51.1** & 50.5 \\ \(K\)-means Lloyd (1982) & ✓ & - & 17.7 & 18.5 & 21.1 & 22.4 & 19.9 \\ Ours & - & - & **46.7** & **44.7** & **66.8** & 49.2 & **51.9** \\ \hline BrADHarary et al. (2021) & - & 11\% & 33.6 & 43.5 & 61.8 & 36.4 & 43.8 \\ BrAD-KNN Harary et al. (2021) & - & 15\% & 35.5 & 38.1 & 55.0 & 34.1 & 40.7 \\ BrADHarary et al. (2021) & - & 5\% & 41.4 & 50.9 & 65.2 & 50.7 & 52.0 \\ BrAD-KNN Harary et al. (2021) & - & 5\% & 39.1 & 45.4 & 58.7 & 46.1 & 47.3 \\ BrADHarary et al. (2021) & - & 10\% & 44.2 & 50.0 & 72.2 & 55.7 & 55.5 \\ BrAD-KNN Harary et al. (2021) & - & 10\% & 42.0 & 45.3 & 67.2 & 50.0 & 51.1 \\ \hline \hline \end{tabular} \end{table} Table 1: Results on the Office31 dataset (31 classes) upon all three domain combinations, each of the letters \(A,W,D\) represent the domains Amazon, Webcam, and DSLR, respectively. The notation \(X,Y\to Z\), means the model was trained on \(X,Y\) domain and tested on the \(Z\) domain. Target fine-tuned means the method was trained or adapted to the test domain. In \(K\)-means, we first pre-trained on the \(Z\) domain. Target fine-tuned means the method was trained or adapted to the test domain. In \(K\)-means, we first pre-trained the MocoV2 model and trained \(K\)-means on top of its embeddings. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Method & Target fine-tuned & Supervision & \(D,W\to A\) & \(A,W\to D\) & \(A,D\to W\) & Avg \\ \hline DeepClusterCaron et al. (2018) & ✓ & - & 19.6 & 18.7 & 18.9 & 19.1 \\ IICJi et al. (2018) & ✓ & - & 31.9 & 34.0 & 37.0 & 34.3 \\ IIC-MergeJi et al. (2018) & ✓ & - & 29.1 & 36.1 & 33.5 & 32.9 \\ IIC + DIALJi et al. (2018) & ✓ & - & 28.1 & 35.3 & 30.9 & 31.4 \\ Continuous DA Mancini et al. (2019) & ✓ & - & 20.5 & 28.8 & 30.6 & 26.6 \\ ACIDS Menapace et al. (2020) & ✓ & - & **33.4** & 36.1 & 37.5 & 35.6 \\ \(K\)-means Lloyd (1982) & ✓ & - & 14.9 & 24.3 & 20.8 & 29.9 \\ Ours & - & - & 23.1 & **49.2** & **45.2** & **39.2** \\ \hline \hline \end{tabular} \end{table} Table 2: Results on the Officehome dataset (65 classes) upon all four domain combinations, each of the letters \(A,P,R,C\) represent the domains Art, Product, Realworld, and Clipart, respectively. The notation \(W,X,Y\to Z\), means the model was trained on \(W,X,Y\) domain and tested on the \(Z\) domain. Target fine-tuned means the method was trained or adapted to the test domain. In \(K\)-means, we pre-train the MocoV2 model and then train \(K\)-means on top of its embeddings. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Method & Target fine-tuned & Supervision & \(C,P,S\to A\) & \(A,P,S\to C\) & \(A,C,S\to P\) & \(A,C,P\to S\) & Avg \\ \hline DeepClusterCaron et al. (2018) & ✓ & - & 22.2 & 24.4 & 27.9 & 27.1 & 25.4 \\ IIC Ji et al. (2018) & ✓ & - & 39.8 & 39.6 & 70.6 & 46.6 & 49.1 \\ IIC-Merge Ji et al. (2018) & ✓ & - & 32.2 & 33.2 & 56.4 & 30.4 & 38.1 \\ IIC + DIALJi et al. (2018) & ✓ & - & 30.2 & 30.5 & 50.7 & 30.7 & 35.3 \\ Continuous DA Mancini et al. (2019) & ✓ & - & 35.2 & 34.0 & 44.2 & 42.9 & 39.1 \\ ACIDS Menapace et al. (2020) & ✓ & - & 42.1 & 44.5 & \(64.4\) & **51.1** & 50.5 \\ \(K\)-means Lloyd (1982) & ✓ & - & 17.7 & 18.5 & 21.1 & 22.4 & 19.9 \\ Ours & - & - & **46.7** & **44.7** & **66.8** & 49.2 & **51.9** \\ \hline BrADHarary et al. (2021) & - & 1\% & 33.6 & 43.5 & 61.8 & 36.4 & 43.8 \\ BrAD-KNN Harary et al. (2021) & - & 15\% & 35.5 & 38.1 & 55.0 & 34.1 & 40.7 \\ BrADHarary et al. (2021) & - & 5\% & 41.4 & 50.9 & 65.2 & 50.7 & 52.0 \\ BrAD-KNN Harary et al. (2021) & - & 5\% & 39.1 & 45.4 & 58.7 & 46.1 & 47.3 \\ BrAD Harary et al. (2021) & - & 10\% & 44.2 & 50.0 & 72.2 & 55.7 & 55.5 \\ BrAD-KNN Harary et al. (2021) & - & 10\% & 42.0 & 45.3 & 67.2 & 50.0 & 51.1 \\ \hline \hline \end{tabular} \end{table} Table 3: Results on the PACS dataset (7 classes) upon all four domain combinations, each of the letters \(A,P,S,C\) represents the domains: Art painting, Photo, Sketch, and Cartoon, respectively. The notation \(W,X,Y\to Z\), means the model was trained on \(W,X,Y\) domain and tested on the \(Z\) domain. Target fine-tuned means the method was trained or adapted to the test domain. In \(K\)-means, we first pre-trained the MocoV2 model and trained \(K\)-means on top of its embeddings. lar deep clustering papers Invariant Information Clustering for Unsupervised Image Classification and Segmentation (IIC) (Ji et al., 2018), and DeepCluster (Caron et al., 2018). Importantly, they were both trained directly on the target domain before predicting the clusters. Menapace et al. (2020) used two variations of IIC. Specifically, IIC-Merge involves training IIC on all domains, including the target domain; IIC+DIAL: IIC, which contains a domain-specific batch norm layer jointly trained on all domains. Continuous DA: continuous domain adaptation strategy used in (Mancini et al., 2019) using the method presented in (Menapace et al., 2020) denoted as Adaptive Clustering of Images under Domain Shift (ACIDS), i.e., train on \(d\) domains, adapt on the target domain and then test on the target domain. Note that all the former baselines compared with our work were trained on the target domain. We added another baseline, training MoCoV2 on all the source domains and then fitting the \(K\)-means clustering algorithm on the target domain. On PACS dataset, we also compared ourselves to BrAD (Harary et al., 2021) with various amounts of source domain labels. This comparison is very challenging as we do not use any class labels. **Results** Table 1 depicts the results on all three domain combinations of the Office31 dataset. On both DSLR and Webcam as target domains, our method outperforms the current state-of-the-art (SOTA) by a large margin, even without adaptation to the target domain. Our method performance on the Amazon domain is inferior to the current SOTA; we believe this is due to the very limited source domain data. The target fine-tuned method (unsupervised fine-tuning on the target domain) relies on 317% more data for their training scheme. Overall, on average, our method is better by 10.1% than the method that uses the target domain for adaptation and 31.1% over the baseline with the same conditions. Results on the Officehome dataset can be seen in Table 2. This dataset is more challenging than the former and consists of four domains. Our method outperforms the baselines on all four domain combinations and is better on average by 47.1% percent than the previous SOTA. On the PACS dataset (Table 3), our method is compared to both target fine-tuned and limited source domain label settings. Our method outperforms the current SOTA on 3 out of 4 target domains for the target fine-tuned case. On the fourth domain, Sketch, our method achieves slightly lower results than the current target fine-tuned SOTA. This can be explained by a large amount of additional data (65%) from the Sketch domain, exploited by baselines that are fine-tuned on the target domain. Our method performs much better than the baseline on all domains compared to the same setting. When comparing the two variations of BrAD (Harary et al., 2021) with 1% source domain labels, we achieve superior results on all domains. Overall, even when using 10% of source domain label, our method is better than BrAD-KNN (Harary et al., 2021). **Ablation study** We use the PACS dataset to perform an ablation study. The first variant of our model, which we term "Plain pre-training", uses a standard MoCoV2 backbone followed by training the clustering heads using our best setup. We omit the style transfer augmentation in pre-training and clustering heads training in the second ablation. A third ablation, "no domain head," is performed using the full training procedure except for the domain head and its adversarial loss. The fourth ablation on the PACS dataset was done by removing the label smoothing (using 1 as the smoothing value). As indicated by the results presented in Table 4, our model performs better than all of its ablated versions. This suggests that all proposed components contribute to our ability to generalize to unseen domains. In cases where not all clusters were predicted and thus, no clustering accuracy can be calculated (NA). ## 5 Conclusions This paper presents a novel framework for completely unsupervised multi-source domain generalized clustering. To the best of our knowledge, this paper presents, for the first time, a framework where no class labels are used for either source or target domains. Further, no adaptation to the target domain is required, which demonstrates better generalization abilities to unseen domains. Our solution outperforms all existing baselines while being evaluated in a more stringent (and realistic) setting. We consider several future directions, specifically extending our model to other modalities, such as audio and text, and generalizing the clustering task to other important unsupervised learning tasks, such as anomaly detection or feature selection. We believe our idea has great potential to advance the unsupervised multi-domain regime and be applied in other future research. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Method & \(C,P,S\) & \(A,P,S\) & \(A,C,S\) & \(A,C,P\) & Avg \\ & \(\to A\) & \(\to C\) & \(\to P\) & \(\to S\) & \\ \hline Logits only & NA & NA & NA & NA & NA \\ Plain pre-training & 25.0 & 22.7 & 29.8 & NA & NA \\ No style transfer & 40.6 & 37.2 & 50.2 & 45.8 & 43.5 \\ No domain head & 40.2 & 41.5 & 58.6 & 42.8 & 45.8 \\ No smoothing & 46.4 & 44.5 & \(65.8\) & 43.9 & 50.4 \\ Ours & **46.7** & **44.7** & **66.8** & **47.2** & **51.9** \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study on PACS dataset upon all four domain combinations, each of the letters \(A,P,S,C\) represent the domains Art painting, Photo, Sketch, and Cartoon, respectively. The notation \(W,X,Y\to Z\), means the model was trained on \(W,X,Y\) domain and tested on the \(Z\) domain. We denote by NA cases in which some of the classes were not predicted by the model, which makes calculating clustering accuracy unavailable.
2309.07367
The kernel-balanced equation for deep neural networks
Deep neural networks have shown many fruitful applications in this decade. A network can get the generalized function through training with a finite dataset. The degree of generalization is a realization of the proximity scale in the data space. Specifically, the scale is not clear if the dataset is complicated. Here we consider a network for the distribution estimation of the dataset. We show the estimation is unstable and the instability depends on the data density and training duration. We derive the kernel-balanced equation, which gives a short phenomenological description of the solution. The equation tells us the reason for the instability and the mechanism of the scale. The network outputs a local average of the dataset as a prediction and the scale of averaging is determined along the equation. The scale gradually decreases along training and finally results in instability in our case.
Kenichi Nakazato
2023-09-14T01:00:05Z
http://arxiv.org/abs/2309.07367v1
# The kernel-balanced equation for deep neural networks ###### Abstract Deep neural networks have shown many fruitful applications in this decade. A network can get the generalized function through training with a finite dataset. The degree of generalization is a realization of the proximity scale in the data space. Specifically, the scale is not clear if the dataset is complicated. Here we consider a network for the distribution estimation of the dataset. We show the estimation is unstable and the instability depends on the data density and training duration. We derive the kernel-balanced equation, which gives a short phenomenological description of the solution. The equation tells us the reason for the instability and the mechanism of the scale. The network outputs a local average of the dataset as a prediction and the scale of averaging is determined along the equation. The scale gradually decreases along training and finally results in instability in our case. ## I Introduction In the recent decade, data-driven modeling has been empowered with techniques from machine learning. Among them, deep neural networks are the most powerful ones with a large number of applications[1; 2; 3; 4; 5]. Despite the fruitful ones, we do not know much about the mechanism behind them[6; 7]. Specifically, the network can get generalized functions only with the finite dataset. In other words, the network can learn a generalized relation between input and output from the finite one. We can get predictions for unknown inputs but do not fully understand how it works. A neural network, \(y=f(\mathbf{x},\mathbf{w})\), can be defined with an input, \(\mathbf{x}\), and output, \(y\). In training, we adjust the parameters, \(\mathbf{w}\), of the network so that the pre-defined relation, \(y_{i}=f(\mathbf{x}_{i})\), is satisfied as possible. As the pre-defined relation, we give a dataset, \(\{(\mathbf{x}_{i},y_{i})\}\), in advance. We usually update the parameters step by step along the gradient, \(\sum_{i}\nabla L(|y_{i}-f(\mathbf{x}_{i})|)\), of the minimizing function, \(L\). The name _neural network_ stems from the architectures of the function, \(f\), which is inspired by the brain network. In fact, one of the most famous architecture, convolutional neural networks, is originally a model of retinal structure[8; 9; 10; 11]. In this paper, we focus on them. We can derive neural tangent kernels, NTKs, or training responses as a theoretical approach to understanding the generalization mechanism[12; 13; 14; 15; 16]. There, we can describe the training response, \(\Theta(\mathbf{x},\mathbf{x}_{i})\), which shows the influence on the output, \(f(\mathbf{x})\), by a training step, in the following equation, \[f(\mathbf{x},\mathbf{w}+\mathbf{\delta}) \sim f+\frac{\partial f}{\partial\mathbf{w}}\cdot\mathbf{\delta} \tag{1}\] \[= f-\eta\frac{\partial f}{\partial\mathbf{w}}\cdot\frac{\partial f_{i }}{\partial\mathbf{w}}\frac{dL}{df_{i}}\] (2) \[\equiv f-\eta\Theta(\mathbf{x},\mathbf{x}_{i})\frac{dL}{df_{i}}, \tag{3}\] where the model is trained with a single data, \((\mathbf{x}_{i},y_{i})\), with a minimizing target, \(L\), known as loss function and the parameter, \(\eta\), is a learning rate. We usually have a dataset with many data points, \(\{(\mathbf{x}_{i},y_{i})\}\), and train the model with that. In such a case we can write the equation with the sum of training responses, \[\Delta f(\mathbf{x})\propto-\sum_{j}\Theta(\mathbf{x},\mathbf{x}_{j})\frac{dL}{df_{j}}. \tag{4}\] Furthermore, in some cases, we can assume a simple ansatz for the training response with an aging effect. It can be expressed in the following, \[\Theta(\mathbf{x},\mathbf{x}_{i})\propto t^{-\alpha}K(\mathbf{x},\mathbf{x}_{i}), \tag{5}\] where the exponent, \(\alpha\), shows aging decay and the response kernel, \(K\), is a positive decreasing function of the distance, \(|\mathbf{x}-\mathbf{x}_{i}|\). The decreasing scale would be determined by the architecture, but we assume it is a similar one with an exponential curve, in this paper. As a minimizing function, we consider it a more complicated problem than simple supervised training. In standard supervised training, we optimize a network so that the relation, \(y_{i}=f(\mathbf{x}_{i})\), is satisfied for any data point. On the contrary, we want to estimate the distribution of the dataset, \(\{(\mathbf{x}_{i},y_{i})\}\). To do that, we estimate the local mean, \(\mu(\mathbf{x})\), and standard deviation, \(\sigma(\mathbf{x})\), for each input, \(\mathbf{x}\). As a network, we assume the following one, \[\mu(\mathbf{x}) =g\circ f(\mathbf{x}) \tag{6}\] \[\sigma(\mathbf{x}) =h\circ f(\mathbf{x}). \tag{7}\] We have a shared function, \(f\), and specific ones, \(g\) and \(h\), to estimate the mean and standard deviation, respectively. As an optimizing function, we minimize the following, \[L\equiv-\log(\Pi_{i}\frac{1}{\sigma(\mathbf{x}_{i})\sqrt{2\pi}}\exp(-\frac{1}{2} (\frac{y_{i}-\mu(\mathbf{x}_{i})}{\sigma(\mathbf{x}_{i})})^{2})). \tag{8}\] In other words, we want to fit with a Gaussian distribution. This type of problem is known as _uncertainty estimation_ in the field of machine learning[17; 18; 19]. We want to know both maximum likelihood estimation, \(\mu(\mathbf{x})\), and uncertainty of that, \(\sigma(\mathbf{x})\), at the same time. Our problem setting is a much simpler one than various applications. In the case of the standard prediction, we usually minimize the distance, \(|\mu_{i}-y_{i}|\), and the optimal solution is the exact one, \(\mu_{i}=y_{i}\). However, we can have a different mean value because we simultaneously estimate standard deviation, \(\sigma\), in our network. As stated above, training with a data, \((\mathbf{x}_{i},y_{i})\), can influence on another prediction for a data point, \((\mathbf{x}_{j},y_{j})\). In sum, the predicted mean and standard deviation, \(\mu(\mathbf{x}_{i})\) and \(\sigma(\mathbf{x}_{i})\), can be a local estimation of the data distribution. Our main question is how the scale of estimation is determined. We assume two different inputs, \(\mathbf{x}_{a}\), and \(\mathbf{x}_{b}\), should have similar outputs, \(f(\mathbf{x}_{a})\sim f(\mathbf{x}_{b})\), depending on a distance between them, as the nature of prediction. However, we do not know the reality in the cases of deep neural networks. In other words, we want to know how and to what extent the estimation is generalized. As a hypothesis, we can assume some specific scales of the structure of the dataset are reflected in the prediction. In other words, the scale may be the reflection of the semantic structure of the dataset. However, we use a randomly generated dataset rather than specific public ones, like the other studies in the statistical physics[20; 21]. The advantage of this way is that we can get a universal understanding of the nature of the neural networks independent of the dataset instance. In the next section II, we introduce our model. There we describe our network and dataset. In addition, we note on training method as well. We show the training dynamics in the section III. We show the estimation is unstable. Furthermore, we introduce a phenomenological description, kernel-balanced equation, of the solution, which explains the instability. It gives answers, the scale and generalization, for our question. Finally, we show that the equation can be redisplayed with dynamics by training response. ## II Model In general, deep neural networks consist of layers of nonlinear transformations, \(f_{i}\), and an input, \(\mathbf{x}\), and output, \(y\), \[y=f_{n}\circ\cdots\circ f_{0}(\mathbf{x}). \tag{9}\] In each layer, we often use combination of linear convolution, \(c_{jk}\), and non-linear activation function, \(R\), \[\mathbf{h}_{k}=R(b_{k}+\sum_{j}c_{jk}(\mathbf{h}_{j})), \tag{10}\] where j-th channel of input for a layer, \(\mathbf{h}_{j}\), is transformed into k-th channel of output, \(\mathbf{h}_{k}\). Hidden variables, \(\mathbf{h}_{i}\), have often multi-channels for more degree of freedom of the network. We call this as a convolution layer[8; 9; 10; 11]. Here, we focus on a simple convolutional network with \(n\) layers. We assume the input, \(\mathbf{x}\), is a 1-dimensional bit string with the size, \(b\). In other words, we assume the input, \(\mathbf{x}\), is a binary vector in this paper. Each convolution layer can be defined with the number of output channels, \(s_{c}\), and kernel size, \(s_{k}\), of the convolution. In our model, the number of channels, \(s_{c}\), in each convolution layer is fixed. Needless to say, the output of a mid-layer means the input to the next layer. In the final layer, we usually use a linear network, \[y=R(\sum_{i}\mathbf{a}_{i}\cdot\mathbf{h}_{i}+b), \tag{11}\] where the hidden variable, \(\mathbf{h}_{i}\), is the input for the last layer. Non-linear activation, \(R\), is applied after linear transformation with parameters, \(\mathbf{a}_{i}\) and \(b\). In numerical experiments, we use a setting, \(b=8\), \(s_{k}=3\), \(s_{c}=3\), with ELU as an activation function and SGD for the training algorithm with a learning rate, \(\eta=0.1\), without a learning momentum[22; 23; 24; 25]. Since we consider a network with two outputs, \(\mu\), and \(\sigma\), we have two linear networks, \(g\) and \(h\), after the convolution layers, \(f\), in eq. (6) and (7). We call the network as _variance network_, here. In addition, we also consider a simpler one with only one output, \(\mu\), for easier understanding, in eq. (6). We call that as _average network_. As the dataset, we consider a random bit encoding[16]. The dataset, \(\{(\mathbf{x}_{i},y_{i})\}\), consists of pairs of an input, \(\mathbf{x}_{i}\), and output, \(y_{i}\). We randomly generate \(N\) pairs in advance and use them as a training dataset. We train a model with the dataset and a loss function, eq. (8). We can focus on training dynamics itself independent of a specific dataset instance by testing randomly generated ones and analyzing its statistical features. We also consider a simplified model for understanding training dynamics. As stated, training dynamics can be described simply in a simple equation, (3) or (5). The training response, \(\Theta(\mathbf{x},\mathbf{x}_{i})\), is known as a neural tangent kernel and can be constant during training in an infinity limit of network size[12; 13]. Even if the size is finite, it can be represented by a product of a time-dependent term, \(A(t)\), and almost constant kernel, \(K(\mathbf{x},\mathbf{x}_{i})\), like equation (5)[16]. In the case of an average network, we can write down simplified dynamics, \[\frac{d\mu_{i}}{dt}\propto-\sum_{j}K(\mathbf{x}_{i},\mathbf{x}_{j})\frac{dL}{d\mu_{j}}, \tag{12}\] where we ignore the time-dependent term in training response. In other words, we consider short-term training dynamics and call it as _response kernel dynamics_. In many cases, the loss function, \(L\), evaluates the distance between the prediction, \(f_{i}\), and the answer, \(y_{i}\), e.g. mean squared error. In such a case, we can write it as follows, \[\frac{df_{i}}{dt}\propto\sum_{j}K_{ij}(y_{j}-f_{j}), \tag{13}\] where the response kernel, \(K_{ij}\), can be seen constant during training as assumption. ## III Results ### 1-point training We start from the simplest one, where our dataset has only a pair, \((\mathbf{x}_{0},y_{0})\). We train a variance network with that. We call it _1-point training_, here. When the dataset consists of N-pairs, we call it _N-point training_. Before showing results, we should confirm the form of the loss function, equation 8. Since the loss is the log-likelihood of the Gaussian, it can be rewritten with the summation, easily, \[L\propto\Sigma_{i}(\sigma(\mathbf{x}_{i})+\frac{1}{2}(\frac{y_{i}-\mu(\mathbf{x}_{i})} {\sigma(\mathbf{x}_{i})})^{2}). \tag{14}\] The first term, \(\Sigma_{i}\sigma(\mathbf{x}_{i})\), can be minimized by the minimal of the standard deviation, \(\sigma(\mathbf{x}_{i})=0\). However, the second term includes that in its denominator and can be divergent by the zero value. If the first term is minimized faster, the second term can change its value abruptly. In other words, we can see a numerical instability, there. On the contrary, if the scale of the standard deviation can be adjusted moderately, we do not see numerical instability. We can focus on the trajectory of the point, \((y_{i}-\mu(\mathbf{x}_{i}),\sigma(\mathbf{x}_{i}))\), to see the determination of the prediction scale. We show the results of 1-point training in FIG. 1. In the figure, we show trajectories of training dynamics on the vector field along the loss gradient. All of them start from around the center, \(|\mu_{0}-y_{0}|\sim 0.5\) and \(\sigma_{0}\sim 0.5\), and converged into the optimal point, \(\mu_{0}\sim y_{0}\) and \(\sigma_{0}\sim 0\). As we can confirm, they move almost along the vector field. However, we cannot get to the optimal one, \((\mu_{0},\sigma_{0})=(y_{0},0)\), because that is numerically unstable. One of our network outputs, \(\sigma_{0}\), is in a denominator of the loss function, equation (8). In other words, our formulation of a variance network cannot have the optimal point as a solution for 1-point training. It is a reasonable result because we do not have a meaningful definition of standard deviation only with a data point. ### data density transition Next, we consider N-point training with a variance network. We sample pairs of random encoding, \((\mathbf{x}_{i}),y_{i}\), with a size, \(N\), and train the network with it. Firstly we want to roughly grasp the feature of training dynamics. To do that we evaluate training results with variance. We can estimate it in two ways, mean squared error, \(V\equiv<(y_{i}-\mu_{i})^{2}>\), and another one, \(V^{*}\equiv<\sigma_{i}^{2}>\). In FIG. 2, we show those estimated variances with different sizes of datasets, \(N\). We can confirm the non-negligible difference between them in the cases with a little dataset, but it is negligible in the cases with a larger dataset. In FIG. 2, we show the difference after training of a fixed epoch, \(e_{mx}=2000\), but it can depend on the duration of the training epoch itself. In FIG. 3, we show the dependence on the training epoch. In the figure, we plot the difference, \(|V-V^{*}|\), against the size of a dataset, \(N\), and the duration of the training epoch, \(e_{mx}\). As we can see, the difference tends to grow when the duration, \(e_{mx}\), is large. On the other hand, it tends to be reduced when the size, \(N\), is large. In other words, the difference can be negligible when the data density is large but it may grow after enough training. As we already confirmed, 1-point training is numerically unstable. In addition, N-point training is also un stable if the training dataset is sparse enough, FIG. 2. If we assume the difference, \(|V-V^{*}|\), stems from numerical instability, the results are reasonable. However, why the instability grows after longer training? We can expect that network outputs, \(\mu_{i}\) and \(\sigma_{i}\), depend not only on the local data point, \((\mathbf{x}_{i},y_{i})\), but also on other ones nearby it. We evaluate the output, \(\mu_{i}\), with a weighted average, \(\sum_{j}\exp(-\alpha|\mathbf{x}_{i}-\mathbf{x}_{j}|)y_{j}/N\). The parameter, \(\alpha\), means a spatial scale of the average. In FIG. 4, we show training dynamics with a dataset, \(N=20\). On the left, we show the dynamics of two variances. On the right, we show the growth of the scale, \(\alpha\). The scale is optimized so that the weighted average and the output, \(\mu_{i}\), should match with each other. The variances show no difference at first. But we see a significant difference between them in the end. At the same time, the scale, \(\alpha\), grows along the training. This suggests a spatial scale of prediction is reduced after enough training. ### kernel-balanced equation We consider the simplified dynamics, (12) or (13), to understand the growth of scale, \(\alpha\). Firstly, we study the training dynamics of an average network with its simplified one, \[\frac{d\mu_{i}}{dt}\propto-\sum_{j}K_{ij}(y_{j}-\mu_{j}). \tag{15}\] Figure 1: Training dynamics with a single point data. A network is trained with a data point, \((\mathbf{x}_{0},y_{0})\), and the error, \(|\mu_{0}-y_{0}|\), and standard deviation, \(\sigma_{0}\), are shown on the map of loss function. In the figure, we show 4 results with different colors and initial conditions, but all trajectories finally end up with numerical instability. The vector field shows the gradient of the loss function. The color of the arrows shows the steepness of the gradient in the log scale. Here we used the setting, learning rate \(\eta=0.1\), the size of input \(b=8\), the number of layers 3. We used SGD and ELU as an optimizer and activation, respectively. Figure 3: Numerical instability against sample size and training epochs, \(e_{mx}\). We show the difference between two predicted variances. The horizontal axis means the sample size of the training dataset. The vertical axis means training epochs. The difference is shown with color. The matrix, \(K_{ij}\), consists of kernel distance terms, \(K(|\mathbf{x}_{i}-\mathbf{x}_{j}|)\), and we assume it can be expressed in an exponential form as already introduced, \[K_{ij}\sim\exp(-\beta|\mathbf{x}_{i}-\mathbf{x}_{j}|). \tag{16}\] We show eigenvectors and values with an ideally simple case, in FIG. 5. We constructed a matrix, \(K_{ij}=\exp(-|x_{i}-x_{j}|)\), with sorted 100 random values, \(0\leq x_{i}\leq 1\). In other words, we show the features of a response kernel with randomly distanced data points. As we can see, the eigenvalues decrease in a power-law manner. On the other hand, eigenvectors show Gabor wavelet-like forms[26; 27]. Major modes show broader waves than minor ones. This means that training dynamics reduces large scaled spatial error at first. Local error is reduced after enough training. These dynamics can be interpreted in our case as follows, \[\frac{d\mu_{i}}{dt} \propto \sum_{j}K_{ij}(y_{j}-\mu_{j}) \tag{17}\] \[\sim \sum_{j}K_{ij}(y_{j}-\mu_{i})\] (18) \[\sim \sum_{k}\lambda_{k}\sum_{j}\tilde{K}_{ijk}(y_{j}-\mu_{i}), \tag{19}\] where the values, \(\tilde{K}_{ijk}\sim\exp(-\beta_{k}|x_{i}-x_{j}|)\), show differently scaled kernel terms. If we can assume a relation, \(\mu_{j}\sim\mu_{i}\pm\delta\), around the point, \(\mathbf{x}_{i}\), we get the equation, 18. This means the local expectation, \(\mu_{i}\), should match the weighted average, \(K_{ij}y_{j}\). Finally, we rewrite it in a manner of multi-scale expansion, eq. 19, using the differently scaled kernels, \(\tilde{K}_{ijk}\), and the weight of each mode, \(\lambda_{k}\). Needless to say, we assume the weight should be larger for the global averaging term, which has a small parameter, \(\beta_{k}\). Thus, the weighted balance of kernel response, \[\mu_{i}=\frac{\sum_{j}\tilde{K}_{ijk}y_{j}}{\sum_{j}\tilde{K}_{ijk}}, \tag{20}\] is realized from more global modes to local ones through training. In the same way, we can write down the kernel-balanced equation for a variance network in the following, \[\frac{d\mu_{i}}{dt} \propto \sum_{j}\frac{K_{ij}}{2\sigma_{j}^{2}}(y_{j}-\mu_{j}) \tag{21}\] \[\sim \sum_{k}\lambda_{k}\sum_{j}\tilde{K}_{ijk}\frac{y_{j}-\mu_{i}}{2 \sigma_{j}^{2}}\] (22) \[\frac{d\sigma_{i}}{dt} \propto \sum_{j}\frac{K_{ij}}{\sigma_{j}^{3}}((y_{j}-\mu_{j})^{2}-\sigma _{j}^{2})\] (23) \[\sim \sum_{k}\lambda_{k}\sum_{j}\tilde{K}_{ijk}\frac{(y_{j}-\mu_{i})^ {2}-\sigma_{i}^{2}}{\sigma_{j}^{3}}. \tag{24}\] We can notice the denominator, \(\sigma_{j}\), in the equation (22), as the difference from the previous one. In addition, we have one more equation on the other dynamics, in eq. (23) and (24). These equations suggest we can have Figure 4: Training dynamics with a training dataset, \(N=20\). (a) Two predicted errors, \(<(y_{i}-\mu_{i})^{2}>\) and \(<\sigma_{i}^{2}>\), are shown. (b) We can approximate the predicted value, \(\mu_{i}\), as a weighted average, \(\sum_{j}\exp(-\alpha|\mathbf{x}_{i}-\mathbf{x}_{j}|)y_{j}/N\). Here we show the evolution of the prediction scale, \(\alpha\). Figure 5: Eigenvalues and vectors of a response kernel, \(K_{ij}\). We calculated eigenvalues and vectors with a randomly distanced one within a range, \((0,1)\). We generated 100 random values, \(x_{i}\), and made a response kernel, \(\exp(-|x_{j}-x_{i}|)\), with those ones. numerical instability again, because the point, \(\sigma_{i}=0\), is a local optimal as a result of training. To confirm the numerically unstable dynamics, we show the training dynamics of N-point training, \(N=20\), in FIG. 6. All the trajectories, \((|y_{i}-\mu_{i}|,\sigma_{i})\), are plotted on the loss gradient, in the same manner as FIG. 1. At a first glance, we notice jumps in them. Needless to say, these suggest numerical instability. We also show the training dynamics against the training epoch, in FIG. 7. We show some convergent trajectories, in (a), and unstable ones, in (b). All of the trajectories, \(\sigma_{i}\) and \(\mu_{i}\), are plotted, in (c) and (d). We can confirm that jumps happen at the timing when the convergent trajectories approach the optimal. ## IV Discussion Machine learning is now one of the most powerful ways to construct a model for a given dataset. Among the techniques in the field, deep neural networks play central roles not only in business applications but also in scientific studies. If we train a model with a dataset, \(\{(\mathbf{x}_{i},y_{i})\}\), the trained model, \(f\), can predict outputs for any input, \(y=f(\mathbf{x})\), successfully even if the point, \(\mathbf{x}\), is not included in the dataset. This feature is known as generalization. We know generalization should be a reflection of proximity in the input space. In other words, two outputs, \(f(\mathbf{x}_{a})\) and \(f(\mathbf{x}_{b})\), should be similar if those inputs, \(\mathbf{x}_{a}\) and \(\mathbf{x}_{b}\), are similar with each other. However, the distribution of the dataset is often very complicated and the spatial scale of similarity can be complicated as well. In this paper, the structure of the prediction scale is focused on. We consider neural tangent kernel or training response because they can describe the spatial effect of a training step and be applied to training dynamics. We elucidate the determination mechanism of the prediction scale with simple convolution networks and simplified models for the dynamics. As an estimation problem for the dataset distribution, we adopt a loss function for the fitting to Gaussian one. We minimize the loss and get an optimal fitting model by training with a dataset. Our model, a variance network, outputs an expectation and standard deviation, \(\mu(\mathbf{x})\) and \(\sigma(\mathbf{x})\). As the dataset, we simply set a randomly generated encoding from a 1D bit-string, \(\mathbf{x}_{i}\), to a binary output, \(y_{i}\). This problem set is a very simple one so that we can understand the dynamics of the prediction scale. As shown in FIG. 1, the dynamics is numerically unstable in the simplest case, 1-point training, because we cannot determine a meaningful standard deviation. In a similar manner, N-point training suffers from numerical instability, as shown in FIG. 2 and FIG. 3. The numerical instability stems from the reduction of the prediction scale, shown in FIG. 4. The scale gradually reduces along training and finally two types of variances, Figure 6: Training dynamics with multiple training points, \(N=20\). In the figure, we show all trajectories with different colors. The vector field shows the gradient of a loss function. The color of the arrows shows the steepness of the gradient in the log scale. Figure 7: A demonstration of numerical instability. We show training dynamics with sample size, \(N=20\). (a) Trajectories of \(\sigma_{i}\) for the group, converged into the state, \(y_{i}^{*}-\mu_{i}=0,\sigma_{i}=0\). In the box, we show the pairs of input, \(x_{i}\), and output, \(y_{i}\). (b) Trajectories of \(\sigma_{i}\), showing first instability. (c) All trajectories of \(\sigma_{i}\). (d) All trajectories of \(\mu_{i}\). and \(<\sigma_{i}^{2}>\), do not show consistency with each other. To understand the dynamics of the prediction scale, response kernel dynamics for an average network are studied. The kernel matrix has Gabor wavelet-like eigenvectors with eigenvalues decaying along a power law. This suggests the network learns spatially larger scale patterns at first and local patterns later. These dynamics can be shown in a kernel-balanced equation, in eq. (20). An average network outputs a weighted mean value averaged over the answers, \(y_{i}\). The weight determines the prediction scale and it decreases along the training. In the case of variance network, the solution is not so straightforward, eq. (23) and (24), but the prediction scale decreases along training as well. In addition, the predictions suffer from numerical instability again. Once any prediction, \((\mu_{i},\sigma_{i})\), approaches to the optimal point, \(\sigma_{i}=0\), it destabilizes training dynamics of other predictions, \((\mu_{j},\sigma_{j})\). The kernel-balanced equation suggests we can understand convolutional networks as a function that outputs the local average. The scale of averaging is not fixed and decreases along the training. In other words, the network can output the same values as the dataset after enough training, if the response kernel does not collapse during it. It is known that the kernel is constant in an ideal condition. Even if the network is not ideal and has a finite size, it can be seen as almost constant in some cases. We still need more studies on kernel stability but we believe the equation should be effective for a more variety of cases. The problem we consider here is known as uncertainty estimation in the field of machine learning. In reality, we suffer from uncertainty for many reasons in practical usage. If we should assume some noises in the observation, we cannot regard the dataset as the ground truth anymore. Even if we can exclude such a noise in a way, the truths may not be deterministic. In addition, we have some more origins of it in the model and its training. The model can be redundant and have many solutions for the given dataset. We often use non-deterministic training algorithms and this can result in many solutions. It is known that uncertainty can be divided into two classes, epistemic and aleatoric one[19]. What we consider here is the latter one. In such a case, we can model the dataset as a distribution at most. In this context, what we show here is the insufficiency of Gaussian modeling. Our formulation suggests Laplacian modeling is insufficient as well. On the contrary, a Bayesian approach, in which the parameter has its distribution, can be a solution. In fact, the numerical instability occurs at the optimal point, \(\sigma_{i}=0\), and can be overcome by replacement of it, \(\sigma_{i}\), with another probabilistic one, \(P(\sigma_{i})\). Actually, it is reported that t-distribution is an effective way of that[28]. Our formulation explains the reason for the effectiveness, therefore. In our days, we often train very large models with large datasets. In such a case, the dataset usually distributes in a non-uniform manner. As pointed out, we often see model variability in such cases[33]. As we have shown with the kernel-balanced equation, the prediction can be different between training phases, especially for minor modes in the dataset. Since the training for minor modes is postponed to a much later stage depending on the initial condition, we can see model variability. Our instability stems from the shape of the loss function, equation 8. As shown in the kernel-balanced equation, this sort of instability cannot be prevented in a straightforward way. However, when the model and the dataset are very large, training requires much longer epochs and we often need much larger computing time for a training epoch. In such a case, we may not see instability because of short training time, in reality. As a possibility, when we have drop-out layers in the network, we can have a non-zero variance in the prediction for the same input. If the standard deviation for the input, \(\mathbf{x}\), is not zero, we do not see the instability. In this meaning, the instability is not necessarily universal for real applications. However, the analysis with the kernel-balanced equation, especially for the average network, can be applicable to a wide range of applications, because of its simplicity. As the kernel-equation equations shows, training always starts from major modes and proceeds to minor modes later, we can carefully design the dataset density for more efficient training, though we still need more studies. As such an application, we can apply it for variance estimation without a variance head for prediction of standard deviation, \(\sigma(\mathbf{x})\). Since the output after \(t\)-epochs, \(f_{t}(\mathbf{x})\), approximates a local average dependent on the prediction scale, the difference between two outputs, \((f_{t}(\mathbf{x})-f_{T}(\mathbf{x}))^{2}\), tells us the local variance in the limit, \(T\rightarrow\infty\). We do not know the quantitative precision of this formula yet, but it can be a convenient way for variance estimation. We are getting more and more data through ubiquitous sensors and it is even available on the web. The field of data science sheds light on the complexity of the real world with them. Actually, astonishingly powerful applications emerge one after another with the aid of such huge datasets and machine powers[1; 2; 3; 4]. Machine learning algorithms are necessary not only in such practical applications but also in scientific studies in the real world's complexity[29; 30; 31]. However, we note that those algorithms still require further understanding. In reality, new technologies often focus on an innovative mathematical formulation and its implementation, but a dynamical understanding must be necessary as well[16; 32]. We can design an efficient and even safe one with such an understanding. We believe the science of complexity can be an effective approach for the field. ## Acknowledgements This work is motivated through discussions in Sense-Time Japan, HONDA, and an internship student WL.
2309.08732
The path to detecting extraterrestrial life with astrophotonics
Astrophysical research into exoplanets has delivered thousands of confirmed planets orbiting distant stars. These planets span a wide ranges of size and composition, with diversity also being the hallmark of system configurations, the great majority of which do not resemble our own solar system. Unfortunately, only a handful of the known planets have been characterized spectroscopically thus far, leaving a gaping void in our understanding of planetary formation processes and planetary types. To make progress, astronomers studying exoplanets will need new and innovative technical solutions. Astrophotonics -- an emerging field focused on the application of photonic technologies to observational astronomy -- provides one promising avenue forward. In this paper we discuss various astrophotonic technologies that could aid in the detection and subsequent characterization of planets and in particular themes leading towards the detection of extraterrestrial life.
Nemanja Jovanovic, Yinzi Xin, Michael P. Fitzgerald, Olivier Guyon, Peter Tuthill, Barnaby Norris, Pradip Gatkine, Greg Sercel, Svarun Soda, Yoo Jung Kim, Jonathan Lin, Sergio Leon-Saval, Rodrigo Amezcua-Correa, Stephanos Yerolatsitis, Julien Lozi, Sebastien Vievard, Chris Betters, Steph Sallum, Daniel Levinstein, Dimitri Mawet, Jeffrey Jewell, J. Kent Wallace, Nick Cvetojevic
2023-09-15T19:46:02Z
http://arxiv.org/abs/2309.08732v1
# The path to detecting extraterrestrial life with astrophotonics ###### Abstract Astrophysical research into exoplanets has delivered thousands of confirmed planets orbiting distant stars. These planets span a wide ranges of size and composition, with diversity also being the hallmark of system configurations, the great majority of which do not resemble our own solar system. Unfortunately, only a handful of the known planets have been characterized spectroscopically thus far, leaving a gaping void in our understanding of planetary formation processes and planetary types. To make progress, astronomers studying exoplanets will need new and innovative technical solutions. Astrophotonics - an emerging field focused on the application of photonic technologies to observational astronomy - provides one promising avenue forward. In this paper we discuss various astrophotonic technologies that could aid in the detection and subsequent characterization of planets and in particular themes leading towards the detection of extraterrestrial life. **Keywords:** Exoplanets, astrophotonics, integrated photonics, photonic lanterns, beam combiners, spectrographs ## 1 Introduction There have been over 5400 exoplanets confirmed to date. Figure 1 shows the mass vs orbital period distribution of the known planets. Data is color coded to highlight the detection technique used. For the vast bulk of this population of planets, very little is known. Spectroscopy of the exoplanet atmosphere is critical to revealing a wealth of information (composition and abundance, spin rate, weather patterns, etc). The field of exoplanetary sciences is now focusing efforts on characterization of these systems by for example providing direct exoplanet spectroscopy [1, 2]. By understanding more about the known exoplanets, we can refine planetary formation and evolution models and better understand where life is likely to exist. The blue region in Fig. 1 highlights the terrestrial planet regime (planets with Earth sizes/masses). Only a handful of terrestrial mass planets have thus far been detected. The Earth is overlaid highlighting that we don't currently know of an Earth/Sun analog, which would constitute a primary candidate in the search for life. Of the planets that have been detected in this regime, very few are in the habitable zone of their host planets. To detect extraterrestrial life, we must first detect more terrestrial like planets in the habitable zones of their host stars. In particular, finding planets around solar type stars (G stars) is critical for studying systems similar to our own to understand our place in the Universe. ## 2 Detecting Terrestrial Planets The transit and radial velocity techniques have detected the bulk of the known exoplanets thus far. Although both techniques will continue to add more terrestrial candidates around M type stars, the radial velocity technique is the most likely technique to detect Earth-like planets around G type stars, needed to form the target list for the Habitable Worlds Observatory (HWO) mission. This will require measuring a velocity change in the speed of light of \(<\)50 cm/s and more likely 10-20 cm/s (1 part in \(3\times 10^{9}\)) [3]. And if the Earth-like planet is in the Figure 1: 5400+ confirmed exoplanets as of the 11\({}^{th}\) of August 2023 on a Jupiter mass vs orbital period plot. The Earth is shown on the plot to indicate where an Earth-like planet around a sun-like star would appear. The blue region highlights the terrestrial regime. Very few Earth mass/size planets have been detected thus far. Only a handful are in the habitable zone of their host star. Data taken from exoplanetsarchive.ipac.caltech.edu habitable zone, this velocity change would occur over half a year, so the rate of change is extremely small. For this reason this endeavour is known as extreme precision radial velocity (EPRV). ### Laser Frequency Combs To be able to measure such a small velocity change, it is critical to be able to calibrate out any instrument drifts to better than 1 cm/s. This requires an extremely stable spectral calibration source which is known as the laser frequency comb (LFC). An LFC emits a series of ultra-stable, uniformly spaced lines across a broad spectrum that can be used to calibrate an instrument over many years and even decades. There are numerous ways to generate the comb. A mode-locked laser was originally used for this application. It generates an extremely densely packed comb (comb line spacing's are in the MHz) [4]. At such small line spacing's an R 100,000 spectrograph, typically applied to EPRV can not separate the lines out. Fabry-Perot Etalon filter cavities are therefore used downstream to remove the bulk of the lines and establish line spacing's of 10-30 GHz, which are better suited for astronomical spectrometers [5, 6]. Electro-optic combs rely on electo-optic modulators to generate side bands on a continuous-wave laser line [7]. This allows the comb to be formed with the adequate line spacing from the outset. The line spacing can be stabilized by locking it to a clock reference for example. Finally, LFCs have also been realized on a chip [8]. These systems are still not quite turn key, but once they are matured may be applicable to many applications that have tight space constraints, like those in space. The preliminary spectrum of the LFC is then sent through a non-linear medium, such as a highly nonlinear fiber or crystal to broaden it. In this way its possible to realize combs that span more than 1 octave. LFCs can be locked to clock references and also to one another allowing for the frequency uncertainty to be easily reduced to well below those needed for EPRV. These locking loops also ensure that the frequencies of the lines can be maintained for decades [9], which will be critical to detecting Earth like planets around G stars. ### Spectral flatteners-on-a-chip The amplitude across an LFC can vary by orders of magnitude. The precision of the wavelength solution scales with the square root of the number of lines used to derive it, and the signal-to-noise of a line scales with the square root of the number of photons in a line. To optimize the wavelength solution the lines would have uniform amplitude with high signal-to-noise ratio in a given exposure time. To achieve this with an LFC, the spectrum must first be flattened. Traditionally a flattener consists of a bulk optic set up that collimates the light out of a single mode fiber (SMF), disperses it with a diffraction grating and bounces it off a spatial light modulator (SLM) which can control the reflected amplitude of the beam before it is spectrally recombined and injected into another SMF to be sent to the science instrument [10]. A compact spectrometer is also needed to measure the actual spectrum to drive the SLM to flatten the output. This system is bulky, costly and unstable. Recently on-chip flatteners have been investigated [11]. These consist of the same basic elements on a chip. An arrayed-waveguide grating (AWG) is used to disperse the light into a series of discrete output channels before light is sent through Mach-Zehnder interferometers (MZI) which can be used to adjust the amplitude of the channel by thermally tuning the phase of one of the arms of the MZI before a thermo-optic phase modulator is used to re-phase all the spectral channels before a final recombination of the entire spectrum in another AWG. The prototype device developed operated on a single polarization, over a 400 nm range in the astronomical H band and offered \(\sim 38\) dB of amplitude tuning. Spectral flatteners on a chip are useful for ground based LFCs but will be critical to any future mission that requires spectral shaping in flight. LFCs combined with spectral flatteners could play a key role in enabling state-of-the-art spectrometer calibration and enable terrestrial planet detection over the next decade. ## 3 Characterizing Terrestrial Planets The most effective way to characterize an exoplanet is to collect a spectrum. A spectrum can indicate the presence of molecules needed to sustain life, like water and oxygen, as well as ozone which might protect the planet from UV and greenhouse gases like carbon monoxide, carbon dioxide and/or methane. This is the most effective way to determine if a planet could harbor life. To take a spectrum of an Earth-like planet in the habitable zone we must first reduce the glare from the host star, which could be \(10^{10}\times\) brighter. Given the small angular separation to the orbital distance of the habitable zone planet, this will require a large telescope (\(>6\) m sized aperture) and advanced high contrast imaging techniques. These consist of wavefront sensing and control to eliminate aberrations and improve contrast, followed by starlight suppression from for example a coronagraph. Although there has been tremendous progress in coronagraph performance over the past few decades, these systems have still not achieved the requirement of \(10^{10}\) contrast across a 10-20% bandwidth needed to take a spectrum. Photonics can be used to support the wavefront sensing and control and starlight suppression aspects of the high contrast imaging system as well as for the science instrument as outlined below. ### Photonic Lantern Wavefront Sensing & Control Photonic technologies offer the ability to coherently mix light, necessary for generating signals for wavefront sensing. Although there are several possible approaches, photonic lanterns provide a convenient avenue. Photonic lanterns are waveguide devices that convert a multimoded input into several single mode outputs via an adiabatic transition. As long as the number modes at the single mode end is greater than the number of modes in the multimode end, the transition will be efficient [12]. When a lantern is placed in a focal plane, the input beam is coupled amongst the modes the lantern supports and traverses the transition to the output array of SMFs. Owing to their few moded nature (3, 6, 19, 61 mode counts are typical), lanterns have a greater collecting efficiency as compared to SMFs. At the output, the flux distribution across the ports uniquely encodes the information about the complex field of the incident beam. Therefore, a lantern could be placed in a focal plane to collect light for a downstream instrument and provide the ability to do wavefront sensing. Focal plane wavefront sensors of this nature can eliminate non-common path and chromatic errors. It is possible to use a neural net to map the input electric field to output intensity distributions to maximize the dynamic range of the lantern [13] and use this mapping to reconstruct the input complex field. Operating the lantern in the linear regime has also been modelled [14] and recently demonstrated on the SCExAO testbed [15]. This test was done around 1550 nm with a 19 port photonic lantern and demonstrated the successful closed loop control of 5+ of the low order Zernike modes off-sky. The outcome of this work is discussed in another paper in the same conference [16]. An interesting prospect is to use a hybrid lantern - a lantern that can transport light injected into the LP01 mode to one isolated output, while the other ports consist of a combination of light from each of the modes (i.e. are coherently mixed) [17]. This concept allows for light to be routed directly to a science instrument while also providing on-board wavefront sensing which can allow for improved coupling to the lantern. It was recently proposed and simulated but has not been demonstrated. ### Photonic Nulling Starlight cancellation is extremely important to reduce the photon noise, which would otherwise dominate a spectrum collected from the planet. There are numerous technologies/approaches that could be used to suppress starlight using photonics. GLINT is a photonic instrument in operation on the SCExAO testbed at Subaru telescope [18]. It relies on segmenting the pupil and injecting each of the beamlets into a pupil remapping chip realized with ultrafast laser inscription. The beams are routed in 3D inside the photonic component and tapped via splitters for the purpose of photometric monitoring. Flux in the main channel is combined pairwise in photonic couplers with beams from other parts of the pupil. Carefully arranging the relative phases between combined beams generates null signals. For our case this can be done across many baselines simultaneously. To improve Fourier coverage, nulls on numerous baselines need to be obtained simultaneously. Scaling the number of input channels is possible with this approach[19], but getting achromatic nulls across multiple baselines at once remains challenging. Solutions are being proposed to design more achromatic circuits[20]. Photonic lanterns can also be used for nulling. Specifically, mode-selective lanterns - lanterns that map LP modes to unique single-mode outputs naturally provide a nulling capability. For a 6 port lantern, LP11a, LP11b, LP21a, and LP21b all have phase inversions on axis prohibiting light in a pure even-symmetry mode to couple into them[21, 22]. Therefore, at the output SMFs corresponding to these modes, the starlight is suppressed to some extent while planet light is coupled. This has recently been demonstrated in the laboratory with a 6 port mode selective lantern operating around 1550 nm[23]. Preliminary results show monochromatic and 10% bandwidth polychromatic null depths ranging from \(10^{-3}\)-\(10^{-2}\) across the 4 nulled outputs, which were limited by the finite cross-talk between the modes in the device. Next steps include developing lower cross-talk devices. Nonetheless, this passive component provides a simple avenue to improve contrast between 0.5-2 \(\lambda/D\). A possible extension of this concept would be to realize a hybrid lantern that allows the two LP11 and two LP21 modes to be separated into unique SMF outputs, while keeping all other modes coherently mixed together. If this were possible, it would allow for only the channels used in nulling to be separated and used for this application, while all of the rest of the light from the planet would be used for focal plane wavefront control. This could improve the contrast and stabilize it as well. Similar to the GLINT concept, its also possible to either segment the focal plane into an integral field unit that subtends the inner \(2\lambda/D\) or use a standard lantern to collect the light. At the output, the beams from the various parts of the focal plane, or the ports of the lantern can be interfered pair wise using directional or multi-mode interference couplers. Indeed, several stages of beam combination with fine path length adjustments could be used to reduce any light leak from an upstream coupler due to for example imperfections in the coupler or difficulties in phasing, to get to deeper nulls[24]. This circuit could also be combined with the mode-selective nuller above as a second stage of nulling. For a mission like the HWO, the contrast is extreme and photon rates in the final stage will be low, so it may be difficult to generate sufficient signals to phase the circuit appropriately. This is something that would need to be investigated. Kernel nulling self-calibration can also be applied to analyze the output of such circuits and will also provide a boost in the contrast[25]. ### Photonic Spectroscopy As outlined above, collecting a spectrum of the planet itself is critical to confirming that the planet can host life. Photonic spectrometers in the form of AWGs form an ideal solution as they are 1) compact and can be flown on the HWO, 2) consist of a single-monolithic component with no moving parts that can be readily thermally stabilized, and 3) offer a discretized output that can be routed to a detector located elsewhere. At such extreme contrasts, even on 6-m apertures, the photon rates from the exoplanets are very low. Therefore, lower resolving powers ranging from R\(\sim\) 50 and up to 1000 are being considered for the NIR characterization channel[26]. Typical commercial AWGs operate at R\(\sim\)7000[27], so this represents a reduction in resolution. However, larger bandwidths than those typically used in telecommunications will be needed. A minimum bandwidth of 10% (150 nm at 1550 nm) will be needed and more likely even broader bandwidths. When operating at low resolutions its possible to indeed broaden the free spectral range (FSR) as well as the bandwidth of the device. We have recently developed several low resolution devices between Caltech/JPL in both SiN and Silica photonics. The SiN device was optimized for a FSR of 500 nm while the Silica device was optimized for a FSR of 220 nm. Both devices were designed for a channel spacing of 8 nm at 1600 nm, corresponding to an R\(\sim\)200. The devices have recently been characterized and the results will be presented in detail in an upcoming publication. But its worth noting that the Silica device had an end-to-end efficiency \(>70\%\) including fiber coupling to and from the chip. This demonstrates the viability of optimizing AWGs for exo-Earth characterization on the future HWO mission. ## 4 Characterization Instrument Architecture There are numerous possible architectures that a photonic-based instrument could take. However, given the extreme contrast needed to detect and characterize and Earth-like planet around a sun-like star, its unlikely photonics will do it alone. A more likely scenario is to use a coronagraph at moderately high contrast (\(10^{-6}\)-\(10^{-7}\) at 2 \(\lambda/D\)) and then use photonic components down stream to build on this. Several possible architectures are shown in Fig. 2. One architecture could consist of a GLINT like nuller injecting light in the same pupil plane as the Lyot stop of the coronagraph (see bottom panel of Fig. 2). Another possibility is to use the mode-selective lantern nuller in a downstream focal plane with a back end beam combiner chip optimized for Kernel nulling (see top panel in Fig. 2). This approach has the added advantage that that mode-selective lantern is a passive component offering some starlight suppression without the need for sensing and active control. The Kernel nuller however will need some active control, which again will be challenging at the low photon rates expected after such extreme starlight suppression. Another approach would be to segment the pupil into two apertures, inject each into individual mode selective photonic lanterns and then combine the outputs in a chip to realize a double Bracewell architecture [22]. In addition to the coronagraph and photonics, the use of masks (vortex, phase knife, etc) and/or phase plates (applied to the deformable mirrors or a separate plate) need to also be considered in the architecture. Photonic spectrographs will be critical to characterize the planet. However, if the beam combiner chips can not be made to provide deep nulls over broad bandwidths, then AWGs might need to be used immediately after light collection. In this way the spectral channels are split early on and then downstream beam combination is conducted on a spectral channel by spectral channel basis with narrower overall band passes. This is similar to the layout of the prototype spectrum shaper on a chip [11]. This increases the complexity of the circuit as well as the total number of degrees of freedom, which should ultimately allow for a broader null to be achieved. The challenge's that need to be addressed include: * Defining if and what sort of coronagraph is ideally suited to work in tandem with a photonic nuller. * Understanding the chromatic behavior of the various photonic nulling options and if pre-dispersion can be used as a viable pathway to broaden the null and Figure 2: Two potential photonic-based planet detection and characterization instrument architectures for the HWO. Light is captured by the telescope, and then bounces off the 2 deformable mirrors used for wavefront control before passing through a coronagraph (shown as a focal plane mask) before being filtered at the downstream Lyot stop. (Top) The light is injected into a photonic lantern and routed to a downstream beam combiner before being dispersed in AWGs and sent to a detector. (Bottom) Light is captured at the pupil plane immediately following the Lyot stop by a 3D pupil remapper and beam combiner chip. The outputs are dispersed and sent to a detector. * Investigating how to control and calibrate photonic components used with low photon rates far downstream in the starlight suppression system. ## 5 Summary We have outlined how the functionalities of photonic technologies could be exploited to detect and characterize terrestrial planets in the habitable zones of sun-like stars. From the ground, giant segmented mirror telescopes could be used to detect similar planets around cooler M stars using similar technical solutions. Technologies like LFCs are already at technology readiness level (TRL) 9 and are being widely used at ground based observatories. Nulling and wavefront control technologies range from TRL 2-5 depending on the approach. There is a lot of work to be done to advance all these concepts to a sufficient readiness level to be able to properly evaluate their potential for application to the HWO. This work closely follows the 2023 astrophotonics roadmap [28]. For more details, please consult the roadmap. ## Acknowledgments This work has been supported by the National Science Foundation under Grant No. 2109231. Y.X acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. 1122374. This work was supported by the Wilf Family Discovery Fund in Space and Planetary Science, funded by the Wilf Family Foundation. This research was carried out in part at the California Institute of Technology and the Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration (NASA). Support for P Gatkine was provided by NASA through the David & Ellen Lee Prize Postdoctoral Fellowship and NASA Hubble Fellowship Grant HST-HF2-51478.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA Contract NAS5-26555.
2309.11263
Active Inference for Sum Rate Maximization in UAV-Assisted Cognitive NOMA Networks
Given the surge in wireless data traffic driven by the emerging Internet of Things (IoT), unmanned aerial vehicles (UAVs), cognitive radio (CR), and non-orthogonal multiple access (NOMA) have been recognized as promising techniques to overcome massive connectivity issues. As a result, there is an increasing need to intelligently improve the channel capacity of future wireless networks. Motivated by active inference from cognitive neuroscience, this paper investigates joint subchannel and power allocation for an uplink UAV-assisted cognitive NOMA network. Maximizing the sum rate is often a highly challenging optimization problem due to dynamic network conditions and power constraints. To address this challenge, we propose an active inference-based algorithm. We transform the sum rate maximization problem into abnormality minimization by utilizing a generalized state-space model to characterize the time-changing network environment. The problem is then solved using an Active Generalized Dynamic Bayesian Network (Active-GDBN). The proposed framework consists of an offline perception stage, in which a UAV employs a hierarchical GDBN structure to learn an optimal generative model of discrete subchannels and continuous power allocation. In the online active inference stage, the UAV dynamically selects discrete subchannels and continuous power to maximize the sum rate of secondary users. By leveraging the errors in each episode, the UAV can adapt its resource allocation policies and belief updating to improve its performance over time. Simulation results demonstrate the effectiveness of our proposed algorithm in terms of cumulative sum rate compared to benchmark schemes.
Felix Obite, Ali Krayani, Atm S. Alam, Lucio Marcenaro, Arumugam Nallanathan, Carlo Regazzoni
2023-09-20T12:42:50Z
http://arxiv.org/abs/2309.11263v1
# Active Inference for Sum Rate Maximization in UAV-Assisted Cognitive NOMA Networks ###### Abstract Given the surge in wireless data traffic driven by the emerging Internet of Things (IoT), unmanned aerial vehicles (UAVs), cognitive radio (CR), and non-orthogonal multiple access (NOMA) have been recognized as promising techniques to overcome massive connectivity issues. As a result, there is an increasing need to intelligently improve the channel capacity of future wireless networks. Motivated by active inference from cognitive neuroscience, this paper investigates joint subchannel and power allocation for an uplink UAV-assisted cognitive NOMA network. Maximizing the sum rate is often a highly challenging optimization problem due to dynamic network conditions and power constraints. To address this challenge, we propose an active inference-based algorithm. We transform the sum rate maximization problem into abnormality minimization by utilizing a generalized state-space model to characterize the time-changing network environment. The problem is then solved using an Active Generalized Dynamic Bayesian Network (Active-GDBN). The proposed framework consists of an offline perception stage, in which a UAV employs a hierarchical GDBN structure to learn an optimal generative model of discrete subchannels and continuous power allocation. In the online active inference stage, the UAV dynamically selects discrete subchannels and continuous power to maximize the sum rate of secondary users. By leveraging the errors in each episode, the UAV can adapt its resource allocation policies and belief updating to improve its performance over time. Simulation results demonstrate the effectiveness of our proposed algorithm in terms of cumulative sum rate compared to benchmark schemes. Active Inference, UAV, NOMA, Cognitive Radio. ## I Introduction The present and emerging wireless technologies, such as 6G, are expected to experience an increase in data intensity. There is a growing expectation to connect a larger number of self-autonomous devices and Internet of Things (IoT) devices, putting pressure on the existing wireless network [1]. To address these demands, future wireless systems need to incorporate innovative technologies like unmanned aerial vehicles (UAVs), cognitive radio (CR), and non-orthogonal multiple access (NOMA). The integration of these technologies requires intelligent resource allocation to improve system performance. UAVs have gained significant attention in wireless communications due to their advantageous line-of-sight (LoS) communication, cost-effectiveness, miniaturization, and flexibility [2]. CR serves as the main technology that enables secondary users (SUs) to utilize licensed spectrum when available without causing interference to primary users (PUs) [3]. Conversely, there has been a shift towards NOMA as an alternative to orthogonal multiple access (OMA) to overcome its limitations. NOMA has demonstrated superior spectrum efficiency, user fairness, and the ability to accommodate multiple users simultaneously in the same sub-channel by employing superposition coding (SC) at the transmitter and successive interference cancellation (SIC) at the receiver [4]. However, in order to fully harness the promised benefits of NOMA, a key challenge lies in jointly optimizing discrete subchannels and continuous power to maximize the sum rate in such a dynamic system. Additionally, it has been established that the problem of maximizing the sum rate in wireless networks is strongly NP-hard [5]. Hence, numerous suboptimal or heuristic approaches have been proposed by researchers. The authors in [6] explore heuristic and iterative search optimization methods for user pairing and resource allocation in the uplink NOMA scenario. In [7], the authors investigate the stochastic successive convex approximation method to maximize the sum rate of users under imperfect channel state conditions. In [8], an upper bound is derived for the optimal weighted sum rate, and the authors propose a near-optimal approach using Lagrangian duality and dynamic programming. For a comprehensive review of conventional optimization approaches, we refer the readers to [9]. It is important to note that these traditional optimization schemes lack adaptive online self-awareness and often involve complex mathematical formulations, making them impractical for real-time systems that require minimal latency. In recent years, machine learning techniques have demonstrated significant potential in addressing complex computational tasks and have been widely implemented in various wireless communication systems [10]. However, deep learning methods require well-labeled datasets for training in order to achieve accurate results. Obtaining such datasets can be challenging in complex wireless networks, and the resulting models may be difficult to interpret [11]. Likewise, in [9], deep reinforcement learning (Deep RL) is employed to maximize the sum rate, power allocation, and channel assignment in a multi-carrier NOMA scheme. Nevertheless, despite the recent successes of RL, several limitations hinder its full implementation in dynamic systems [12]. Firstly, RL algorithms often require numerous iterations to converge to an optimal solution due to the strong influence of negative rewards [13, 14]. An RL agent must take several bad actions in order to learn how to improve its policy. Additionally, RL agents are typically trained for specific predefined tasks, which limits their ability to generalize to new experiences. Generalizing to new experiences necessitates retraining or modifying the agent, or incorporating meta-learning capabilities [15]. An alternative approach widely studied and rooted in neurocognitive science, called active inference, provides a fundamental framework for characterizing adaptive behaviors in unknown and complex environments [13, 16]. In this framework, every agent (considered a self-organizing system) maintains a dynamic equilibrium with its external environment to minimize prediction errors [17]. Preliminary results suggest that active inference is more adaptable and resilient in a variety of settings that are challenging for RL models [18]. We also observe that the majority of active inference agents are trained to learn generative models with predefined sections of the state space [13, 19]. While this approach is suitable for discrete state spaces, it becomes impractical for complex dynamic systems [20]. In this paper, we explore active inference using a unique generalized dynamic Bayesian network (Active-GDBN) to learn the complex, time-changing network environment. The key contributions of this study are summarized as follows: * We have developed and implemented an active inference-based algorithm called Active-GDBN to address the sum rate maximization problem in a UAV-assisted cognitive NOMA network. In this algorithm, the UAV is equipped with a generative model that is learned offline, capturing the dynamic rules that generate preferred observations (i.e., optimal superimposed signals). This learned knowledge serves as a prior target when the UAV becomes active during the online deployment process. We describe how the UAV dynamically learns both discrete subchannels and a continuous power allocation policy online to minimize prediction errors or abnormalities. * We formulate the problem of maximizing the sum rate as a challenge of minimizing abnormalities, employing a generalized state-space formulation to capture the temporal dynamics of the radio environment. Unlike most existing papers, which discretize power allocation, our proposed framework optimizes continuous power. Discretizing power allocation introduces quantization errors and increases computational complexity [21]. Additionally, our algorithm is explainable because it estimates and represents the dynamic causal structure of the training environment at both discrete and continuous states. * The numerical findings using simulated data provide evidence of the efficiency of our proposed algorithm in achieving a higher cumulative sum rate compared to benchmark schemes. The remainder of this paper is organized as follows: Section II describes the system model and problem formulation. The proposed method for joint sub-channel and power allocation is described in Section III. The simulation results and discussion are presented in Section IV. Section V concludes the paper. ## II System Model and Problem Formulation As illustrated in Fig. 1, we examine a multi-channel uplink Cognitive-NOMA system that encompasses a primary network (PN) and a secondary network (SN), with a UAV positioned centrally and hovering above randomly moving secondary users (SUs). In practice, a single and hovering UAV could be used to provide communication services to emergency responders in the event of a disaster. This could be used to coordinate the response, communicate with victims, or provide medical assistance. The PN consists of a primary base station (PBS) that serves primary users (PUs) over the primary channels in a time-slotted manner. The SN consists of a UAV that assists the PBS and serves a set of SUs. Let \(\mathcal{N}\) denote the set of SUs and \(\mathcal{K}\) represent the number of sub-channels in the network, expressed as \(\mathcal{N}=\{1,2,\cdots,N\}\) and \(\mathcal{K}=\{1,2,\cdots,K\}\), respectively. We assume non-interference among the different sub-channels due to the orthogonality provided by frequency division. In the uplink, each SU \(n\) transmits its signal to the UAV on subchannel \(k\) with assigned transmit power \(p_{n}^{k}\) and channel gain \(g_{n}^{k}\). By using QPSK modulation, the system can maintain a certain level of performance and minimize the impact of interference compared to higher-order modulation schemes. This is particularly important in scenarios with multiple NOMA users, where the signals from different users may interfere with each other. Let \(\mathcal{U}_{k}\triangleq\left\{n\in\mathcal{N}:p_{n}^{k}>0\right\}\) denote the set of SUs that are multiplexed on sub-channel \(k\) and \(|\mathcal{U}_{k}|\) represents the cardinality of that set. In each transmission time slot, the channel of a specific SU remains constant but changes independently in each period or episode. The UAV, equipped with Active-GDBN, can continuously update its policy online based on new observations of the channel state information (CSI). This adaptability allows the UAV to handle variations in the wireless channel conditions and adjust its actions accordingly. For simplicity, we assume a line-of-sight (LOS) channel and adopted the free-space path loss (FSPL) model as defined by [22]. Thus, the distance \(d_{u,n}\) from the UAV \(u\) to ground SUs \(n\) at a given time instance \(t\) is expressed as: \(d_{u,n}=\sqrt{h^{2}+\|\mathbf{q}_{u}(t)-\mathbf{w}_{n}\|^{2}}\), where \(h\) is the UAV's Fig. 1: System model with uplink NOMA signaling. altitude, the horizontal coordinate of the UAV is represented by \(\mathbf{q}_{u}(t)\), and \(\mathbf{w}_{n}\)=\([x_{n}\ y_{n}]^{T}\) denotes the horizontal coordinate for the mobile ground \(n\)-th SU. Similarly, the link power gain from UAV to SUs is given by: \[h_{n}(t)=g_{n}^{k}(t)\Omega_{n}(t), \tag{1}\] where \(g_{n}^{k}(t)\) is the large-scale power gain, accounting for path losses and shadowing, and is calculated as follows: \[g_{n}^{k}(t)=\rho_{0}d_{n,u}^{-2}(t)=\frac{\rho_{0}}{h^{2}+\|\mathbf{q}_{u}(t)- \mathbf{w}_{n}\|^{2}}, \tag{2}\] where the link power gain at a reference distance is \(\rho_{0}\). In (1), \(\Omega_{n}(t)\) is the small-scale fading coefficient, which follows a Rician distribution with a non-central chi-square probability density function (PDF) [23]. To ensure a minimum distance \(\Delta_{y}\) between the superimposed SUs' signal, each SU is mapped to a unique QPSK constellation at the transmitter. This minimum distance is well-spaced to minimize interference and ensure successful SIC decoding at the receiver. By using the learned generative model, the UAV can make informed predictions about each user's signal and estimate their contributions to the observed optimal superimposed signal. The UAV will perform SIC by actively adapting its actions online to decode \(x_{1}\) first, which is the SU with the strongest channel gain, subtract it from the total received signal \(\mathrm{y}_{t,k}\), and treat the other signals (\(x_{2}\) to \(x_{M}\)) as interference. The UAV performs subsequent SIC and the next user with a stronger channel gain is decoded. The uplink SIC is performed in decreasing order of channel gain. The achievable data rate \(\mathrm{R}_{k,n}\) in the uplink is expressed as: \[\mathrm{R}_{k,n}\triangleq b_{k}\log_{2}\bigg{(}1+\frac{p_{n}^{k}g_{n}^{k}}{ \sum_{j=\sigma_{k}^{k-1}(n)+1}^{k}p_{\sigma_{k}(j)}^{k}g_{\sigma_{k}(j)}^{k}+ \eta_{n}^{k}}\bigg{)}. \tag{3}\] Our objective is to ensure a maximum sum-rate subject to power constraints while assuring the maximum number of allowable SUs per sub-channel. The maximization problem can be formulated mathematically as follows: \[\max_{p_{n}^{k}} \sum_{k=1}^{K}\sum_{n=1}^{N}\mathrm{R}_{k,n}\] (4a) s.t. \[\sum_{k=1}^{|\mathcal{U}_{k}|}p_{n}^{k}\leq p_{max},\quad n\in \mathcal{N},k\in\mathcal{K} \tag{4b}\] \[p_{n}^{k}\geq 0,\quad n\in\mathcal{N},k\in\mathcal{K}\] (4c) \[|\mathcal{U}_{k}|\leq M,\quad k\in\mathcal{K}\] (4d) \[p_{n}^{k}\leq p_{max}^{k,n},\quad n\in\mathcal{N},k\in\mathcal{K}. \tag{4e}\] Constraint (4b) defines the maximum allowed total power budget for each SU, which cannot surpass \(p_{max}\). (4c) specifies that the power allocation for each SU on each sub-channel is non-negative. (4d) restricts the maximum number of SUs multiplexed on a particular sub-channel to \(M\). (4e) sets power restrictions for each sub-channel. The optimization task in (4a) is nonconvex, and solving the global optimal using heuristic approaches is computationally infeasible. Therefore, we propose an active inference-based approach that efficiently learns the optimal subchannel and power allocation policy. ## III Proposed method for joint sub-channel and power allocation We describe active inference as a partially observable Markov decision process (POMDP) [24]. In a given time instance \(t\), the actual state of an environment \(\mathrm{\tilde{S}}_{t}\in\mathbb{R}^{d_{x}}\) changes according to a random transition process \(\mathrm{\tilde{S}}_{t}\sim\mathrm{Pr}(\mathrm{\tilde{S}}_{t}|\mathrm{\tilde{S }}_{t-1},\mathbf{\mathcal{A}})\), where \(\mathbf{\mathcal{A}}\in\mathbb{R}^{d_{x}}\) represents the actions of an agent (UAV). The actual environmental state is usually hidden from the agent, but the agent can only infer them through observations \(\mathrm{\tilde{Z}}_{t}\in\mathbb{R}^{d_{x}}\), given by \(\mathrm{\tilde{Z}}_{t}\sim\mathrm{Pr}(\mathrm{\tilde{Z}}_{t}|\mathrm{\tilde{S }}_{t})\). As a result, the agent works with beliefs about the hidden state \(\mathrm{\tilde{S}}_{t}\). Under the active inference framework, the relationship between the UAV and its environment can be described as a 6-element tuple (\(\mathrm{\tilde{S}}_{t}\), \(\mathrm{\tilde{X}}_{t}\), \(\mathbf{\mathcal{A}}\), \(\mathbf{T}_{\mathbf{\mathcal{T}}}^{\mathbf{u}}\), \(\mathbf{\Pi}_{\mathbf{\mathcal{A}}}^{\mathbf{\alpha}}\), \(\mathbf{\tilde{Z}}_{t}\)), where \(\mathrm{\tilde{S}}_{\mathbf{t}}\) and \(\mathrm{\tilde{X}}_{\mathbf{t}}\) are sets of the environmental hidden states that include noise, PUs and/or SUs. \(\mathbf{\mathcal{A}}=\{\mathcal{A}^{[\tilde{\mathcal{I}}]},\mathcal{A}^{[\tilde{ \mathcal{P}}]}\}\) is the action space containing all the possible sub-channel decisions and initial power allocation values. \(\mathbf{T}_{\mathbf{\mathcal{T}}}^{\mathbf{pu}}\) is the time-varying transition model for PUs. \(\mathbf{\Pi}_{\mathbf{\mathcal{T}}}^{\mathbf{\alpha}}\) is the Active Inference-table that encodes the state-action pair and \(\mathbf{\tilde{Z}}_{t}\) is the set of \(K\) sensory signals. #### Iii-1 Radio Environment Representation The UAV can observe \(K\) sensory signals expressed as: \(\mathrm{\tilde{Z}}_{t}\)=\(\{\mathrm{\tilde{Z}}_{t,1},\mathrm{\tilde{Z}}_{t,2},\dots,\mathrm{\tilde{Z}}_{t,K}\}\), which correspond to \(K\) sub-channels. In addition, we describe the radio environment using a generalized hierarchical state-space model, which includes the following components: \[\mathrm{\tilde{S}}_{t,k}^{(e)}=\mathrm{f}(\mathrm{\tilde{S}}_{t-1,k}^{(e)})+ \mathrm{w}_{t,k}, \tag{5}\] \[\mathrm{\tilde{X}}_{t,k}^{(e)}=\mathrm{C\tilde{X}}_{t-1,k}^{(e)}+\mathrm{DU}_{ \mathrm{\tilde{S}}_{t,k}^{(e)}}+\mathrm{w}_{t,k}, \tag{6}\] \[\mathrm{\tilde{Z}}_{t,k}=\mathrm{H}\big{(}\mathrm{\tilde{X}}_{t,k}^{(1)}+\dots+ \mathrm{\tilde{X}}_{t,k}^{(M)}+\mathrm{\tilde{X}}_{t,k}^{(pu)}\big{)}+\mathrm{v}_ {t,k}. \tag{7}\] In (5), the discrete random variables describing the discrete state clusters of the physical signal, the sub-channel carrying the signal and its power level are denoted by \(\mathrm{\tilde{S}}_{t,k}^{(e)}\). Also, \(\mathrm{f}(.)\) is a non-linear function that expresses how \(\mathrm{\tilde{S}}_{t,k}^{(e)}\) evolve over time as a function of \(\mathrm{\tilde{S}}_{t-1,k}^{(e)}\) and \(\mathrm{w}_{t,k}\) is the process noise, such that \(\mathrm{w}_{t,k}\)\(\sim\)\(\mathcal{N}(0,\Sigma_{\mathrm{w}_{t,k}})\). The dynamic equation defined in (6) expresses how the Generalized States (GS) \(\mathrm{\tilde{X}}_{t,k}^{(e)}\) evolve over time as a function of \(\mathrm{\tilde{X}}_{t-1,k}^{(e)}\) and \(\mathrm{\tilde{S}}_{t,k}^{(e)}\) where \(e\in\{no,pu,c\}\), \(no\), \(pu\), and \(c\) stands for noise, PU and the \(M\) superimposed signals, respectively. \(\mathrm{C}\) and \(\mathrm{D}\) represent the dynamic and control matrices, respectively, and \(\mathrm{U}_{\mathrm{\tilde{S}}_{t,k}^{(e)}}\) is the control vector. The observation model in (7) describes dependence of the sensory signals on the hidden GS. The hierarchical dynamic models formulated in terms of stochastic processes in (5), (6), and (7) are structured in a graphical GDBN as depicted in Fig. 2. The procedure includes an offline phase (i.e., the UAV's perception of desired observation), and the UAV is equipped with a hierarchical GDBN at discrete and continuous states to learn a generative model of the network, as depicted in Fig. 2(a). Due to the Markov separation between the UAV and the external world, the UAV learns an optimal policy of discrete sub-channels and continuous power by taking into account network conditions and user position. Fig. 2(b) denotes the online active inference phase, where the UAV performs joint actions (i.e., dynamically selects continuous power \(A_{t-1}^{[p]}\) and discrete sub-channels \(A_{t-1}^{[t]}\)) to reach the desired observation. #### Iii-A2 Perceptual Learning of Preferred Observations At the beginning of the learning stage, the UAV is equipped with an initial model similar to the Unmotivated Kalman Filter (UKF) which assumes that the environmental states evolve along with static rules and relies on (6) to predict the continuous environmental states where U\({}_{\widetilde{\mathrm{S}}_{t,k}^{(e)}}\)=0 [16]. The UAV's memory produces initial errors known as generalized errors (GEs) [25]. The GEs are further used to learn new models incrementally. We used the Growing Neural Gas (GNG) unsupervised clustering algorithm to learn the GDBN model that receives the GEs and generates discrete state clusters. Similarly, the time-varying transition matrix \(\Pi_{k,\tau}\) is learned by estimating the transition probability \(\mathrm{Pr}(\tilde{\mathrm{S}}_{t,k}^{(e)}|\tilde{\mathrm{S}}_{t-1,k}^{(e)},\tau)\). The UAV repeats the previous learning procedure to learn distinct vocabularies that represent the various entities, such as, noise, PU, SU, and the combined signals generated from multiple SUs. #### Iii-A3 Active Inference Phase The UAV's decision-making depends on the state-action pair encoded in \(\mathbf{\Pi}_{\mathbf{\tau}}^{[\mathbf{f}]}\), a time-varying matrix encoding the probabilistic dependencies between states and discrete actions, and \(\mathbf{\Pi}_{\mathbf{\tau}}^{[\mathbf{p}]}\), a time-varying matrix encoding the probabilistic dependencies between states and continuous actions. Action selection processInitially, during the first iteration, the UAV performs random sampling to select the discrete actions as every possible discrete action has the same probability (\(\frac{1}{K}\)) of being chosen and selects the initial continuous action \(A_{t-1}^{[p]}=A_{0}^{[p]}\) for power allocation. The selected actions indicate what will be the next discrete and continuous environmental states \(\widetilde{\mathrm{S}}_{t,k}\), \(\widetilde{\mathrm{X}}_{t,k}\) which are characterized by \(\mathrm{Pr}(\widetilde{\mathrm{S}}_{t,k}|\widetilde{\mathrm{S}}_{t-1,k},A_{t-1 }^{[t]})\) and \(\mathrm{Pr}(\widetilde{\mathrm{X}}_{t,k}|\widetilde{\mathrm{X}}_{t-1,k},A_{t- 1}^{[p]})\). In the successive iterations, the UAV can adjust the action selection process by predicting implicitly the future activity of PUs according to \(\mathbf{T}_{\mathbf{\tau}}^{\mathbf{p}\mathbf{u}}\) and skipping the resources that are expected with high probability to be occupied by PUs. By utilizing a modified Markov Jump Particle Filter (M-MJPF) [26], the UAV is capable of predicting the outcomes of its actions. The M-MJPF utilizes a switching model, employing Particle Filtering (PF) for prediction and updating in the discrete state, and Kalman Filtering (KF) for prediction and updating in the continuous state. Through dynamic causal relationships, a top-down inference can be distinguished from a bottom-up inference. Additionally, the UAV observes and senses the unselected sub-channels to determine their state (occupied or vacant) and detect the activity of primary users (PUs) in the spectrum. This information is used to enhance future decision-making processes. Time-based inter-slice top-down predictive messages \(\pi(\widetilde{\mathrm{X}}_{t,k})\) and \(\pi(\widetilde{\mathrm{S}}_{t,k})\) is based on the information acquired in the dynamic model. The intra-slice bottom-up inference is built on the likelihood function and consists of backward propagated messages \(\lambda(\widetilde{\mathrm{X}}_{t,k})\) and \(\lambda(\widetilde{\mathrm{S}}_{t,k})\) towards the discrete level. The prediction at the continuous level depends on the discrete level. For each particle propagated at the discrete level, a KF is activated to predict the equivalent continuous level \(\widetilde{\mathrm{X}}_{t,k}\). PF propagates \(L\) particles equally weighted based on the proposal density encoded in transition matrix \(\Pi_{k}\). After receiving the new observation, diagnostic messages propagate in a bottom-up manner to update the belief in hidden variables at the different hierarchical levels (continuous and discrete states). Abnormality measurements and action evaluationThe continuous level abnormality indicator calculates the similarity between the two messages entering the node \(\widetilde{\mathrm{X}}_{t,k}\), namely, \(\pi(\widetilde{\mathrm{X}}_{t,k})\) and \(\lambda(\widetilde{\mathrm{X}}_{t,k})\) to understand how much the observation supports the predictions, according to: \[\small\mathbf{\Upsilon}_{\widetilde{\mathbf{X}}_{t,k}}=-\ln\left(\mathcal{B} \mathcal{C}\left(\pi(\widetilde{\mathrm{X}}_{t,k}),\lambda(\widetilde{\mathrm{ X}}_{t,k})\right)\right)=\int\sqrt{\pi(\widetilde{\mathrm{X}}_{t,k})\lambda( \widetilde{\mathrm{X}}_{t,k})}d\widetilde{\mathrm{X}}_{t,k}, \tag{8}\] where \(\mathcal{B}\mathcal{C}\) is the Bhattacharyya coefficient. The UAV can decide whether the allocated actions were good or wrong by comparing the multiple abnormalities. Updating of action selection processThe UAV receives sensory signals to perceive the radio environment and modifies the environment through its actions. It then infers the effects of the executed actions, both discrete and continuous, through observations. By selecting appropriate actions, the UAV can adapt its strategy and determine its future behavior by minimizing the GEs given by: \[\tilde{\mathcal{E}}_{A_{t-1}}=\left[A_{t-1},\tilde{\mathcal{E}}_{A_{t-1}} \right]=\left[A_{t-1},\lambda(A_{t-1})-\mathbf{\pi}(A_{t-1})\right]. \tag{9}\] Fig. 2: Graphical representations of the proposed method: (a) GDBN, (b) Active-GDBN: As depicted in sub-figure (b), the highest level of the hierarchy is the active states (\(A_{t-1}^{[1],[1]}\)), which indicate the joint actions of the UAV. Representing the joint sub-channel and power allocation variables using Active-GDBN enables us to describe the dynamic causal structure of the radio environment at discrete and continuous states, facilitated by constant message passing and belief updating. The blue arrows denote prior messages, while the red arrows represent future messages. In essence, the past and future states are constantly represented over time as new evidence becomes available. The joint actions ( \(A_{t-1}^{[t]}\) for discrete sub-channel selection) and ( \(A_{t-1}^{[p]}\) for continuous power allocation) affect the present states \(\widetilde{\mathrm{S}}_{t,k}\), \(\widetilde{\mathrm{X}}_{t,k}\) at time \(t\) on sub-channel \(k\) and defines the present observation \(\widetilde{\mathrm{Z}}_{t}\) which depends on the previous states \(\widetilde{\mathrm{S}}_{t-1,k}\), \(\widetilde{\mathrm{X}}_{t-1,k}\). ## IV Simulation Results and Discussion In this section, we assess the performance of our proposed Active-GDBN. We fix the radius of the cell \(R\) to 1000 meters. Within the cell, there is a PBS, and a UAV is positioned in the middle, serving \(N\) SUs. We assumed that three PUs are actively occupying three sub-channels and the other sub-channels are vacant. Table I summarizes the network parameters. Fig. 3 illustrates the convergence performance of the proposed algorithm where the sum rate values are plotted versus the number of episodes for different numbers of SUs (\(M\)) multiplexed per sub-channel. When \(M=1\), the problem reduces to orthogonal multiple access (OMA). As revealed, it takes within zero and fifty episodes to converge for all possible values of \(M\). Moreover, the sum rate increases with an increasing number of SUs. The proposed algorithm achieved a maximum number of \(5\) SUs multiplexed per sub-channel. Also, as we increased the number of SUs to \(7\), the proposed algorithm shows performance degradation. This is because, beyond this limit, the symbols of the superimposed SU signals begin to overlap, making accurate SIC and signal decoding impossible for the UAV. Thus, the difference between the superimposed constellation points \(\Delta_{y}\) is kept at a reasonable distance to avoid inter-symbol interference. Fig. 4 shows the cumulative abnormality results of the proposed algorithm, validating Fig. 3. We transform the sum rate maximization problem into abnormality minimization. As a result, Active-GDBN abnormality minimization is equivalent to Active-GDBN reward (i.e., cumulative sum rate) maximization. Fig. 5 reveals the cumulative abnormality value as a function of different GNG learning rates during offline training. As is evident, setting the learning rate to 0.01 results in the fewest episodes required to attain the minimum cumulative abnormality. Therefore, to achieve faster convergence, we set the learning rate to 0.01 for all simulation settings. To compare the performance of the proposed Active-GDBN, we adopt and modify the Q-learning algorithm from [28] and the successive convex approximation technique from [7]. As clearly indicated in Fig. 6, the proposed method surpasses the Q-learning scheme to reach a better and more stable sum rate in fewer episodes. This is because, in each time step, the UAV detects abnormalities, implying a mismatch between the preferred observations and the predictions due to the performed actions. As a result, the UAV exploits the errors in each episode to learn how to take better actions that minimize future abnormalities. Moreover, the proposed Active-GDBN performs dynamic continuous power allocation to SUs. Due to the strong influence of negative rewards on Q-learning, it requires more training episodes to achieve a significant improvement in the sum rate. The low sum rate performance of convex approximation is due to its inability to learn from experience and adapt its strategies to the time-varying radio environment. Fig. 4: Cumulative Anbornality of the proposed Active-GDBN with different numbers of multiplexed SUs when \(M\) = 5, \(P_{th}=1\), \(P_{max}=20\) Watts. Fig. 5: Cumulative abnormality of the proposed Active-GDBN with different GNG learning rates when \(M=5\), \(P_{th}=1\), \(P_{max}=20\) Watts. Fig. 3: Convergence of Active-GDBN with different numbers of multiplexed SUs when \(M\) = 5, \(P_{th}=1\), \(P_{max}=20\) Watts. Fig. 6: Cumulative sum rate comparison of the proposed Active-GDBN with benchmark schemes when \(M=5\), \(P_{th}=1\), \(P_{max}=20\) Watts. Fig. 7 indicates an example of the errors on the in-phase component of the predicted combined SUs' signal. The initial errors are high in the first episode (blue line) because the UAV's initial beliefs about the SUs' positions and channel conditions are not accurate. The red line shows that the errors have decreased significantly after 10 episodes, as the UAV has learned to adapt its resource allocation policies and belief updating. The errors decrease to a minimum after 30 episodes (green), as the UAV continues to learn and improve by exploiting the generalized prediction errors. ## V Conclusion In this study, we investigate the joint sub-channel and power allocation problem in an uplink UAV-assisted cognitive NOMA network. We propose an active inference-based algorithm, called Active-GDBN, to solve the sum rate maximization problem. Due to network dynamics and practical limitations on the number of users multiplexed per sub-channel, the problem is usually difficult to solve analytically. As a result, we use a generalized state space model to characterize the dynamic network environment and transform the problem into an abnormality minimization problem. After performing extensive simulations, the results reveal the effectiveness of our proposed algorithm over the benchmark schemes in terms of cumulative sum rate. Future research will examine the effect of the UAV's trajectory in relation to the mobility of SUs and high-order modulation schemes.
2309.03667
Exploring an LM to generate Prolog Predicates from Mathematics Questions
Recently, there has been a surge in interest in NLP driven by ChatGPT. ChatGPT, a transformer-based generative language model of substantial scale, exhibits versatility in performing various tasks based on natural language. Nevertheless, large language models often exhibit poor performance in solving mathematics questions that require reasoning. Prior research has demonstrated the effectiveness of chain-of-thought prompting in enhancing reasoning capabilities. Now, we aim to investigate whether fine-tuning a model for the generation of Prolog codes, a logic language, and subsequently passing these codes to a compiler can further improve accuracy. Consequently, we employ chain-of-thought to fine-tune LLaMA7B as a baseline model and develop other fine-tuned LLaMA7B models for the generation of Prolog code, Prolog code + chain-of-thought, and chain-of-thought + Prolog code, respectively. The results reveal that the Prolog generation model surpasses the baseline in performance, while the combination generation models do not yield significant improvements. The Prolog corpus based on GSM8K and the correspondingly finetuned Prolog generation model based on LLaMA7B are released to the research community.
Xiaocheng Yang, Yik-Cheung Tam
2023-09-07T12:10:47Z
http://arxiv.org/abs/2309.03667v2
# Exploring an LM to generate Prolog Predicates from Mathematics Questions ###### Abstract Recently, there has been a surge in interest in NLP driven by ChatGPT. ChatGPT, a transformer-based generative language model of substantial scale, exhibits versatility in performing various tasks based on natural language. Nevertheless, large language models often exhibit poor performance in solving mathematics questions that require reasoning. Prior research has demonstrated the effectiveness of chain-of-thought prompting in enhancing reasoning capabilities. Now, we aim to investigate whether fine-tuning a model for the generation of Prolog codes, a logic language, and subsequently passing these codes to a compiler can further improve accuracy. Consequently, we employ chain-of-thought to fine-tune LLaMA7B as a baseline model and develop other fine-tuned LLaMA7B models for the generation of Prolog code, Prolog code + chain-of-thought, and chain-of-thought + Prolog code, respectively. The results reveal that the Prolog generation model surpasses the baseline in performance, while the combination generation models do not yield significant improvements. The Prolog corpus1 based on GSM8K2 and the correspondingly finetuned Prolog generation model3 based on LLaMA7B4 are released to the research community. Footnote 1: [https://huggingface.co/datasets/Thomas-X-Yang/gamk-prolog](https://huggingface.co/datasets/Thomas-X-Yang/gamk-prolog) Footnote 2: [https://huggingface.co/datasets/gam8k](https://huggingface.co/datasets/gam8k) Footnote 3: [https://huggingface.co/Thomas-X-Yang/Llama-7b-gsm-prolog](https://huggingface.co/Thomas-X-Yang/Llama-7b-gsm-prolog) Footnote 4: [https://huggingface.co/decapoda-research/llama-7b-hf](https://huggingface.co/decapoda-research/llama-7b-hf) ## 1 Introduction Presently, there exists a notable surge in interest in Natural Language Processing (NLP) catalyzed by the advent of ChatGPT. ChatGPT, being a transformer-based generative language model, exhibits versatility in performing a wide range of tasks grounded in natural language. The remarkable achievement of the GPT model can be attributed, in significant part, to its utilization of an exceptionally extensive corpus and a vast parameter set for acquiring features from the corpus. Nevertheless, mere augmentation in the model's size falls short in addressing mathematical inquiries encompassing arithmetic, commonsense, and symbolic reasoning - topics that may appear deceptively simple to individuals [1]. One conceivable explanation for this issue is that generative models overly depend on their training corpus. Its proficiency in specific tasks stems from the presence of sentences closely, and sometimes explicitly, linked to those tasks within the corpus. The GPT model, in turn, assimilates these sentences and effectively memorizes the corresponding answers. However, mathematical problems pose a challenge as they can manifest in various contextual frames, articulated through diverse approaches, and involve distinct numerical values. Consequently, the model is prone to encountering ostensibly unfamiliar mathematical queries, resulting in suboptimal performance. Therefore, the significance of this project hinges upon the inadequacy of current capabilities in addressing this type of questions. We posit that to surmount this limitation, an NLP model should possess the capability to ingest natural language sentences and produce corresponding logical predicates. These predicates can then be processed by an external tool, distinct from a language model, to ultimately compute the desired result. In this context, we employ the Prolog language, known for its efficacy in performing such tasks, as the external logic tool. In essence, the role of the language model is restricted to semantic parsing and question comprehension, while the logical and computational tasks are delegated to a more precise tool. In this manner, the language model is relieved of the burden of memorizing every conceivable answer to questions, focusing instead on proficiently translating natural language into logic language. This shift in approach has the potential to significantly diminish the model's reliance on an excessively large corpus and enhance its performance in tackling such questions. Furthermore, a significant issue with neural networks is the limited space for human intervention. It poses a challenge for humans to comprehend the inner workings and exert control over a vast neural network. Through the adoption of this model paradigm, human involvement is facilitated through the control of an external tool responsible for executing logic language, thus augmenting the model's explainability. The paper's structure is as follows: The concept of chain-of-thought will be discussed in the Related Work section. Subsequently, the Approach section will provide a detailed, step-by-step account of project implementation. The Results section will showcase the performance of the fine-tuned models. Lastly, the Conclusion section presents drawn conclusions and outlines future research directions. ## 2 Related Work In prior research, chain-of-thought prompting has demonstrated its efficacy in enhancing the reasoning capabilities of large language models when compared to conventional prompts that only supply questions and answers [2]. The fundamental concept behind chain-of-thought is to elucidate the intermediate problem-solving steps to the model. Consequently, we are motivated to employ chain-of-thought in fine-tuning as a foundational benchmark for this project, with the aim of investigating whether fine-tuning for Prolog generation as the output surpasses fine-tuning for chain-of-thought as the output in terms of performance. ## 3 Approach Initially, this study leverages ChatGPT in conjunction with human correction to acquire Prolog code for each sample within the GSM8K dataset. It then employs the Chinese-Vicuna framework to fine-tune the LLaMA model across four data output configurations. Ultimately, the study assesses performance under these configurations to gauge the effectiveness of Prolog code generation for solving mathematical problems. ### Base Corpus The selection of GSM8K as the foundational corpus is based on its high relevance and quality. GSM8K comprises more than 8.5k elementary school-level math problems along with their solutions, articulated in natural language [3]. Each solution employs a chain-of-thought approach to address the question, presenting a step-by-step solution culminating in a final answer, which can already be directly used as one output style. ### Prolog Code Retrieval We formulate prompts to extract Prolog code from ChatGPT. We first manually compose 10 examples for integration into the few-shot prompts. Here, we present an illustrative example. Figure 1 depicts a question from GSM8K, while Figure 2 showcases the corresponding Prolog code designed to address it. In an effort to enhance the accuracy of retrieval results, the prompts incorporate natural language answers from GSM8K. Initially, the gpt-3.5-turbo-16k model is employed due to its cost-effectiveness. We process a pool of 100 samples, selecting 20 of them to construct the prompts, maximizing the utilization of the input token length. Utilizing the newly crafted prompts, we generate codes for all samples, retaining those codes that are both executable and yield accurate outcomes and regenerating codes for the rest. This iterative process continues until a bottleneck is encountered. To overcome this bottleneck, we inject randomness into the process by reconfiguring the prompt candidates. The revised prompts comprise two components: the fixed part and the random part. The fixed part retains 8 old candidates. We select the 64 longest correctly generated code pieces, as longer codes often encompass more intricate arithmetic operations and potentially contribute to improved correctness. For each sample, we randomly select five candidates from this group to constitute the random part. This generation process continues iteratively until the bottleneck is encountered once again. By this stage, the number of remaining questions has dwindled considerably, allowing us to employ gpt-4, which has the potential to further decrease the number of remaining questions to fewer than 100. Finally, the remaining pieces of Prolog code are finalized through manual completion, followed by a manual verification of both executability and correctness of all the codes. ### Finetuning Owing to VRAM limitations, we employ LoRAs to facilitate the fine-tuning of the LLaMA7B model within the Chinese-Vicuna framework5. While the inputs consist of mathematical problems, we explore four different output styles for fine-tuning: chain-of-thought, Prolog code, chain-of-thought + Prolog code, and Prolog code + chain-of-thought. This approach yields four fine-tuned 7B models. Throughout the fine-tuning process, the same configuration of hyperparameters is maintained to ensure a fair comparison of the performance of the four models. Footnote 5: [https://github.com/Facico/Chinese-Vicuna](https://github.com/Facico/Chinese-Vicuna) ## 4 Results All four fine-tuned models undergo performance testing using the identical test set. Beam search is chosen as the generation strategy due to its superior performance in comparison to random sampling. Table 1 and Table 2 present the accuracy results for chain-of-thought and Prolog code, respectively. In the case of a chain-of-thought result, a syntax error is identified when the parser fails to retrieve an answer, whereas a semantic error occurs when the retrieved answer is incorrect. In the case of a Prolog code result, a syntax error indicates non-executability of the code, whereas a semantic error signifies that the executable code produces an incorrect answer. Figure 1: One example of a question in GSM8K Figure 2: One example of a piece of Prolog code ### Finetuned on Chain-of-Thought This model results from the fine-tuning of LLaMA7B directly using the natural language answers from GSM8K. Earlier research revealed that LLaMA7B, prior to fine-tuning with the math corpus, achieved an accuracy of 11.0% on GSM8K's test set [4]. Following fine-tuning, LLaMA7B exhibits an accuracy of 25.1%, surpassing its previous performance. The increment in performance can be attributed to both fine-tuning and the adoption of the beam search generation strategy. This suggests that introducing chain-of-thought samples during the fine-tuning phase can positively impact the model's performance. We consider this performance as the baseline. ### Finetuned on Prolog Code Generation This model undergoes fine-tuning to produce Prolog codes for mathematical questions. The generated Prolog codes are subsequently forwarded to a Prolog compiler for correctness verification. Remarkably, this fine-tuned Prolog generation model attains an impressive accuracy of 30.9%, a substantial improvement over the baseline. This suggests that entrusting the logical and computational aspects to an external tool and relegating the model's role to that of a translational device can effectively enhance its performance in solving math problems that necessitate logical and computational inference. Nonetheless, it has come to our attention that certain outputs categorized as having syntax errors do not necessarily entail critical errors involving ambiguity or semantic flaws. These outputs incorporate operations that the compiler does not support, rendering the codes non-executable. Such issues can be rectified by expanding the compiler's capabilities to encompass a broader range of operations. Therefore, it may be overly stringent to classify these outputs as incorrect. An illustrative example is provided in Figure 3, where the error arises due to the problem of integer solutions to an inequality, which the compiler cannot handle accurately. Following manual review, approximately 1.5% of the samples are reclassified as correct when the criteria are relaxed, considering the type of samples mentioned earlier as correct. As illustrated in Table 3, the accuracy now stands at 32.4%. While the improvement may not be substantial, it underscores the significance of the reliability of the external tool. ### Finetuned on Chain-of-Thought + Prolog Code This finetuned model generates a chain-of-thought solution followed by a piece of Prolog code. This experiment is motivated by the fact that transformer models utilize the current sequence to generate sub \begin{table} \begin{tabular}{l l l l} \hline Finetuning Data & Accuracy & Syntax Error & Semantic Error \\ \hline GSM & 25.1\% & 2.5\% & 72.4\% \\ \hline GSM Prolog & & & \\ \hline GSM Prolog (COT+Code) & 16.1\% & 47.8\% & 36.0\% \\ \hline GSM Prolog (Code+COT) & 26.2\% & 3.2\% & 70.7\% \\ \hline \end{tabular} \end{table} Table 1: The chain-of-thought part performance of four finetuned models \begin{table} \begin{tabular}{l l l l} \hline Finetuning Data & Accuracy & Syntax Error & Semantic Error \\ \hline GSM & & & \\ \hline GSM Prolog & 30.9\% & 20.6\% & 48.4\% \\ \hline GSM Prolog (COT+Code) & 17.7\% & 59.1\% & 23.2\% \\ \hline GSM Prolog (Code+COT) & 30.1\% & 24.9\% & 45.0\% \\ \hline \end{tabular} \end{table} Table 2: The Prolog code part performance of four finetuned models sequent tokens, implying that the content generated initially can influence subsequent content generation. Our objective is to assess whether generating chain-of-thought solutions first can improve the accuracy of the generated codes. The experiment's outcome reveals that this combination not only diminishes the performance of the chain-of-thought but also lowers the code accuracy to as low as 17.7%, a level even below the baseline. One contributing factor is that this output combination contaminates the data, making it challenging for the model to discern the relationships between tokens during the fine-tuning phase. Another potential explanation is that chain-of-thought may not inherently enhance the quality of code generation during the inference stage. ### Finetuned on Prolog Code + Chain-of-Thought The motivation of this experiment aligns with that of the combination experiment in the previous section. Our objective is to investigate the impact of Prolog codes on chain-of-thought generation. Interestingly, when Prolog code generation is not influenced by chain-of-thought this time, its quality, achieving an accuracy of 30.1%, closely approximates that of solely generating Prolog codes. Furthermore, code generation appears to exert a marginal, positive influence on chain-of-thought generation that follows. As a result, the accuracy of chain-of-thought rises to 26.2%. This observation suggests that generations characterized by a clear and easily comprehensible structure may aid the model in extracting information that benefits subsequent generated content. ## 5 Conclusion In this study, we utilized ChatGPT to generate Prolog codes for the GSM8K corpus, employing four distinct output settings and subsequently fine-tuning four LLaMA7B models accordingly. Through a comparative analysis of accuracies on the test set, the following conclusions can be drawn. Firstly, fine-tuning undeniably enhances performance in the mathematics question domain. Secondly, the process of generating Prolog codes and subsequently sending them to an external compiler yields superior results compared to chain-of-thought generation. Leaving the logical and computational aspects to an external tool and reducing the model to a translational device can effectively enhance its performance in solving math problems. This approach has the potential to be applied to other domains that involve logical and com Figure 3: One example of a piece of Prolog code with syntax errors \begin{table} \begin{tabular}{l l l} Finetuning Data & Accuracy & Syntax Error & Semantic Error \\ \hline GSM Prolog & 30.9\% & 20.6\% & 48.4\% \\ \hline GSM Prolog (Revised) & 32.4\% & 19.1\% & 48.4\% \\ \hline \end{tabular} \end{table} Table 3: The revised Prolog code part performance of the Prolog model putational reasoning. Thirdly, generating combinations of chain-of-thought and Prolog code does not result in a significant further enhancement of inference performance. So far, the finetuned LLaMA7B for Prolog generation continues to exhibit a 19.1% syntax error rate after revision and a 20.6% before revision. Therefore, future efforts can be directed towards reducing syntax errors and expanding the capabilities of the external tool to support additional operations. Furthermore, given the remarkably high semantic error rate of 48.4%, there is a pressing need for models with enhanced question comprehension capabilities to address this bottleneck. ## 6 Acknowledgement Special thanks are given to Professor Yik-Cheung Tam for mentoring this project, and to NYU Shanghai for providing the platform to support undergraduate research.
2309.16524
HOI4ABOT: Human-Object Interaction Anticipation for Human Intention Reading Collaborative roBOTs
Robots are becoming increasingly integrated into our lives, assisting us in various tasks. To ensure effective collaboration between humans and robots, it is essential that they understand our intentions and anticipate our actions. In this paper, we propose a Human-Object Interaction (HOI) anticipation framework for collaborative robots. We propose an efficient and robust transformer-based model to detect and anticipate HOIs from videos. This enhanced anticipation empowers robots to proactively assist humans, resulting in more efficient and intuitive collaborations. Our model outperforms state-of-the-art results in HOI detection and anticipation in VidHOI dataset with an increase of 1.76% and 1.04% in mAP respectively while being 15.4 times faster. We showcase the effectiveness of our approach through experimental results in a real robot, demonstrating that the robot's ability to anticipate HOIs is key for better Human-Robot Interaction. More information can be found on our project webpage: https://evm7.github.io/HOI4ABOT_page/
Esteve Valls Mascaro, Daniel Sliwowski, Dongheui Lee
2023-09-28T15:34:49Z
http://arxiv.org/abs/2309.16524v2
# HOI4ABOT: Human-Object Interaction Anticipation for Human Intention Reading Collaborative roBOTs ###### Abstract Robots are becoming increasingly integrated into our lives, assisting us in various tasks. To ensure effective collaboration between humans and robots, it is essential that they understand our intentions and anticipate our actions. In this paper, we propose a Human-Object Interaction (HOI) anticipation framework for collaborative robots. We propose an efficient and robust transformer-based model to detect and anticipate HOIs from videos. This enhanced anticipation empowers robots to proactively assist humans, resulting in more efficient and intuitive collaborations. Our model outperforms state-of-the-art results in HOI detection and anticipation in VidHOI dataset with an increase of 1.76% and 1.04% in mAP respectively while being 15.4 times faster. We showcase the effectiveness of our approach through experimental results in a real robot, demonstrating that the robot's ability to anticipate HOIs is key for better Human-Robot Interaction. Keywords:Human-Object Interaction, Collaborative Robots, Human Intention ## 1 Introduction In recent years, the field of robotics has witnessed significant interest in human-robot interaction (HRI), with a focus on enhancing the ability of robots to assist humans in various tasks [1, 2, 3, 4]. To facilitate effective human-robot collaboration (HRC), it is crucial for the robot to possess an understanding of both the surrounding environment and the individuals within it, including their Figure 1: **Overview of our HOI4ABOT framework. A robot leverages RGB data to detect and anticipate the human-object interactions in its surroundings and assist the human in a timely manner. The robot anticipates the human intention of holding the cup, so it prepares itself for pouring by grabbing the bottle. The robot reacts to the human holding the cup by pouring water.** intentions. For example, consider the scenario visualized in Fig. 1 where a robot assists a person in the kitchen. By recognizing the person's intention to prepare a drink and understanding their actions such as reaching for the cup, the robot can proactively provide the necessary support in a timely manner, such as picking up a bottle and pouring water. Therefore, by recognizing and anticipating human-object interactions (HOIs), the robot gets a solid understanding of the person's intention and better caters to their needs [1]. While HOI is a long-standing challenge in the computer vision community, most approaches only consider the detection of these interactions from single frames [5; 6; 7; 8; 9; 10]. However, to minimize the latency when a person is assisted by a robot, the detection is not enough, but the anticipation is needed [11; 12; 13]. Therefore, we consider the task of HOI detection and anticipation, and we propose to leverage temporal cues from videos to better understand human intention. HOI recognition in videos has been explored recently [14; 15; 16; 17]. In this paper, we propose a real-time deep learning architecture that combines pre-trained models with spatio-temporal consistency to successfully detect and anticipate HOIs. Our model outperforms the state-of-the-art in VidHOI dataset [14] in terms of accuracy and speed. Moreover, we ensemble our framework with behavior trees [18] to adapt in real-time the robot actions for better interaction with the human. We implement our framework in a real robot and demonstrate the effectiveness of our approach in the pouring task, showcasing the robot's ability to anticipate HOIs and proactively assist the human while reducing latency in the execution. The contributions of our paper are summarized next: * A real-time transformer-based model for HOI detection and anticipation. * A novel patch merging strategy to align image features to pre-extracted bounding boxes. * To the best of our knowledge, we are the first to assess HOI anticipation in a real robot experiment for a collaborative task. ## 2 Related Works ### Human Intention in Robotics Recognizing and predicting human intention is crucial to ensure seamless human-robot collaboration (HRC) [12; 13; 19; 20]. [12] observed significant differences in the robot's contribution and commitment in an experiment of a human carrying car parts to a shared workspace with an anticipatory robot to assemble them. Recent works in computer vision have highlighted the potential of harnessing human intention to better anticipate future human actions [21; 22; 23]. In particular, [23] leverages the detection of human-object interactions (HOIs) within a scene to understand this high-level intention of the individuals. Despite the benefits of using HOIs, their application in robotics from vision data has not been extensively explored [1]. [4] proposes a conditional random field (CRF) to assess the feasibility of a robot executing a given task based on the anticipated human actions. The CRF predicts the next human actions by considering object affordances and positions in the future. However, [4] is not scalable to new tasks as the CRF relies on hand-crafted features. Instead, we train our model in the largest HOI video dataset available to learn robust features that enhance the robot's ability to anticipate human intention. Recently, [24] proposed a spatial-attention network to extract scene graphs from images in an industrial scenario. However, [24] neglects the time dependency in the task and does not anticipate the human intention to enhance HRC. [25; 26; 27] also adopted scene graphs but focused on task planning. ### HOI Detection and Anticipation HOI focuses on localizing the humans and objects in a scene and classifying their interactions using a \(\langle\)human, interaction, object\(\rangle\) triplet (e.g. \(\langle\)person1, hold, cup\(\rangle\)). HOI task has recently gained attention in the computer vision community due to its promising applications in downstream tasks, such as scene understanding [28] or action recognition [29]. The primary focus is the detection of HOI from images [5; 6; 7; 8; 9; 10]. Some [7; 8; 9] adopt a one-stage approach, directly operating on the images to predict the HOI triplet. However, these methods require higher training resources and do not benefit from pre-trained object detections. On the contrary, [5; 6; 10] employ a two-stage method to first locate the objects and humans in the image using pre-trained models and then classify each interaction using multi-stream classifiers. In particular, [10] uses a ViT transformer [30] to extract the patched features and proposes Masking with Overlapped Area (MOA) to extract features per object or human through a self-attention layer. Our work shows that weighting the patched features is sufficient to outperform MOA while not requiring any additional parameters. While processing individual frames may be adequate for HOI detection, we argue that HOI anticipation benefits from leveraging the temporal aspects inherent in these interactions. Several studies in HOI detection address this temporal dimension by focusing on videos [14; 15; 16; 17]. [16] fuses patched features at multiple levels to generate instance representations utilizing a deformable tokenizer. [14] employs a two-stage model that uses 3D convolutions to merge features across the temporal dimension. [15] also adopts a two-stage approach but relies on a spatio-temporal transformer [31] to detect the interactions in videos. Finally, [17] extends the architecture from [15] by concatenating the human and object temporal features and fusing them with the human gaze information using cross-attention. [17] is the first work to propose both HOI detection and anticipation in videos. Similarly to [10], [17] also adopts focal loss [32] to tackle the HOI imbalance in training. We adopt the findings from [17] but observe their model to not be feasible to work in real-time. Moreover, [17] trains a unique model for each anticipation horizon in the future. Instead, we propose a novel real-time multi-head model that can detect and anticipate HOIs in a single step. ### Task and Motion Planning For a robot to effectively assist and collaborate with a human in a particular task, it needs to understand the structure and order of actions involved, enabling the robot to achieve desired goals [33]. Finite State Machines (FSM) have been the standard choice for representing the task structure for a long time [34; 35]. However, scaling FSM poses a challenge due to their lack of modularity and flexibility [18]. Recently, Behavior Trees (BT) [18] have gained popularity as they can facilitate task planning in HRC tasks [36; 37], where the environment is dynamic. Our work adopts BT and defines its behavior based on the anticipated human intention and its uncertainty. Once a suitable chain of actions has been found by the task planner, motion planning is responsible for determining the low-level movements of the robot. Motion planning is a core problem in robotics [38; 39; 40; 41; 42]. [38; 39] proposed to randomly sample points in the state space towards the goal. However, they consider humans as obstacles or constraints, not collaborators. Some approaches [40; 41] formulate motion planning as an optimization problem, but their applications in HRC are limited as determining the cost function related to humans is not trivial. Alternatively, motion generators can be learned from human demonstrations to obtain more natural movement [42; 43]. Dynamic Movement Primitives (DMPs) [42] have been successfully employed in HRC, by dynamically adapting their parameters [44; 45; 46]. ## 3 Methodology In this section, we present our **H**uman-**O**bject **I**nteraction Anticipation for Coll**A**borative ro**BOT**s (**HOI4ABOT**) framework. First, we formulate the HOI detection and anticipation task. Then, we describe the integration of the deep learning architecture into the robot framework. ### Human-Object Interaction Let \(\mathbf{V}=[\mathbf{f}_{-T},\cdots,\mathbf{f}_{0}]\) be a frame sequence of duration \(T+1\). The goal is to predict the interaction class \(i_{k}^{\tau}\) in the subsequent time \(\tau\) between any human \(\mathbb{H}_{n}\) and object \(\mathbb{O}_{m}\) pair \(\mathbb{P}_{k}=\{\mathbb{H}_{n},\mathbb{O}_{m}\}\) observed during the video \(\mathbf{V}\), where \(0\leq n\leq N,0\leq m\leq M,0\leq k\leq K=M*N\). A visual illustration of our HOI4ABOT architecture is depicted in Fig. 2. **Detection and tracking**. HOI4ABOT is a two-stage method. First, we leverage off-the-shelf state-of-the-art object detection and tracking methods to identify the bounding boxes \(\mathbf{B}_{m}\in\mathbb{R}^{(T+1)\times 4}\), label \(c_{m}\), and track identifier \(id_{m}\) for any object \(\mathbb{O}_{m}=\{id_{m},c_{m},\mathbf{B}_{m}\}\) in the video \(\mathbf{V}\). \(\mathbf{B}_{m}=[\mathbf{b}_{m}^{-T},\cdots,\mathbf{b}_{m}^{0}]\) represents a list of \(XY\) pixel coordinates of the top-left corner and right-bottom corner of the bounding box that locates a given object \(\mathbb{O}_{m}\) at each frame \(\mathbf{f}_{\mathbf{f}}\) of \(\mathbf{V}\). We obtain the same information for each human \(\mathbb{H}_{n}\). In the second stage, we exploit each individual pair \(\mathbb{P}_{k}=\{\mathbb{H}_{n},\mathbb{O}_{m}\}\) to predict its interaction class \(i_{k}^{\tau}\) in a given time horizon \(\tau\) using various data modalities. This requires understanding the visual features of the pair, how their spatial relationship evolves through time \(\mathbf{B}_{k}=[\mathbf{B}_{n},\mathbf{B}_{m}]\) and also the intrinsic semantics of the object \(c_{m}\). **Visual features**. We use Dinov2 [47] as a pre-trained Visual Transformer (ViT) [30] backbone to divide each frame \(\mathbf{f}_{t}\) into \(L\times L\) patches and project each patch \(\mathbf{p}_{l}^{t}\) to a visual token \(\mathbf{e}_{l}^{t}\) that encodes the image information of that patch \(l\). In total, the image encoder obtains \(\mathbf{E}^{t}\in\mathbb{R}^{L^{2}\times d}\) that captures the local visual features, plus the global context vector \(\mathbf{cls}_{t}\in\mathbb{R}^{d}\) of a frame \(\mathbf{f}_{t}\). We develop a simple but efficient technique, called Patch Merger, to extract individual features per human and object from a frame through a single step. Let \(\mathbb{O}_{m}^{t}\) be an object \(m\) with its box \(\mathbf{b}_{m}^{t}\) at frame \(\mathbf{f}_{t}\). First, we create a binary mask for \(\mathbf{f}_{t}\), where \(1\) denotes a pixel laying within \(\mathbf{b}_{m}^{t}\). We convert the binary mask in a sequence of patches following [30]. Then, we obtain a weighting vector \(\mathbf{\omega}_{m}^{t}\) by computing the percentage that \(\mathbf{b}_{m}^{t}\) overlaps each patch using 2D Average Pooling and normalization. Finally, we compute the weighted sum of local visual features \(\mathbf{e}_{m}^{t}=\sum\mathbf{\omega}_{m}^{t}\mathbf{E}^{t}\), obtaining the individual representation of \(\mathbb{O}_{m}^{t}\). Compared to [10], which normalizes along the patch dimension and uses a quantized sequence as the attention mask for a self-attention layer, our algorithm is parameter-free, more efficient, and shows better performance in our experiments. We propose to capture the context within a frame using \(\mathbf{cls}_{t}\in\mathbb{R}^{d}\), contrary to the spatial transformer proposed in [17]. We claim that this context (e.g. a kitchen, an office) should be invariant in short time periods and be the dominant component among all \(\mathbf{cls}_{t}\) tokens. Consequently, we use Average Pooling to reduce the N \(\mathbf{cls}_{t}\) features to a single representation \(\widehat{\mathbf{cls}}=AvgPool([\mathbf{cls}_{-T},\cdots,\mathbf{cls}_{0}])\), which is the context of the scene. Figure 2: **HOI4ABOT architecture overview.** We consider a video of \(T+1\) frames with the pre-extracted object and human bounding boxes \(\mathbf{B}^{t}\). Our module initially extracts relevant features per frame (left) to later on detect and anticipate HOIs (right) later. First, a ViT backbone [47] extracts patch-based local \(\mathbf{E}^{t}\) and global \(\mathbf{cls}_{t}\) features per each frame \(t\). Then, we obtain features per human \(\mathbf{e}_{n}^{t}\) and object \(\mathbf{e}_{m}^{t}\) by aligning \(\mathbf{E}^{t}\) to their bounding boxes, as shown in light blue. We also project each \(\mathbf{B}^{t}\) to \(\hat{\mathbf{B}}^{t}\) using a box embedder [48], and the object category to \(\mathrm{s}_{\mathrm{m}}\) using CLIP [49]. Our Dual Transformer, shown in purple, leverages the human and object-constructed windows (sequences in red and blue respectively) through two cross-attention transformers, where Key, Query, and Value are used in the attention mechanism. \(\mathrm{q}\) is a learnable parameter to learn the evolution of the location in time. Finally, we project the enhanced last feature from the Human Blender to detect and anticipate HOIs at several time horizons \(i_{k}^{\tau}\) in the future through our _Hydra_ head (shown in light green). **Spatial features**. For each bounding box \(\mathbf{b}_{m}^{t}\), we extract the \(XY\) normalized pixel coordinates for the top-left corner and right-bottom corner. Then, we adopt a positional encoding using random spatial frequencies [48] to embed the location of each point and merge these two corner representations into one box representation \(\hat{\mathbf{b}}_{m}^{t}\in\mathbb{R}^{d}\) using a fully connected layer. This process is also applied to humans, thus obtaining \(\hat{\mathbf{b}}_{n}^{t}\in\mathbb{R}^{d}\) to encode each human \(\mathbb{H}_{n}^{t}\) position in the scene. **Object semantics**. Leveraging the object semantics is essential to understanding the possible interactions in a given pair. While 'holding a cup' or 'holding a bottle' are both feasible, 'holding a car' becomes more unrealistic. Thus, we extract object semantic information \(\mathbf{s}_{m}\in\mathbb{R}^{d}\) per object \(\mathbb{O}_{m}\) to facilitate the model predicts the intention class \(i_{k}^{\tau}\). For that, we use the CLIP text encoder [49]. **Pair Interaction.** We construct a temporal architecture that leverages the evolution of the interactions between a human \(\mathbb{H}_{n}\) and an object \(\mathbb{O}_{m}\) in time. We process each pair independently, and therefore we focus on a single pair in the formulation. We stack both the visual tokens \(\mathbf{E}_{n}=[\mathbf{e}_{n}^{-T},\cdots,\mathbf{e}_{n}^{0}]\) and the spatial features \(\hat{\mathbf{B}}_{n}=[\hat{\mathbf{b}}_{n}^{-T},\cdots,\hat{\mathbf{b}}_{n}^{0}]\) in time and construct a human temporal window \(\mathbf{W}_{\mathbf{H}_{n}}=[\hat{\mathbf{B}}_{n},\mathbf{E}_{n}]\). Similarly, we also construct an object's temporal window \(\mathbf{W}_{\mathbf{O}_{m}}=[\hat{\mathbf{B}}_{m},\mathbf{E}_{m}]\). We add a sinusoidal positional encoding to \(\mathbf{W}_{\mathbf{H}_{n}}\) and \(\mathbf{W}_{\mathbf{O}_{m}}\), Later, we prepend the global visual feature and a learnable spatial parameter \([\mathbf{q},\widehat{\mathbf{cls}}]\) to \(\mathbf{W}_{\mathbf{H}_{n}}\). \(\mathbf{q}\) learns the evolution of the location of the human in time through the attention mechanism. We also extend \(\mathbf{W}_{\mathbf{O}_{m}}\) by prepending the semantic token \(\mathbf{s}_{m}\) that encodes the object type. Therefore, we obtain a temporal feature \(\mathbf{W}_{\mathbf{H}_{n}}\in\mathbb{R}^{(T+2)\times d}\) and \(\mathbf{W}_{\mathbf{O}_{m}}\in\mathbb{R}^{(T+2)\times d}\) per pair. To extract the HOI relationships between \(\mathbb{H}_{n}\) and \(\mathbb{O}_{m}\), we train a dual transformer with cross-attention layers. First, an Object Blender transformer enhances the object window \(\mathbf{W}_{\mathbf{O}_{m}}\) based on the human knowledge \(\mathbf{W}_{\mathbf{H}_{n}}\). Then, the blended object features \(\widehat{\mathbf{W}}_{\mathbf{O}_{m}}\) are used to extend the human representation \(\mathbf{W}_{\mathbf{H}_{n}}\) in the Human Blender transformer to \(\widehat{\mathbf{W}}_{\mathbf{H}_{n}}\). Finally, we extract the last token from \(\widehat{\mathbf{W}}_{\mathbf{H}_{n}}\), which encodes the most current status of the scene, and classify the interaction pair \(i_{k}^{\tau}\) using a fully connected layer. As a given human-object pair can have multiple interactions simultaneously, we use a sigmoid function and define a threshold to classify the current interactions. **Multi-head classification for multiple future horizons.** The goal is to predict the interaction class \(i_{k}^{\tau}\) in the subsequent time \(\tau\) between any human \(\mathbb{H}_{n}\) and object \(\mathbb{O}_{m}\) pair \(\mathbb{P}_{k}=\{\mathbb{H}_{n},\mathbb{O}_{m}\}\). We considered the problem of HOI detection (\(\tau=0\)) and also the anticipation in multiple future horizons (\(\tau>0\)). Contrary to [17] that proposes one trained model for each \(\tau\), we developed a single model that can predict multiple time horizon interactions. For that, we froze the HOI4ABOT trained in the detection task, and train an additional linear layer that projects the last token from \(\widehat{\mathbf{W}}_{\mathbf{H}_{n}}\) to the interaction for the particular \(\tau\). We call this shared backbone the _Hydra_ variant, which allows us to simultaneously predict interactions across multiple \(\tau\), making our model faster and more efficient. We consider our _Hydra_ variant with \(A\) number of heads. ### Motion generation and task planning **Motion Generation.** The proposed framework segments the complex movements into simpler movement primitives, which are learned with DMPs. To collect demonstrations of each movement primitive, we employ kinesthetic teaching, where an operator guides the robot's end effector by physically manipulating it [50]. Generating the motion requires estimating the goal position, which we obtain through the use of a calibrated vision system that relies on a pre-trained object detector (i.e. YOLOv8 [51]) and a depth camera. The position of the goals with respect to the robot base is computed using the intrinsic and extrinsic camera matrices. **Task planning.** Properly scheduling the acquired movement primitives is crucial to reach a desired goal. We implement Behavior Trees (BT) [18] as a ROS node that subscribes to the predicted HOIs and their confidence. The reactiveness of BTs allows adapting the robot's behavior by considering the anticipated human intention and changing to the appropriate sub-tree if needed. This is motivated by how humans interact with each other. For example, if a bartender observes a client approaching the bar, they can prepare for the interaction by grabbing a glass, thus reducing the serving time. **Robot control.** The generated poses from the motion generator are passed to the controller. In our system, we employ a Cartesian impedance controller [52; 53] to achieve the compliant behavior of the manipulator. This controller enhances the safety of human-robot collaboration by allowing the robot to respond in a compliant manner to external forces and disturbances. ## 4 Experiments ### Dataset and Metrics We train and evaluate our model on the VidHOI dataset [14], the largest dataset available for human-object interactions in videos. This dataset encompasses 7.3 million frames with 755,000 annotated interactions of one frame per second. To assess the performance of our approach, we adopted the same evaluation metrics as those presented in [17]. We computed the mean average precision (mAP) using the method presented in [54]. The mAP@50 incorporates the precision-recall curves for all interaction classes. To determine a correct HOI triplet, three conditions need to be met: (i) the detected bounding boxes for human and object must overlap with their corresponding ground truths with an Intersection over Union (IoU) of 50 %, (ii) the predicted object category is correct, (iii) the predicted interaction is correct. Following standard evaluation in VidHOI, we report mAP across three different HOI sets: (i) Full: all interaction categories, (ii) Non-Rare: frequent interactions in the validation set (more than 25 appearances), (iii) Rare: non-frequent interactions (less than 25). Additionally, we evaluated our approach in _Oracle mode_, where we use the human and object detections from ground truth, and in _Detection mode_, where those are predicted using YOLOv5 [55] as in [17]. Finally, we computed the Person-wise top-k metrics [17] where the anticipation was considered correct if one of the top-k predicted interactions matched the ground truth. ### Quantitative evaluation HOI4ABOT outperforms state-of-the-art models [14; 54; 16; 15; 17] in terms of accuracy and speed across all different tasks and scenarios, as shown in Table 1 and Table 2. Moreover, Table 2 shows how our _Hydra_ variant outperforms all models in the anticipation task, even training from scratch a separate model for each anticipation horizon. We consider that the detections provide a great deal of information regarding what a human is doing now, and what they might be interested in doing next. By using the _Hydra_ variant we ground the anticipation to what is happening at the present time. ### Ablation study This section analyses our proposed approaches and their impact on the performance of the HOI task. All results are depicted in Table 3. For simplification, we only consider the HOI detection task. \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & Full & Non-Rare & Rare \\ \hline \hline \multicolumn{4}{c}{**Oracle Mode**} \\ \hline ST-HOI [14] & 17.6 & 27.2 & 17.3 \\ QPIC [54] & 21.4 & 32.9 & 20.56 \\ TUTOR [16] & 26.92 & 37.12 & 23.49 \\ STTran [15] & 28.32 & 42.08 & 17.74 \\ ST-Gaze [17] & 38.61 & 52.44 & 27.99 \\ \hline Ours _(Dual)_ & 40.37 & **54.52** & 29.5 \\ Ours _(Stacked)_ & **40.55** & 53.94 & **30.26** \\ \hline \hline \multicolumn{4}{c}{**Detection Mode**} \\ \hline STTran [15] & 7.61 & 13.18 & 3.33 \\ ST-Gaze [17] & 10.4 & 16.83 & 5.46 \\ \hline Ours _(Dual)_ & **11.12** & **18.48** & **5.61** \\ Ours _(Stacked)_ & 10.79 & 17.79 & 5.42 \\ \hline \hline \end{tabular} \end{table} Table 1: Detection mAP. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\(\tau_{a}\)} & \multirow{2}{*}{mAP} & \multicolumn{4}{c}{Person-wise top-5} \\ \cline{3-6} & & & Rec & Prec & Acc & F1 \\ \hline \multirow{3}{*}{STTran [15]} & 1 & 29.09 & **74.76** & 41.36 & 36.61 & 50.48 \\ & 3 & 27.59 & **74.79** & 40.86 & 36.42 & 50.16 \\ & 5 & 27.32 & **75.65** & 41.18 & 36.92 & 50.66 \\ \hline \multirow{3}{*}{ST-Gaze [17]} & 1 & 37.59 & 72.17 & 59.98 & 51.65 & 62.78 \\ & 3 & 33.14 & 71.88 & 60.44 & 52.08 & 62.87 \\ & 5 & 32.75 & 71.25 & 59.09 & 51.14 & 61.92 \\ \hline \multirow{3}{*}{Ours _(Dual, Scratch)_} & 1 & **38.46** & 73.32 & 63.78 & 55.37 & 65.59 \\ & 3 & 34.58 & 73.61 & 61.7 & 54 & 64.48 \\ \cline{1-1} & 5 & **33.79** & 72.33 & 63.96 & 55.28 & 65.21 \\ \hline \multirow{3}{*}{Ours _(Dual, Hydra)_} & 1 & 37.77 & 74.07 & **64.9** & **56.38** & **66.53** \\ \cline{1-1} & 3 & **34.75** & 74.37 & **64.52** & **56.22** & **66.4** \\ \cline{1-1} & 5 & **34.07** & 73.67 & **65.1** & **56.31** & **66.4** \\ \hline \hline \end{tabular} \end{table} Table 2: Anticipation mAP in Oracle mode. Firstly we explore different variations in the extraction and arrangement of features to compose the human and object windows. We compare our Patch Merger strategy to the MOA strategy from [10]. Using MOA requires an additional self-attention block, which increases the model's parameters while under-performing. Moreover, we explore different feature aggregation strategies to classify an interaction. Instead of using the last observed token in \(\widehat{\mathbf{W}_{\mathbf{H}_{n}}}\) for classification, we prepend an additional learnable token to \(\mathbf{W_{H}}_{n}\) which aggregates the interaction relationships, inspired by the ViT class token [30]. However, Table 3 shows that classifying from the last observed features is better while not requiring additional parameters. Last, we consider varying the order of the cross-attention branches, first the Human Blender and second the Object Blender. We claim that the decrease in performance is due to the different behavior between humans and objects: objects are static and therefore less informative than humans, which are dynamic and lead the interaction. Secondly, we assess our dual transformer by comparing it with other variants. We consider the _Single_ variant when only using the Human Blender transformer, which is not able to effectively capture the HOIs. We also consider stacking both \(\mathbf{W_{H}}_{n}\in\mathbb{R}^{(T+2)\times d}\) and \(\mathbf{W_{O}}_{m}\in\mathbb{R}^{(T+2)\times d}\) to a single feature window pair, \(\mathbf{W}\mathbf{P}_{k}\in\mathbb{R}^{(T+2)\times 2d}\). We observe slight improvements in this variant in terms of mAP when detecting in the _Oracle mode_, but it underperforms in the _Detection mode_ and for the anticipation tasks, as shown in Appendix E. Finally, we compare the inference time of our model to [17] to assess the efficiency in real-world applications in robots. Our _Dual_ variant is \(15.4\) times faster than [17] for the detection task. [17] requires extracting gaze maps, which drastically slows down the inference speed of their model. When using our _Hydra_ model, we obtain interactions for the time horizons 0, 1, 3, and 5 using one forward pass, with nearly the same inference speed and parameters as using one head. More information can be found in Appendix D. ### Real World Experiments HOI detection and anticipation are essential for robots to comprehend the surrounding humans and better predict their needs, so the robot can assist in a timely manner. We conduct real experiments with a Franka Emika Panda robot to showcase the benefit of our approach in collaborative robots beyond the offline VidHOI dataset. The VidHOI dataset contains user-collected videos of humans, mostly performing outdoor activities that can not be easily related to robotic collaboration tasks. We consider the 'pouring task' in a kitchen scenario where the robot assumes the role of a bartender with the goal of pouring a beverage for the human. The scenario is shown in Fig. 1. To assess the performance of our model in unseen scenarios, we collected 20 videos of 5 people in our kitchen lab. The human is instructed to grab the cup and informed that the robot will assist them in the task. We manually annotate the time the person grabs the cup to use as ground truth. Our _Hydra_ variant detects and anticipates the HOI between a person and a cup in real-time. When the robot anticipates that the human will be near the cup, it proceeds to grab the bottle. However, if the human moves away the robot releases the bottle and returns to the initial pose. The robot proceeds to pour the liquid into the cup after detecting that the human is holding it. We assess our real-world experiments by considering well-established metrics in HRC [13]. [13] proposes to evaluate human-robot fluency in the joint task by considering four objective metrics. _Human Idle Time_ (H-IDLE) and _Robot Idle Time_ (R-IDLE) are proposed to evaluate the percentage of the total task time that the respective agent is not active, which reflects the team coordination and the inefficiency of the agent in the task. _Concurrent Activity_ (C-ACT) measures the percentage of total task time in which both agents are active concurrently (the action overlap between different members). A higher C-ACT indicates a better-synchronized team. _Functional Delay_ (F-DEL) measures the delay experienced by the agents immediately after completing an activity: the percentage \begin{table} \begin{tabular}{l c} \hline \hline Variant & mAP \\ \hline Feature blender = _MOA_ & 40 \\ Interaction token = _Learnable_ & 40.29 \\ Main branch = _Object_ & 39.85 \\ \hline Transformer type = _Single_ & 40.26 \\ Transformer type = _Stacked_ & **40.55** \\ _Dual_ & 40.37 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study in HOI detection. of total task time between the completion of one agent's action and the beginning of the other agent's action. A negative F-DEL indicates that actions are overlapping and implies an efficient use of team members' time. Figure 3 summarizes the average objective fluency metrics across our pouring experiments. The results indicate that HOI anticipation allows for better human-robot coordination and efficiency of each other's time, thus making the task more fluent. We observe a substantial improvement in Figure 3 when using anticipation (\(\tau_{a}>0\)) compared to detection (\(\tau_{a}=0\)). Additional quantitative and qualitative results are provided in Appendix B. ## 5 Limitations Despite outperforming state-of-the-art models in HOI from videos, we observe from qualitative experiments the challenge of the implementation in the real world. First, there is a domain gap between the VidHOI dataset, mainly representing humans in daily scenes and our robotic scenario. For instance, anticipating that 'a human is holding a cup' is challenging, despite being correctly detected. We explore the VidHOI dataset and observe that most people already appear with the cup in their hand. To overcome this issue, we sample more frequently clips where the interaction changes in the anticipation horizon. Still, this is insufficient to ensure correct anticipation in 'holding a cup' with higher confidence. Other datasets are not better suited for our problem as they mainly are image-based [5; 56] or do not track the humans and objects in videos [57]. Future research directions consider training with a dataset more coupled to our robotics scenario to improve the model predictions. This would allow us to extend our experiments to more complex daily scenarios. Second, in our real experiments, we assume that the objects present in the scene are sufficiently visible so that object detection can recognize them. Finally, the employed DMPs could be expanded or replaced by visual servoing to consider goal-following behaviors. ## 6 Conclusions In this paper, we proposed a **H**uman-**O**bject **I**nteraction **A**nticipation for CollAborative ro**BOT**s framework (**HOI4ABOT**). We consider the task of detecting and anticipating human-object interactions (HOI) in videos through a transformer architecture. We train and evaluate HOI4ABOT in the VidHOI dataset and outperform current state-of-the-art across all tasks and metrics while being \(15.4\times\) faster. Moreover, our model runs in real-time thanks to our efficient design. Additionally, we extend our HOI4ABOT model with a multi-head architecture, which can detect and anticipate HOIs across different future horizons in a single step. We demonstrate the effectiveness of our approach by implementing our model in a Franka Emika Panda robot. We show that anticipating HOIs in real-time is essential for a robot to better assist a human in a timely manner and we support our findings with real experiments. In conclusion, our approach demonstrates its effectiveness and defines a new road to explore, where intention reading plays a crucial role for robots in collaboration scenarios. Figure 3: Mean objective fluency metrics for pouring experiments for different confidence thresholds {0.3, 0.5, 0.7} in the HOIs prediction. #### Acknowledgments This work is funded by Marie Sklodowska-Curie Action Horizon 2020 (Grant agreement No. 955778) for the project 'Personalized Robotics as Service Oriented Applications' (PERSEO).
2309.03897
ProPainter: Improving Propagation and Transformer for Video Inpainting
Flow-based propagation and spatiotemporal Transformer are two mainstream mechanisms in video inpainting (VI). Despite the effectiveness of these components, they still suffer from some limitations that affect their performance. Previous propagation-based approaches are performed separately either in the image or feature domain. Global image propagation isolated from learning may cause spatial misalignment due to inaccurate optical flow. Moreover, memory or computational constraints limit the temporal range of feature propagation and video Transformer, preventing exploration of correspondence information from distant frames. To address these issues, we propose an improved framework, called ProPainter, which involves enhanced ProPagation and an efficient Transformer. Specifically, we introduce dual-domain propagation that combines the advantages of image and feature warping, exploiting global correspondences reliably. We also propose a mask-guided sparse video Transformer, which achieves high efficiency by discarding unnecessary and redundant tokens. With these components, ProPainter outperforms prior arts by a large margin of 1.46 dB in PSNR while maintaining appealing efficiency.
Shangchen Zhou, Chongyi Li, Kelvin C. K. Chan, Chen Change Loy
2023-09-07T17:57:29Z
http://arxiv.org/abs/2309.03897v1
# ProPainter: Improving Propagation and Transformer for Video Inpainting ###### Abstract Flow-based propagation and spatiotemporal Transformer are two mainstream mechanisms in video inpainting (VI). Despite the effectiveness of these components, they still suffer from some limitations that affect their performance. Previous propagation-based approaches are performed separately either in the image or feature domain. Global image propagation isolated from learning may cause spatial misalignment due to inaccurate optical flow. Moreover, memory or computational constraints limit the temporal range of feature propagation and video Transformer, preventing exploration of correspondence information from distant frames. To address these issues, we propose an improved framework, called **ProPainter**, which involves enhanced **ProP**agation and an efficient **T**ransformer**. Specifically, we introduce dual-domain propagation that combines the advantages of image and feature warping, exploiting global correspondences reliably. We also propose a mask-guided sparse video Transformer, which achieves high efficiency by discarding unnecessary and redundant tokens. With these components, ProPainter outperforms prior arts by a large margin of 1.46 dB in PSNR while maintaining appealing efficiency. ## 1 Introduction Video inpainting (VI) aims to fill gaps or missing regions in a video with visually consistent content while ensuring spatial and temporal coherence. This technique has broad applications, including video completion [10], object removal [9, 37], video restoration [31], watermark, and logo removal [19]. VI is challenging because it requires establishing accurate correspondence across distant frames for information aggregation. To address this challenge, various mechanisms have been explored, such as 3D CNN [6, 11], video internal learning [41, 27], flow-guided propagation [37, 10, 43, 42, 19], and video Transformer [22, 42, 19]. Among these mechanisms, flow-guided propagation and video Transformer have become mainstream choices for VI due to their promising performance. Propagation-based methods in VI can be divided into two categories: image propagation and feature propagation. The former employs bidirectional global propagation in the image domain with a pre-completed flow field. While this approach can fill the majority of holes in a corrupted video, it requires an additional image or video inpainting network after propagation to hallucinate the remaining missing regions. This isolated two-step process can result in unpleasant artifacts and texture misalignment due to inaccurate flow, as shown in Figure 1(f). To address this issue, a recent approach called E\({}^{2}\)FGVI [19] implements propagation in the feature domain, incorporating flow completion and content hallucination modules in an end-to-end framework. With the learnable warping module, the feature propagation module relieves the pressure of having inaccurate flow. However, E\({}^{2}\)FGVI employs a downsampled flow field to match the spatial size of the feature domain, limiting the precision of spatial warping and the efficacy of propagation, potentially resulting in blurry results. Moreover, feature propagation can only be performed within a short range of video sequences due to memory and computational constraints, hindering propagation from distant frames and leading to missing texture, as shown in Figure 1(g). Both image- and feature-based propagation have their pros and cons. In this study, we carefully revisit the VI problem and investigate the possibility of combining the strengths of both techniques. We demonstrate that with systematic redesigns and adaptation of best practices in the literature, we can achieve **dual-domain propagation**, as illustrated in Figure 1(a). To achieve reliable and efficient information propagation across a video, we identify several essential components: _i) Efficient GPU-based propagation with reliability check_ - Unlike previous methods that rely on complex and time-consuming CPU-centric operations, such as indexing flow trajectories, we perform global image propagation on GPU with flow consistency check. This implementation can be inserted at the beginning of the inpainting network and jointly trained with the other modules. Thus, subsequent modules are able to correct any propagation errors and benefit from the long-range correspondence information provided by the global propagation, resulting in a significant performance improvement. _ii) Improved feature propagation_ - Our implementation of feature propagation leverages flow-based deformable alignment [3], which improves robustness to occlusion and inaccurate flow completion compared to E\({}^{2}\)FGVI [19]. _iii) Efficient flow completion_ - We design a highly efficient recurrent network to complete flows for dual-domain propagation, which is over 40 times (\(\sim\)192 fps1) faster than SOTA method [43] while maintaining comparable performance. We demonstrate that these designs are essential to achieve efficient propagation of global and local information without texture misalignment or blurring in the filling results. An example is shown in Figure 1(h). Footnote 1: Tested on a single NVIDIA Tesla V100 GPU (32G). In addition to dual-domain propagation, we introduce an efficient **mask-guided sparse video Transformer** tailored for the VI task. The classic spatiotemporal Transformer is computationally intensive due to the quadratic number of interactions between video tokens, making it intractable for high-resolution and long temporal-length videos. For instance, contemporary Transformer-based methods, Fuse-Former [22] and FGT [42], are unable to handle 480p videos with a 32G GPU1 due to excessive memory demands. However, we observe that the inpainting mask usually covers only a small local region, such as the object area2. Moreover, adjacent frames contain highly redundant textures. These observations suggest that spatiotemporal attention is unnecessary for most unmasked areas, and it is adequate to consider only alternating interval frames in attention computation. Motivated by these observations, we redesign the Transformer by discarding unnecessary and redundant windows in the query and key/value space, respectively, significantly reducing computational complexity and memory without compromising inpainting performance. Footnote 2: Object regions account for only 13.6% of the DAVIS [28] dataset. The main contribution of this work is to provide a systematic study into the core problem of VI and offer a practical solution that is both effective and efficient. Propagating information in two distinct image and feature domains and combining them in a unified framework with fast GPU implementation is new for VI task. The mask-guided sparse video Transformer also offers practical insights into designing efficient spatiotemporal attention for VI task. Compared to the state-of-the-art methods, our model achieves superior performance with a large margin of 1.46 dB in PSNR, while also significantly reducing memory consumption. ## 2 Related Work Numerous deep networks with different modules and propagation strategies have achieved significant success in video inpainting. These approaches can be broadly categorized into four categories: **3D convolution.** Earlier video inpainting networks typically employed 3D CNNs [6, 33, 11] or temporal shift [7] to aggregate spatiotemporal information. These methods often suffer from limited receptive fields in both temporal and spatial dimensions and misalignment between adjacent frames. As a result, they are less effective in exploring distant content. **Internal learning.** To fully exploit content of a video, some studies [41, 27, 30] adopt internal learning to encode and memorize the appearance and motion of the video through deep networks. However, these methods require individual training for each test video, limiting their practical use. **Flow-guided propagation.** Optical flow [13, 18, 46] and homography [17, 1] are commonly used in video inpainting networks to align neighboring reference frames to enhance temporal coherence and aggregation. However, incomplete optical flow may not provide valid propagation for completing missing regions. To address this issue, recent flow-based methods [37, 10, 12, 43, 42] focus on first completing the flow and then use it as a guidance for pixel-domain propagation. This approach simplifies RGB pixel inpainting by completing a less complex flow field. However, this offline propagation is independent of the subsequent learnable refinement module, making it difficult to correct content distortion caused by inaccurate propagation. Inspired by flow-guided recurrent networks [2, 3], Li et al. [19] proposed an end-to-end framework that jointly learns flow completion and feature propagation in the downsampled feature domain. However, downsampled flow reduces its ability to provide spatially precise warping. To overcome this limitation, we propose more faithful propagation by performing both pixel and feature propagation with flow consistency checks. **Video Transformer.** Attention [17, 26, 11, 18] and Transformer [40, 21, 22, 1, 19, 42] blocks adopt spatiotemporal attention to explore recurrent textures in a video. This enables them to retrieve and aggregate tokens with similar texture or context for filling in missing regions. Liu [22] present a fine-grained fusion Transformer based on the soft split and composition operations, which further boosts video inpainting performance. However, these methods are computationally and memory intensive. To address this issue, some Transformers [21, 1, 42] decouple the spatiotemporal attention by performing spatial and temporal attention alternately, while others [19, 42] adopt window-based Transformers [23, 38] to reduce the spatial range for efficient video attention. However, these approaches still involve redundant or unnecessary tokens. Inspired by token pruning for adaptive attention [29, 39, 25, 20, 15] in high-level tasks, our study proposes a more efficient and faster video Transformer with sparse spatiotemporal attention and a largely reduced token space while maintaining inpainting performance. Recent studies [18, 19, 42] have demonstrated the effectiveness of combining flow-guided propagation with Transformer in VI. However, the high memory requirement of the Transformer limits the propagation range during both training and inference, severely hindering the ability to propagate temporally distant content. In this paper, we also adopt this combination strategy but propose a reliable propagation scheme, along with an efficient Transformer model that fully exploits the benefits of long-range propagation and attention. This results in superior inpainting performance while maintaining computational efficiency. ## 3 Methodology Given a masked video sequence \(X=\{X_{t}\in\mathbb{R}^{H\times W\times 3}\}_{t=1}^{T}\), which has a sequence length of \(T\), along with corresponding mask sequence \(M=\{M_{t}\in\mathbb{R}^{H\times W\times 1}\}_{t=1}^{T}\), the objective of video inpainting is to generate visually consistent and coherent content within the corrupted or missing regions. ProPainter, as shown in Figure 2, is composed of three key components: Recurrent Flow Completion (RFC), Dual-Domain Propagation (DDP), and Mask-guided Sparse Video Transformer (MSVT). Before feeding the sequence into ProPainter, we extract the forward and backward optical flows, denoted as \(F^{f}=\{F^{f}_{t}=F_{t\to t+1}\in\mathbb{R}^{H\times W\times 2}\}_{t=1}^{T-1}\) and \(F^{b}=\{F^{b}_{t}=F_{t+1\to t}\in\mathbb{R}^{H\times W\times 2}\}_{t=1}^{T-1}\) from a given video \(X\). We first use RFC to complete the corrupted flow fields. Guided by the completed flows, we then perform global image propagation and local feature propagation sequentially. Finally, we employ multiple MSVT blocks to refine propagation features and a decoder to reconstruct the final video sequence \(\hat{Y}=\{\hat{Y}_{t}\in\mathbb{R}^{H\times W\times 3}\}_{t=1}^{T}\). We introduce the specific design of each component below. ### Recurrent Flow Completion Pre-trained flow completion modules are commonly used in video inpainting networks [37, 10, 43, 42]. The rationale behind this approach is that it is simpler to complete missing flow than to directly fill in complex RGB content [37]. Furthermore, using completed flow to propagate pixels reduces the pressure of video inpainting and better maintains temporal coherence. E\({}^{2}\)FGVI [19] proposes to insert the flow completion module into an end-to-end framework, which simplifies the inpainting pipeline. However, flow completion modules that are learned together with inpainting-oriented losses can result in a suboptimal learning process and less accurate completed flow. Moreover, the downsampled flow may limit the precision of spatial warping and the efficacy of propagation, which can result in blurred and incomplete filling content, as shown in Figure 1(g). Therefore, an independent flow completion model is not only important but also necessary for video inpainting. To maintain temporal coherence while completing flows, previous methods [37, 42] adopt sliding-window-based networks to aggregate optical flow information from adjacent frames, which are highly correlated. However, these methods can be computationally expensive as repeated inferences are required in the overlapping frames. To improve efficiency and enhance flow coherence further, we adopt a recurrent network [2, 3] for flow completion, which provides precise optical flow fields for subsequent propagation modules. We complete forward and backward flows using the same process, thus we denote \(F^{f}\) and \(F^{b}\) as \(F\) for simplicity. We first encode the flows \(F_{t}\) into a downsampled feature \(f_{t}\) with a downsampling ratio of 8. Next, we employ deformable alignment [3] that is based on deformable convolution (DCN) [8, 45], to bidirectionally propagate the flow information from nearby frames for flow completion. For simplicity, we only describe the backward propagation process here. Taking the concatenated feature \(c(f_{t},\hat{f}_{t+1})\), where \(\hat{f}_{t+1}\) is the propagation feature of the t+1-th frame, as input a lightweight network with a stack of convolutions is employed to compute DCN offsets \(o_{t\to t+1}\) and modulation masks \(m_{t\to t+1}\). DCN alignment propagation can be expressed as: \[\hat{f}_{t}=\mathcal{R}\big{(}\mathcal{D}(\hat{f}_{t+1};o_{t\to t+1},m_{t \to t+1}),f_{t}\big{)}, \tag{1}\] where \(\mathcal{D}(\cdot)\) denotes deformable convolution, and \(\mathcal{R}(\cdot)\) denotes the convolution layers that fuse the aligned and current features. In this way, information of (\(t+1\))-th flow can be adaptively transferred to the current \(t\)-th flow. Finally, a decoder is used to reconstruct the completed flows \(\hat{F}_{t}\). For clarity, an illustration of deformable alignment is provided in the supplementary material. ### Dual-domain Propagation After completing the flow, we perform global and local propagation in the image and feature domains, respectively. We employ distinct alignment operations and strategies for each domain. Both domains involve bidirectional propagation in the forward and backward directions. Here, we elaborate on the backward propagation since the forward propagation follows the same process. **Image propagation.** To maintain efficiency and simplicity, we adopt flow-based warping for image propagation, along with a simple reliability check strategy. This process does not involve any learnable operation. In the case of a video sequence \(X\) with binary masks \(M\) (a pixel with value 1 represents masked region) and completed flows \(\hat{F}\), we first check the validity of completed flow based on forward-backward consistency error [37, 10]: \[\mathcal{E}_{t\to t+1}\big{(}p\big{)}=\Big{\|}\hat{F}_{t\to t+1}\big{(}p \big{)}+\hat{F}_{t+1\to t}\big{(}p+\hat{F}_{t\to t+1}(p)\big{)}\Big{\|}_{2}^ {2}, \tag{2}\] where \(p\) denotes a pixel position of the current frame. Only pixels with a small consistency error will be propagated, i.e., \(C_{1}:\mathcal{E}_{t\to t+1}(p)<\epsilon\), where \(\epsilon\) is a threshold and set to 5. Furthermore, we only consider the masked areas of the current frame \(X_{t}\) that needs to be filled, i.e., \(C_{2}:M_{t}(p)=1\), and we only propagate the unmasked areas from neighboring frame \(X_{t+1}\), i.e., \(C_{3}:M_{t+1}(p+\hat{F}_{t\to t+1}(p))=0\). By enforcing the three constraints, a reliable propagation area \(A_{r}\) is identified as: \[A_{r}\big{(}p\big{)}=\begin{cases}1&\text{if }p\in C_{1}\cap C_{2}\cap C_{3},\\ 0&\text{otherwise}.\end{cases} \tag{3}\] The process of image propagation is expressed as: \[\hat{X}_{t}=\mathcal{W}\big{(}X_{t+1},\hat{F}_{t\to t+1}\big{)}*A_{r}+X_{t}* \big{(}1-A_{r}\big{)}, \tag{4}\] where \(\mathcal{W}(\cdot)\) denotes warping operation. To ensure continuous propagation, we promptly update the mask \(M_{t}\) of the current frame and convert the propagated area to the unmasked status by updating masks via \(M_{t}=M_{t}-A_{r}\). After global image propagation, we obtain a partially filled video sequence \(\hat{X}\), which greatly eases the learning process for subsequent modules. **Feature propagation.** We use an image encoder with the same structure as previous works [22, 19] to extract features from a local sequence \(\hat{X}_{t=1}^{T_{1}}\), denoted as \(\{e_{t}\in\mathbb{R}^{\frac{H}{4}\times\frac{H}{4}\times C}\}_{t=1}^{T_{1}}\). Similar to E\({}^{2}\)FGVI [19], we also adopt flow-guided deformable alignment module [3] for feature propagation, which has demonstrated remarkable benefits Figure 2: ProPainter comprises three key components: recurrent flow completion, dual-domain propagation, and mask-guided sparse Transformer. First, we employ a highly efficient recurrent flow completion network to complete the corrupted flow fields. We then perform propagation in both image and feature domains, which are jointly trained. This approach enables us to explore correspondences from both global and local temporal frames, resulting in more reliable and effective propagation. The subsequent mask-guided sparse Transformer blocks refine the propagated features using spatiotemporal attention, aided by a sparse strategy that considers only a subset of the tokens. This enhances efficiency and reduces memory consumption, while maintaining performance. in various low-level video tasks [5, 4, 44]. Unlike the deformable alignment used in Sec. 3.1 that directly learns DCN offsets, flow-guided deformable alignment employs the completed flow as a base offset and refines it by learning offset residue. However, our design differs from E\({}^{2}\)FGVI in that we offer richer conditions for learning DCN offsets. As illustrated in Figure 3, apart from the current feature \(e_{t}\), warped propagation feature \(\mathcal{W}(\hat{e}_{t+1},\hat{F}_{t\to t+1}^{\downarrow})\), and completed flows \(\hat{F}_{t\to t+1}^{\downarrow}\), we additionally introduce the flow valid map \(V_{t+1\to t}\) calculated by consistency check (Eq. 2), as well as the original mask \(M_{t}^{\downarrow}\), and updated mask \(\hat{M}_{t}^{\downarrow}\) after image propagation. With these conditions, a stack of convolutions is employed to predict the DCN offset residue \(\widetilde{o}_{t\to t+1}\) and modulation masks \(m_{t\to t+1}\). The flow-guided DCN alignment propagation is expressed as: \[\hat{e}_{t}=\mathcal{R}\big{(}\mathcal{D}(\hat{e}_{t+1};\hat{F}_{t\to t+1}^{ \downarrow}+\widetilde{o}_{t\to t+1},m_{t\to t+1}),f_{t}\big{)}, \tag{5}\] where \(\downarrow\) denotes downsampling. The improved reliability of flow and the additional awareness of mask as a condition make our flow-guided deformable alignment module more stable to learn than previous designs [3, 19]. The current step is able to focus more on truly challenging regions where flow is invalid and former image propagation is unreliable. ### Mask-Guided Sparse Video Transformer While video Transformers have achieved excellent performance in video inpainting, they can be computationally and memory intensive, posing a challenge to their practical application. E\({}^{2}\)FGVI and FGT have addressed this issue by using window-based Transformer blocks, but they still have some efficiency limitations. To overcome this, we propose a novel sparse video Transformer that builds on the window-based approach. Given a video sequence feature \(E_{l}\in\mathbb{R}^{T_{l}\times\frac{H}{l}\times\frac{W}{l}\times C}\), we use the soft split operation [22] to generate patch embeddings \(Z\in\mathbb{R}^{T_{l}\times M\times N\times C_{z}}\). We partition \(Z\) into \(m\times n\) non-overlapping windows, resulting in partitioned features \(Z_{w}\in\mathbb{R}^{T_{l}\times m\times n\times h\times w\times C_{z}}\), where \(m\times n\) and \(h\times w\) are the number and size of the windows, respectively. We obtain the query \(Q\), key \(K\), and value \(V\) from \(Z_{w}\) through linear layers. We design sparse strategies for both query and key/value spaces separately. Note that we also apply the window expand strategy [22] and integrate global tokens [42] into key and value, enabling us to use a small window size of \(5\times 9\) in our experiments. We omit them from the following discussion since they do not affect our sparse strategy designs. **Sparse Query Space.** We observe that mask regions often occupy only a small area of the video, such as in the case of object removal in the DAVIS [28] dataset, where the proportion of object regions is only 13.6%. This indicates that spatiotemporal attention may not be necessary for all query windows. To exploit this observation, we selectively apply attention to query windows that intersect with the mask regions. Specifically, we first use nearest neighbor interpolation to downsample the mask sequence \(M\in\mathbb{R}^{T_{l}\times H\times W}\) to \(M^{\downarrow}\in\mathbb{R}^{T_{l}\times m\times n}\), where \(m\times n\) is the number of non-overlapping windows after partitioning. We then sum it up in the temporal dimension and obtain sparse mask \(S_{Q}\in\mathbb{R}^{m\times n}\) for query cubes following the equation: \[S_{Q}=Clip\Big{(}\sum\nolimits_{t=1}^{T_{l}}M_{t}^{\downarrow},\;1\Big{)}, \tag{6}\] where \(Clip\) represents a clipping function that set \(S_{Q}\) to 1 if \(\sum_{t=1}^{T_{l}}M_{t}^{\downarrow}>0\). In other words, if the query cube at a window \((i,j)\) has never contained any mask region in the past frames, then \(S_{Q}(i,j)=0\), indicating that spatiotemporal attention within this window can be skipped. Figure 4: Mask-guided sparse video Transformer. To reduce computational complexity and memory usage, our mask-guided sparse Transformer filters out unnecessary and redundant windows in the query and key/value space, respectively, before applying self-attention. To enlarge spatial interrelation range, we also adopt the window expand strategy [38] and pooling global tokens [42, 19]. Figure 3: Flow-guided deformable alignment is effective by taking reliable completed flows and mask-aware conditions. We concatenate the validated flow map, original mask, and updated mask into conditions to produce DCN offsets (residue to optical flow). A DCN is then applied to align the propagation feature from the previous frame. Finally, a CNN block is employed to fuse the current and aligned features, achieving the propagation feature of the current frame. **Sparse Key/Value Space.** Due to the highly redundant and repetitive textures in adjacent frames, it is unnecessary to include all frames as key/value tokens in each Transformer block. Instead, we will only include strided temporal frames alternately, with a temporal stride of 2 in our design. That is, in each odd-numbered Transformer block, only odd-number frames are activated to participate in self-attention with their key and value, while even-number blocks include only even-number frames. By doing so, the key and value space is reduced by half, effectively reducing the computation and memory cost of the Transformer module. After filtering out unnecessary and redundant windows based on our sparse strategy, we perform self-attention on the remaining windows to extract refined features. These features are then gathered using a soft composition operation [22] for subsequent modules. Experimental results suggest that our design significantly reduces the computational cost of video Transformers while maintaining performance for video inpainting. ### Training Objectives **Flow Completion.** We utilize L1 loss as the reconstruction loss and a second-order smoothness constraint on the flow field [24] to promote the collinearity of neighboring flows and thus enhance the smoothness of the completed flow field. **Video Inpainting.** We adopt L1 loss as the reconstruction loss for all pixels. To enhance the realistic and temporal consistency of video inpainting results, we also employ an adversarial loss that is measured using a T-PatchGAN [6] discriminator. The details and formulation of these losses are provided in the supplementary material. ## 4 Experiments **Datasets.** We use the training set of YouTube-VOS [36] with 3471 video sequences to train our networks. Two widely-used test sets are adopted for evaluation: YouTube-VOS [36] and DAVIS [28], which consist of 508 and 90 video sequences, respectively. For the DAVIS test set, following FuseFormer [22] and E\({}^{2}\)FGVI [19], we use 50 video clips for evaluations. During training, we follow [13, 17, 22, 19] and generate stationary and object masks in a random fashion to simulate the masks in video completion and object removal tasks. As for evaluation, we adopt the stationary masks provided in [19] to calculate quantitative scores, and the object masks are extracted from their segmentation labels for qualitative comparisons. Video frames are sized to \(432\times 240\) for training and evaluation. **Training Details and Metrics.** We use RAFT [32] to extract optical flow in our approach. For training the RFC network, we set the flow sequence length to 10 and perform deformable propagation on feature maps that are downsampled by a factor of 8 for faster processing. We adopt 8 Transformer blocks for the inpainting modules and use a local video sequence of length 10. The Transformer window size is \(5\times 9\), and the extended size is half of the window size. We train both the RFC and inpainting modules using the Adam [14] optimizer with a batch size of 8, setting the initial learning rate to \(10^{-4}\) and running 700k iterations3 for each. We implement our method using the PyTorch framework and train it on 8 NVIDIA Tesla V100 (32G) GPUs. Footnote 3: We set 450k training iterations for ablation study. We employ the widely used PSNR and SSIM metrics [35] to evaluate the reconstruction performance and VFID [34] scores to measure the perceptual similarity between input videos and outputs, as used in recent video inpainting studies [22, 19]. Additionally, we report the flow warping error \(E_{warp}\)[16] to assess the temporal consistency and smoothness of the resulting video sequences. \begin{table} \begin{tabular}{l|c|c|c|c||c|c|c||c|c} \hline \hline & \multicolumn{6}{c||}{Accuracy} & \multicolumn{2}{c}{Efficiency} \\ \cline{2-10} & \multicolumn{4}{c||}{YouTube-VOS} & \multicolumn{4}{c||}{DAVIS} & \multicolumn{2}{c}{FLOPs} & Runtime \\ \hline Models & PSNR \(\uparrow\) & SSIM \(\uparrow\) & VFID \(\downarrow\) & \(E_{warp}^{*}\) \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & VFID \(\downarrow\) & \(E_{warp}^{*}\) \(\downarrow\) & (10 frames) & (s/frame) \\ \hline DFVI [37] & 29.16 & 0.9429 & 0.066 & 1.651 & 28.81 & 0.9404 & 0.187 & 1.596 & - & 0.837 \\ \hline CPNet [17] & 31.58 & 0.9607 & 0.071 & 1.622 & 30.28 & 0.9521 & 0.182 & 1.521 & 1407G & 0.316 \\ \hline FGVC [10] & 29.67 & 0.9403 & 0.064 & 1.163 & 30.80 & 0.9497 & 0.165 & 1.571 & - & 1.795 \\ \hline STTN [40] & 32.34 & 0.9655 & 0.053 & 1.061 & 30.61 & 0.9560 & 0.149 & 1.438 & 1315G & 0.051 \\ \hline TSAM [46] & 30.22 & 0.9468 & 0.070 & 1.014 & 30.67 & 0.9548 & 0.146 & 1.235 & 1001G & 0.068 \\ \hline FuseFormer [22] & 33.32 & 0.9681 & 0.053 & 1.053 & 32.59 & 0.9701 & 0.137 & 1.349 & 1025G & 0.114 \\ \hline ISVI [43] & 30.34 & 0.9458 & 0.077 & 1.008 & 32.17 & 0.9588 & 0.189 & 1.291 & - & 1.594 \\ \hline FGT [42] & 32.17 & 0.9599 & 0.054 & 1.025 & 32.86 & 0.9650 & 0.129 & 1.323 & - & 1.828 \\ \hline E\({}^{2}\)FGVI [19] & 33.71 & 0.9700 & 0.046 & 1.013 & 33.01 & 0.9721 & 0.116 & 1.289 & 986G & 0.085 \\ \hline \hline ProPainter (Ours) & 34.43 & 0.9735 & 0.042 & 0.974 & 34.47 & 0.9776 & 0.098 & 1.187 & 808G & 0.083 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparisons on YouTube-VOS [36] and DAVIS [28] datasets. The best and second performances are marked in red and blue, respectively. \(E_{warp}^{*}\) denotes \(E_{warp}\) (\(\times 10^{-3}\)). All methods are evaluated following their default settings. Since DFVI, FGVC, ISVI, and FGT involve several CPU processes, their FLOPs cannot be accurately projected. ### Comparisons **Quantitative Evaluation.** We compare ProPainter with nine state-of-the-art methods including DFVI [37], CPNet [17], FGVC [10], STTN [40], TSAM [46], Fuseformer [22], ISVI [43], FGT [42], and E\({}^{2}\)FGVI [19] on both YouTube-VOS [36] and DAVIS [28]. Thanks to the efficient design, ProPainter uses a temporal length of 20 for inference. Table 1 shows that ProPainter outperforms other methods in all quantitative metrics, especially on the DAVIS dataset, where our method surpasses the state-of-the-art method by 1.14 dB in PSNR. The results suggest that our method has superior inpainting capability, enabling it to produce higher-quality, faithful, and seamless videos. **Qualitative Evaluation.** For the visual comparison, we compare our method with FuseFormer [22], FGT [42], and E\({}^{2}\)FGVI [19], which are representative methods of Transformer-, image propagation-, and feature propagation-based approaches, respectively. Figure 5 presents four comparison results for video completion and object removal. Our method uses dual-domain propagation to ensure reliable and long-range propagation. It completes missing regions with coherence and clear contents, while other compared methods tend to fail or produce unpleasant inpainting results such as texture distortions and black hazy region in FGT [42] results, as well as artifacts in FuseFormer [22] and E\({}^{2}\)FGVI [19]. **Efficiency Comparison.** Table 1 presents the efficiency comparisons between all methods in terms of FLOPs and running time. The FLOPs of all methods are computed based on a temporal length of 10. We consider all learnable modules (including the recurrent flow completion) in our ProPainter to calculate the FLOPs. The running time records the time of all processes in each method, including inpainting, as well as flow calculation and flow completion if involved. To keep efficiency, we use only five iterations of the RAFT network to calculate optical flow. **Flow Completion Comparisons.** We compare our recurrent flow network with previous approaches [37, 10, 43] on both YouTube-VOS and DAVIS datasets. Table 2 presents the end-point-error (EPE) of flow completion and running time of each method. Our recurrent network offers a dual benefit with high accuracy and efficiency. Compared to previous methods, our network is approximately 40 times faster while maintaining a comparable flow completion accuracy to the state-of-the-art methods. ### Ablation Study **Effectiveness of Image Propagation.** Table 3 shows that Exp. (a) experiences a significant performance drop when image propagation is removed. Moreover, the model's propagation ability is reduced without image propagation, as presented in Figure 7, causing it to fail to complete missing \begin{table} \begin{tabular}{l l l l l l} \hline \hline EPE \(\downarrow\) & DFVI [37] & FGVC [10] & FGT [42] & ISVI [43] & Ours \\ \hline YouTube-VOS & 0.046 & 0.032 & 0.021 & **0.019** & 0.020 \\ \hline DAVIS & 0.107 & 0.082 & 0.052 & **0.051** & **0.051** \\ \hline \hline Runtime (s/frame) & 0.130 & 1.125 & 0.312 & 0.231 & **0.005** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparisons of flow completion networks. Our network offers a dual benefit with high accuracy and efficiency. Figure 5: Qualitative comparisons on both video completion and object removal. Our ProPainter exhibits superiority in producing complete and faithful textures, resulting in enhanced spatiotemporal coherence for video inpainting. content with details. To verify the effectiveness of our reliability check strategy in image propagation, we replaced our design with the FGVC image propagation module in Exp. (b) (without retraining), resulting in a noticeable decrease in PSNR. This is because the FGVC image propagation method is prone to being affected by incorrect optical flow, leading to severe texture distortion that subsequent modules cannot correct. Our model can effectively aware and stop unreliable propagation areas using a simple reliability check via Eq.2, and generate more faithful inpainting results. **Effectiveness of Feature Propagation.** Similarly, we observe a slight decrease in performance by either removing feature propagation, _i.e._, Exp. (c), or replacing it with the Feature propagation of E\({}^{2}\)FGVI, _i.e._, Exp. (d), indicating the effectiveness of the feature propagation modules and our reliability mask-aware conditions. This suggests that our design, which learns reliable DCN offsets in the feature domain, can further complement and enhance the propagation ability in the image domain. **Effectiveness of Sparse Transformer.** In theory, our strategy of using masks to guide sparsity only eliminates redundant and unnecessary tokens (windows), while preserving essential information. This means that there should be no adverse effect on performance. To confirm this, we conducted Exp. (d), comparing our approach to standard self-attention without sparse filtering. Our results indicate that our sparse Transformer block performs almost as well as the standard one, indicating that it can achieve high efficiency without sacrificing performance. **Efficiency of Sparse Transformer.** In Figure 8, we compare the FLOPs of different Transformer blocks with respect to temporal length and spatial resolution, including those used in FuseFormer [22], FGT [42], and E\({}^{2}\)FGVI [19]. We use a mask with a missing region ratio of 1/6 (higher than the average object ratio of 13.6% in DAVIS) to calculate the FLOPs of our mask-guided sparse Transformer. The curves indicate that the efficiency advantage of our sparse Transformer becomes more prominent as the temporal length and video resolution increase, indicating great potential for developing longer-range spatiotemporal attention and applying it to larger resolution videos. ## 5 Conclusion This study introduces a novel and improved video inpainting framework called ProPainter. It incorporates an enhanced dual-domain propagation and an efficient mask-guided sparse video Transformer. Thanks to the two modules, our ProPainter exhibits reliable and precise propagation capabilities over long distances, significantly improving the performance of video inpainting while maintaining high efficiency in terms of running time and computational complexity. We believe that the designs in ProPainter will provide valuable insights to the video inpainting community. **Acknowledgement.** This study is supported under the RIE2020 Industry Alignment Fund Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). Figure 8: FLOPs cures of different Transformer blocks. Figure 6: Visual comparison on image propagation methods of FGVC [10] and ours. \begin{table} \begin{tabular}{l c c c c|c} \hline \hline Exp. & (a) w/o Img Prop. & (b) w/ Img Prop. in FGVC & (c) w/o Feat Prop. & (d) w/ Feat Prop. in E\({}^{2}\)FGVI & (f) Full Tokens & ProPainter \\ \hline PSNR & 33.05 & 32.91 & 33.17 & 33.94 & 34.18 & 34.15 \\ \hline SSIM & 0.9724 & 0.9687 & 0.9732 & 0.9756 & 0.9765 & 0.9764 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study of dual-main propagation and sparse Transformer. Figure 7: Comparison of w/ and w/o image propagation.
2302.00034
On the order of semiregular automorphisms of cubic vertex-transitive graphs
We prove that, if $\Gamma$ is a finite connected cubic vertex-transitive graph, then either there exists a semiregular automorphism of $\Gamma$ of order at least $6$, or the number of vertices of $\Gamma$ is bounded above by an absolute constant.
Marco Barbieri, Valentina Grazian, Pablo Spiga
2023-01-31T19:05:23Z
http://arxiv.org/abs/2302.00034v1
# On the order of semiregular automorphisms of cubic vertex-transitive graphs ###### Abstract. We prove that, if \(\Gamma\) is a finite connected cubic vertex-transitive graph, then either there exists a semiregular automorphism of \(\Gamma\) of order at least \(6\), or the number of vertices of \(\Gamma\) is bounded above by an absolute constant. Key words and phrases:Valency 3, Vertex-transitive, Semiregular 2010 Mathematics Subject Classification: 05C25, 20B25 ## 1. Introduction A fascinating old-standing question in the theory of group actions on graphs is the so-called _Polycirculant Conjecture_: non-identity \(2\)-closed transitive permutation groups contain non-identity semiregular elements. This formulation of the conjecture was introduced by Klin [10]. However, the question was previously posed independently by Marusic [12, Problem 2.4] and Jordan [13] in terms of graphs: vertex-transitive graphs having more than one vertex admit non-identity semiregular automorphisms. In this paper, we focus our attention on cubic graphs. In [14], Marusic and Scappellato proved that, each cubic vertex-transitive graph admits a non-identity semiregular automorphism, settling the Polycirculant Conjecture for such graphs. Their proof did not take into account the order of the semiregular elements. In this direction, Cameron _et al._ proved in [11] that, if \(\Gamma\) is a cubic vertex-transitive graph, then \(\operatorname{Aut}(\Gamma)\) contains a semiregular automorphism of order at least \(4\). They also conjectured that, as the number of vertices of \(\Gamma\) tends to infinity, the maximal order of a semiregular automorphism tends to infinity. This was proven false by the third author in [15] by building a family of cubic vertex-transitive graphs where such a maximum is precisely \(6\). In the light of these results, it is unclear whether \(6\) is optimal in the sense of minimizing the maximal order of a semiregular element. Broadly speaking, we are interested in \[\liminf_{\begin{subarray}{c}|V\Gamma|\to\infty\\ \Gamma\text{ cubic vertex-transitive}\end{subarray}}\max\{o(g)\mid g\in \operatorname{Aut}(\Gamma),g\text{ semiregular}\}, \tag{1.1}\] where we denote by \(o(g)\) the order of the group element \(g\). **Theorem 1.1**.: _The value of (1.1) is \(6\)._ Theorem 1.1 is a consequence of the following result and the main result in [15]. **Theorem 1.2**.: _Let \((\Gamma,G)\) be a pair such that \(\Gamma\) is a connected cubic graph and \(G\) is a subgroup of the automorphism group of \(\Gamma\) acting vertex-transitively on \(V\Gamma\). Then either \(G\) contains a semiregular automorphism of order at least \(6\) or the pair \((\Gamma,G)\) appears in Table \(1\)._ There is a considerable amount of work into the proof of Theorem 1.2. Broadly speaking, the proof divides into two main cases. In the first main case, the exponent of the group \(G\) is very small, bounded above by \(5\), and we use explicit knowledge on the finite groups having exponent at most \(5\). The second main case is concerned with graphs admitting a normal quotient which is a cycle. Here, we need to refine our knowledge on the ubiquitous Praeger-Xu graphs and on the splitting and merging operators between cubic vertex-transitive graphs and 4-valent arc-transitive graphs defined in [10]. **Remark 1.3**.: The veracity of Theorem 1.2 for graphs with at most \(1\,280\) vertices has been proven computationally using the database of small cubic vertex-transitive graphs in [10]. Therefore, in the course of the proof of Theorem 1.2 whenever we reduce to a graph having at most \(1\,280\) vertices we simply refer to this computation. Table 1 consists of six columns. In the first column, we report the number of vertices of the exceptional cubic vertex-transitive graph \(\Gamma\). In the second column, we report the order of the transitive subgroups \(G\) of \(\operatorname{Aut}(\Gamma)\) with \(G\) not containing semiregular elements of order at least \(6\): each subgroup is reported up to \(\operatorname{Aut}(\Gamma)\)-conjugacy class. In the third column, we report the cardinality of \(\operatorname{Aut}(\Gamma)\). In the forth column, when \(|\mathrm{V}\Gamma|\leq 1\,280\), we report the number of the graph in the database of small cubic vertex-transitive graphs in [10]. In the fifth column of Table 1, we write the symbol \(\swarrow\) when the graph is arc-transitive and the symbol \(\dagger\) when the graph is a split Praeger-Xu graph (see Section 2.5 for the definition of split Praeger-Xu graphs). Split Praeger-Xu graphs play an important role in our investigation and hence we are keeping track of this information in the forth column. In the sixth column, for the graphs not appearing in the database of small cubic vertex-transitive graphs, we give as much information as possible. \begin{tabular}{|l|l|l|l|l|l|} \hline \(|\mathrm{V}\Gamma|\) & \(|G|\) & \(|\operatorname{Aut}(\Gamma)|\) & DB & \(\swarrow/\dagger\) & Comments \\ \hline 4 & 4, 4, 8, 12, 24 & 24 & 1 & \(\swarrow\) & \\ \hline 6 & 6 & 12 & 1 & & \\ & 6, 36 & 24 & 2 & \(\swarrow\) & \\ \hline 8 & 8 & 16 & 1 & & \\ & 8, 8, 8, 8, 16, 24, 24, 48 & 48 & 2 & \(\swarrow\) & \\ \hline 10 & 10 & 20 & 1 & & \\ & 10 & 20 & 2 & & \\ & 20, 60, 120 & 120 & 3 & \(\swarrow\) & \\ \hline 12 & 12, 24 & 24 & 2 & & \\ & 24, 24 & 48 & 4 & \(\dagger\) & \\ \hline 16 & 16, 16, 32, 32, 64, 64 & 128 & 2 & \(\dagger\) & \\ & 16 & 32 & 3 & & \\ & 16, 48 & 96 & 4 & \(\swarrow\) & \\ \hline 18 & 18, 108 & 216 & 4 & \(\swarrow\) & \\ & 36 & 72 & 5 & & \\ \hline 20 & 20 & 20 & 2 & & \\ & 160, 160 & 320 & 3 & \(\dagger\) & \\ & 60 & 120 & 6 & \(\swarrow\) & \\ & 120 & 240 & 7 & \(\swarrow\) & \\ \hline 24 & 24 & 144 & 2 & \(\swarrow\) & \\ & 24 & 48 & 8 & & \\ & 24 & 24 & 24 & 9 & \\ & 24 & 48 & 10 & & \\ & 24, 24 & 48 & 11 & \(\dagger\) & \\ \hline 30 & 720 & 1 440 & 8 & \(\swarrow\) & \\ & 60, 120 & 120 & 9 & & \\ & 60 & 60 & 10 & & \\ \hline 32 & 32 & 64 & 2 & & \\ & 32, 32, 64, 64 & 128 & 3 & \(\dagger\) & \\ & 32, 96 & 192 & 4 & \(\swarrow\) & \\ \hline 36 & 36 & 72 & 9 & & \\ \hline \end{tabular} \begin{tabular}{|l|l|l|l|l|l|} \hline 40 & 160, 160 & 320 & 12 & \(\dagger\) & \\ \hline 50 & 100 & 200 & 7 & & \\ & 50, 150 & 300 & 8 & \\ \hline 54 & 108 & 216 & 11 & & \\ \hline 60 & 60 & 360 & 2 & \\ & 60, 120 & 120 & 3 & \\ & 60 & 60 & 4 & \\ & 60 & 120 & 5 & \\ & 60 & 120 & 6 & \\ & 60, 120 & 120 & 7 & \\ & 60 & 120 & 8 & \\ & 60 & 120 & 9 & \\ & 60 & 120 & 10 & \\ \hline 64 & 64, 192 & 384 & 2 & \\ & 64 & 256 & 4 & \\ & 64, 64 & 128 & 11 & \(\dagger\) & \\ \hline 80 & 80, 160 & 160 & 29 & \\ & 160, 160 & 320 & 31 & \(\dagger\) & \\ \hline 90 & 720 & 1 440 & 20 & \\ \hline 96 & 96 & 192 & 37 & \\ \hline 100 & 100 & 200 & 19 & \\ \hline 128 & 128 & 256 & 5 & \\ \hline 160 & 160 & 160 & 89 & \\ & 160 & 160 & 90 & \\ & 160 & 320 & 91 & \\ & 160 & 320 & 92 & \\ & 160 & 320 & 93 & \(\dagger\) & \\ & 160 & 320 & 94 & \\ \hline 180 & 720 & & 77 & \\ & 360, 720 & & 78 & \\ \hline 250 & 500 & & 31 & \\ \hline 256 & 256, 768 & & 30 & \\ \hline 360 & 360 & 720 & 176 & \\ & 360 & 720 & 177 & \\ & 360 & 720 & 178 & \\ & 360 & 360 & 179 & \\ & 360 & 720 & 180 & \\ & 360 & 360 & 181 & \\ & 360 & 720 & 182 & \\ & 360 & 720 & 183 & \\ & 360 & 720 & 184 & \\ & 720 & 1 440 & 268 & \\ & 720 & 1 440 & 270 & \\ \hline 512 & 512 & 1 024 & 734 & \\ \hline 810 & 1 620 & 1 620 & 198 & \\ \hline 1 024 & 1 024, 3 072 & 6 144 & 3 470 & \\ \hline 1 250 & 2 500 & 2 500 & 187 & \\ \hline 1 280 & 1 280 & 2 500 & 2 591 & \\ \hline \end{tabular} ## 2. Main ingredients ### Permutations A permutation on the set \(\Omega\) is a _derangement_ if it fixes no elements in \(\Omega\). A permutation is _semiregular_ if all of its cycles have the same length. For instance, any derangement of prime order is semiregular. A permutation group \(G\) on \(\Omega\) is said to be _transitive_ if it has a single orbit on \(\Omega\), and _semiregular_ if the identity is the only element fixing some points. If \(G\) is both semiregular and transitive on \(\Omega\), then \(G\) is _regular_ on \(\Omega\). Given a permutation group \(G\), and an element \(\alpha\in\Omega\), we denote by \(\alpha^{G}\) the orbit of \(\alpha\) under the action of \(G\). **Lemma 2.1**.: _Let \(G\) be a permutation group on \(\Omega\), and let \(p\) be a prime. If all the elements of \(G\) of order \(p\) are derangements, then all \(p\)-elements of \(G\) are semiregular._ Proof.: Let \(g\in G\) be an element of order \(p^{k}\), for some positive integer \(k\). Aiming for a contradiction, assume that \(g\) is not semiregular, that is, there exists \(\alpha\in\Omega\) such that \(|\alpha^{(g)}|\leq p^{k-1}\). Hence \(g^{p^{k-1}}\) fixes \(\alpha\), which implies \(g^{p^{k-1}}\) is not a derangement, a contradiction. **Lemma 2.2**.: _Let \(G\) be a permutation group acting on \(\Omega\), and let \(p\) and \(q\) be two distinct primes. If \(G\) has a semiregular element \(g\) of order \(p\) and a semiregular element \(h\) of order \(q\) with \(gh=hg\), then \(gh\) is a semiregular element of order \(pq\)._ Proof.: Since \(gh=hg\), \(o(gh)=pq\) and hence it remains to prove that \(gh\) is semiregular. Note that \((gh)^{p}=h^{p}\) is semiregular, and also \((gh)^{q}=g^{q}\) is semiregular. Therefore, each orbit of \(\langle gh\rangle\) has size \(pq\), proving that \(gh\) is semiregular. ### Graphs A _digraph_ is a binary relation \(\Gamma=(V\Gamma,A\Gamma)\), where \(A\Gamma\subseteq V\Gamma\times V\Gamma\). We refer to the elements of \(V\Gamma\) as _vertices_ and to the elements of \(A\Gamma\) as _arcs_. In this paper, a _graph_ is a finite simple undirected graph, that is, a pair \(\Gamma=(V\Gamma,E\Gamma)\), where \(V\Gamma\) is a set of vertices, and \(E\Gamma\) is a set of unordered pairs of \(V\Gamma\), called _edges_. In particular, a graph can be thought of as a digraph where the binary relation is symmetric and anti-reflexive. The _valency_ of a vertex \(\alpha\in V\Gamma\) is the number of edges containing \(\alpha\). A graph is said to be _cubic_ when all of its vertices have valency \(3\). A connected graph is a _cycle_ when all of its vertices have valency \(2\). Let \(\Gamma\) be a graph, and let \(G\) be a subgroup of the automorphism group \(\operatorname{Aut}(\Gamma)\) of \(\Gamma\). If \(G\) is transitive on \(V\Gamma\), we say that \(G\) is _vertex-transitive_, similarly, if \(G\) is transitive on \(A\Gamma\), we say that \(G\) is _arc-transitive_. Moreover, \(\Gamma\) is vertex- or arc-transitive when \(\operatorname{Aut}(\Gamma)\) is vertex- or arc-transitive. Let \(\alpha,\beta\in V\Gamma\) be two adjacent vertices. We denote by \(G_{\alpha}\) the _stabilizer_ of the vertex \(\alpha\), by \(G_{\{\alpha,\beta\}}\) the setwise stabilizer of the edge \(\{\alpha,\beta\}\), by \(G_{\alpha\beta}\) the pointwise stabilizer of the edge \(\{\alpha,\beta\}\) (that is, the stabilizer of the arc \((\alpha,\beta)\) underlying the edge \(\{\alpha,\beta\}\)). \begin{table} \begin{tabular}{|l|l|l|l|} \hline \(2\,560\) & \(2\,560\) & \(5\,120\) & \\ \hline \(6\,250\) & \(12\,500\) & \(25\,000\) & covers of the graph with \(1\,250\) \\ & \(12\,500\) & \(12\,500\) & vertices, there are \(2\) graphs \\ \hline \(31\,250\) & \(62\,500\) & \(125\,000\) & covers of the graphs \\ & \(62\,500\) & \(125\,000\) & with \(6\,250\) vertices, \\ & \(62\,500\) & \(125\,000\) & there are five graphs \\ & \(62\,500\) & \(62\,500\) & \\ \hline \(65\,610\) & \(131\,220\) &? & cover of the graph with \(810\) vertices, only one graph \\ \hline \(2\cdot 5^{\ell}\) & \(4\cdot 5^{\ell}\) & & \(7\leq\ell\leq 34\) \\ \hline \end{tabular} \end{table} Table 1. Exceptional cases for Theorem 1.2 Let \(\Gamma\) be a graph, and let \(N\leq\operatorname{Aut}(\Gamma)\). The _normal quotient_\(\Gamma/N\) is the graph whose vertices are the \(N\)-orbits of \(V\Gamma\), and two \(N\)-orbits \(\alpha^{N}\) and \(\beta^{N}\) are adjacent if there exists an edge \(\{\alpha^{\prime},\beta^{\prime}\}\in E\Gamma\) such that \(\alpha^{\prime}\in\alpha^{N}\) and \(\beta^{\prime}\in\beta^{N}\). Note that the valency of \(\Gamma/N\) is at most the valency of \(\Gamma\), and that, whenever \(\Gamma\) is connected, so is \(\Gamma/N\). Furthermore, if the group \(N\) is normal in some \(G\leq\operatorname{Aut}(\Gamma)\), then \(G/N\) acts (possibly unfaithfully) on \(\Gamma/N\). If the group \(G\) acts vertex- or arc-transitively on \(\Gamma\), then \(G/N\) has the same property on \(\Gamma/N\). The following result is inspired by an analogous result for \(4\)-valent graphs in [13, Lemma 1.13]. **Lemma 2.3**.: _Let \(\Gamma\) be a connected cubic graph, let \(\alpha\) be a vertex of \(\Gamma\), let \(G\) be a vertex-transitive subgroup of \(\operatorname{Aut}(\Gamma)\) and let \(N\) be a semiregular normal subgroup of \(G\). Suppose \(G_{\alpha}\) is a non-identity \(2\)-group and that the normal quotient \(\Gamma/N\) is a cycle of length \(r\geq 3\), and denote by \(K\) the kernel of the action of \(G\) on the \(N\)-orbits on \(V\Gamma\). Then either_ 1. \(G_{\alpha}\) _has order_ \(2\) _and_ \(|K_{\alpha}|=1\)_, or_ 2. \(r\) _is even and_ \(G_{\alpha}=K_{\alpha}\) _is an elementary abelian_ \(2\)_-group of order at most_ \(2^{r/2}\)_._ Proof.: Let \(\Delta_{0},\Delta_{1},\ldots,\Delta_{r-1}\) be the orbits of \(N\) in its action on \(V\Gamma\). Since \(\Gamma/N\) is a cycle, we may assume that \(\Delta_{i}\) is adjacent to \(\Delta_{i-1}\) and \(\Delta_{i+1}\) with indices computed modulo \(r\). Moreover, without loss of generality, we suppose that \(\alpha\in\Delta_{0}\). As \(G_{\alpha}\) is a non-identity \(2\)-group, by a connectedness argument, \(G_{\alpha}\) induces a group of order \(2\) in its action on the neighbourhood of \(\alpha\). In particular, \(G_{\alpha}\) fixes a unique neighbour of \(\alpha\). As usual, for each \(\beta\in V\Gamma\), let \(\beta^{\prime}\) be the unique neighbour of \(\beta\) fixed by \(G_{\beta}\). Suppose that \(\{\alpha,\alpha^{\prime}\}\) is contained in an \(N\)-orbit. Since \(\alpha\in\Delta_{0}\), we deduce \(\alpha^{\prime}\in\Delta_{0}\). Let \(\beta\) and \(\gamma\) be the other two neighbours of \(\alpha\). As \(\Gamma/N\) is a cycle of length \(r\geq 3\), we have \(\beta\in\Delta_{1}\) and \(\gamma\in\Delta_{r-1}\). Since \(\operatorname{Aut}(\Gamma/N)\) is a dihedral group of order \(2r\) and since \(G_{\alpha}\) contains an element swapping \(\beta\) and \(\gamma\), we deduce \(|G_{\alpha}:K_{\alpha}|=2\). Now, \(K_{\alpha}\) fixes by definition each \(N\)-orbit and hence it fixes setwise \(\Delta_{1}\) and \(\Delta_{r-1}\). Therefore, \(K_{\alpha}\) fixes \(\beta\) and \(\gamma\), because \(\beta\) is the unique neighbour of \(\alpha\) in \(\Delta_{1}\) and \(\gamma\) is the unique neighbour of \(\alpha\) in \(\Delta_{r-1}\). This shows that \(K_{\alpha}\) fixes pointwise the neighbourhood of \(\alpha\); now, a connectedness argument shows that \(K_{\alpha}=1\). In particular, part (1) is satisfied. For the rest of the argument, we suppose that \(\{\alpha,\alpha^{\prime}\}\) is not contained in an \(N\)-orbit. This means that \(\alpha\) has two neighbours in an \(N\)-orbit, say \(\Delta_{1}\), and only one neighbour in the other \(N\)-orbit, say \(\Delta_{r-1}\). (Thus \(\alpha^{\prime}\in\Delta_{r-1}\) and \(\beta,\gamma\in\Delta_{1}\).) This implies that \(r\) is even and, for every \(i\in\{0,\ldots,r/2-1\}\), each vertex in \(\Delta_{2i}\) has two neighbours in \(\Delta_{2i+1}\) and only one neighbour in \(\Delta_{2i-1}\). Therefore, \(G/K\) is a dihedral group of order \(r\) when \(r\geq 8\) and \(G/K\) is elementary abelian of order \(4\) when \(r=4.\) Morever, \(G/K\) acts regularly on \(\Gamma/N\) and hence \(G_{\alpha}=K_{\alpha}\). It remains to show that \(K_{\alpha}\) is an elementary abelian \(2\)-group of order at most \(2^{r}\). Since \(N\) is normal in \(G\), the orbits of \(N\) on the edge-set \(E\Gamma\) form a \(G\)- invariant partition of \(E\Gamma\). We claim that, no two edges incident to a fixed vertex of \(\Gamma\) belong to the same \(N\)-edge-orbit. We argue by contradiction and we suppose that \(\alpha\) has two distinct neighbours \(v\) and \(w\) such that the edges \(\{\alpha,v\}\) and \(\{\alpha,w\}\) are in the same \(N\)-edge-orbit. In particular, there exists \(n\in N\) with \(\{\alpha,v\}^{n}=\{\alpha,w\}\). This gives \(\alpha^{n}=\alpha\) and \(v^{n}=w\), or \(\alpha^{n}=w\) and \(v^{n}=\alpha\). Since there are no edges inside an \(N\)-orbit, we cannot have \(\alpha^{n}=w\) and \(v^{n}=\alpha\). Therefore, \(\alpha^{n}=\alpha\) and \(v^{n}=w\). Since \(N\) acts semiregularly on \(V\Gamma\), we have \(n=1\) and hence \(v=v^{n}=w\), which is a contradiction. Since \(G\) is vertex-transitive, the edges between \(\Delta_{2i}\) and \(\Delta_{2i+1}\) are partitioned into precisely two \(N\)-edge-orbits, let's call these two orbits \(\Theta_{2i}\) and \(\Theta^{\prime}_{2i}\); whereas the edges between \(\Delta_{2i}\) and \(\Delta_{2i-1}\) form one \(N\)-edge-orbit, which we call \(\Theta^{\prime\prime}_{2i}\). An element of \(K\) (fixing setwise the sets \(\Delta_{2i}\) and \(\Delta_{2i+1}\)) can map an edge in \(\Theta_{2i}\) only to an edge in \(\Theta_{2i}\) or to an edge in \(\Theta^{\prime}_{2i}\). On the other hand, as \(G_{\alpha}\) is not the identity group, for every vertex \(v\in\Delta_{2i}\) there is an element \(g\in G_{v}\) which maps an edge of \(\Theta_{2i}\) incident to \(v\) to the edge of \(\Theta^{\prime}_{2i}\) incident to \(v\); and this element \(g\) is clearly an element of \(K\), because \(G/K\) acts semiregularly on \(\Gamma/N\). This shows that the orbits of \(K\) on \(E\Gamma\) are precisely the sets \(\Theta_{2i}\cup\Theta^{\prime}_{2i},\Theta^{\prime\prime}_{2i}\), \(i\in\{0,\ldots,r/2-1\}\). In other words, each orbit of the induced action of \(K\) on the set \(E\Gamma/N=\{e^{N}:e\in E\Gamma\}\) has length at most \(2\). Consequently, if \(X\) denotes the kernel of the action of \(K\) on \(E\Gamma\), then \(K/X\) embeds into \(\operatorname{Sym}(2)^{r/2}\) and is therefore an elementary abelian \(2\)-group of order at most \(2^{r/2}\). Let us now show that \(X=N\). Clearly, \(N\leq X\). Let \(v\in\Delta_{0}\). Since \(N\) is transitive on \(\Delta_{0}\), it follows that \(X=NX_{v}\). Suppose that \(X_{v}\) is non-trivial and let \(g\) be a non-trivial element of \(X_{v}\). Further, let \(w\) be a vertex which is closest to \(v\) among all the vertices not fixed by \(g\), and let \(v=v_{0}\sim v_{1}\sim\dots\sim v_{m}=w\) be a shortest path from \(v\) to \(w\). Then \(v_{m-1}\) is fixed by \(g\). Since \(g\) fixes each \(N\)-edge-orbit setwise and since every vertex of \(\Gamma\) is incident to at most one edge in each \(N\)-edge-orbit, it follows that \(g\) fixes all the neighbours of \(v_{m-1}\), thus also \(v_{m}\). This contradicts our assumptions and proves that \(X_{v}\) is a trivial group, and hence that \(X=N\). ### Praeger-Xu graphs To introduce the infinite family of split Praeger-Xu graphs \(\operatorname{sC}(r,s)\), we need two ingredients: the Praeger-Xu graphs and the splitting operation. This section is devoted to introduce the ubiquitous \(4\)-valent Praeger-Xu graphs \(\operatorname{C}(r,s)\) and their automorphism group. This infinite family was originally defined in [11], and it was studied in detail by Gardiner, Praeger and Xu in [11, 12], and more recently in [10]. Here, we introduce them through their directed counterparts defined in [13]. Let \(r\) be an integer, \(r\geq 3\). Then \(\vec{\operatorname{C}}(r,1)\) is the lexicographic product of a directed cycle of length \(r\) with an edgeless graph on \(2\) vertices. In other words, \(\operatorname{V\vec{\operatorname{C}}}(r,1)=\mathbb{Z}_{r}\times\mathbb{Z}_{2}\) with the out-neighbours of a vertex \((x,i)\) being \((x+1,0)\) and \((x+1,1)\). We will identify the \((s-1)\)-arc \[(x,\varepsilon_{0})\sim(x+1,\varepsilon_{1})\sim\dots\sim(x+s-1,\varepsilon_{ s-1})\] with the pair \((x;k)\) where \(k=\varepsilon_{0}\varepsilon_{1}\dots\varepsilon_{s-1}\) is a string in \(\mathbb{Z}_{2}\) of length \(s\). For \(s\geq 2\), let \(\operatorname{V\vec{\operatorname{C}}}(r,s)\) be the set of all \((s-1)\)-arcs of \(\vec{\operatorname{C}}(r,1)\), let \(h\) be a string in \(\mathbb{Z}_{2}\) of length \(s-1\) and let \(\varepsilon\in\mathbb{Z}_{2}\). The out-neighbours of \((x;\varepsilon h)\in\operatorname{V\vec{\operatorname{C}}}(r,s)\) are \((x+1;h0)\) and \((x+1;h1)\). The _Praeger-Xu graph_\(\operatorname{C}(r,s)\) is then defined as the underlying graph of \(\vec{\operatorname{C}}(r,s)\). We have that \(\operatorname{C}(r,s)\) is a connected \(4\)-valent graph with \(r2^{s}\) vertices (see [13, Theorem 2.8]). Let us now discuss the automorphisms of the graphs \(\operatorname{C}(r,s)\). Every automorphism of \(\vec{\operatorname{C}}(r,1)\) (\(\operatorname{C}(r,1)\), respectively) acts naturally as an automorphism of \(\vec{\operatorname{C}}(r,s)\) (\(\operatorname{C}(r,s)\), respectively) for every \(s\geq 2\). For \(i\in\mathbb{Z}_{r}\), let \(\tau_{i}\) be the transposition on \(\operatorname{V\vec{\operatorname{C}}}(r,1)\) swapping the vertices \((i,0)\) and \((i,1)\) while fixing every other vertex. This is clearly an automorphism of \(\vec{\operatorname{C}}(r,1)\), and thus also of \(\vec{\operatorname{C}}(r,s)\) for \(s\geq 2\). Let \[K:=\langle\tau_{i}\mid i\in\mathbb{Z}_{r}\rangle,\] and observe that \(K\cong C_{2}^{r}\). Further, let \(\rho\) and \(\sigma\) be the permutations on \(\operatorname{V\vec{\operatorname{C}}}(r,1)\) defined by \[(x,i)^{\rho}:=(x+1,i)\quad\text{and}\quad(x,i)^{\sigma}:=(x,-i).\] Then \(\rho\) is an automorphism of \(\vec{\operatorname{C}}(r,1)\) or order \(r\), and \(\sigma\) is an involutory automorphism of \(\operatorname{C}(r,1)\) (but not of \(\vec{\operatorname{C}}(r,1)\)). Observe that the group \(\langle\rho,\sigma\rangle\) normalises \(K\). Let \[H:=K\langle\rho,\sigma\rangle\quad\text{and}\quad H^{+}:=K\langle\rho\rangle.\] Then, for every \(r\geq 3\) and \(s\geq 1\), \[C_{2}\text{wr}D_{r}\cong H\leq\operatorname{Aut}(\operatorname{C}(r,s))\quad \text{and}\quad C_{2}\text{wr}C_{r}\cong H^{+}\leq\operatorname{Aut}(\vec{ \operatorname{C}}(r,s)).\] Moreover, \(H\) (\(H^{+}\), respectively) acts arc-transitively on \(\operatorname{C}(r,s)\) (\(\vec{\operatorname{C}}(r,s)\), respectively) whenever \(1\leq s\leq r-1\). With three exceptions, the groups \(H\) and \(H^{+}\) are in fact the full automorphism groups of \(\operatorname{C}(r,s)\) and \(\vec{\operatorname{C}}(r,s)\), respectively. **Lemma 2.4** ([12, Theorem 2.13] and [13, Theorem 2.8]).: _The automorphism group of a directed Praeger-Xu graph is_ \[\operatorname{Aut}(\vec{\operatorname{C}}(r,s))=H^{+},\] _and, if \(r\neq 4\), the automorphism group of a Praeger-Xu graph is_ \[\operatorname{Aut}(\operatorname{C}(r,s))=H.\] _Moreover,_ \[|\operatorname{Aut}(\operatorname{C}(4,1)):H|=9,\quad|\operatorname{Aut}( \operatorname{C}(4,2)):H|=3\] \[\text{and}\quad|\operatorname{Aut}(\operatorname{C}(4,3)):H|=2.\] The Praeger-Xu graphs also admit the following algebraic characterization. **Lemma 2.5** ([11, Lemma 1.11] or [1, Lemma 3.7]).: _Let \(\Gamma\) be a finite connected \(4\)-valent graph, let \(G\) be a vertex- and edge-transitive group of automorphisms of \(\Gamma\), and let \(N\) be a minimal normal subgroup of \(G\). If \(N\) is a \(2\)-group and \(\Gamma/N\) is a cycle of length at least \(3\), then \(\Gamma\) is isomorphic to a Praeger-Xu graph \(\operatorname{C}(r,s)\) for some positive integers \(r\leq 3\) and \(s\leq r-1\)._ For more details on Praeger-Xu graphs, we refer also to [1, 1, 10]. ### The splitting and merging operations The operation of _splitting_ were introduced in [11, Construction 11]. Let \(\Delta\) be a \(4\)-valent graph, let \(\mathcal{C}\) be a partition of \(E\Delta\) into cycles. By applying the splitting operation to the pair \((\Delta,\mathcal{C})\), we obtain the graph, denoted by \(\operatorname{s}(\Delta,\mathcal{C})\), whose vertices are \[V\operatorname{s}(\Delta,\mathcal{C}):=\{(\alpha,C)\in V\Delta\times\mathcal{C }\mid\alpha\in VC\},\] and such that two vertices \((\alpha,C)\) and \((\beta,D)\) are declared adjacent if either \(C\neq D\) and \(\alpha=\beta\), or \(C=D\) and \(\alpha\) and \(\beta\) are adjacent in \(\Delta\). Observe that, since \(\Delta\) is \(4\)-valent, there are precisely \(2\) cycles in \(\mathcal{C}\) passing through \(\alpha\), thus \(\operatorname{s}(\Delta,\mathcal{C})\) is cubic and \(|V\operatorname{s}(\Delta,\mathcal{C})|=2|V\Delta|\). Notice that, for any \(G\leq\operatorname{Aut}(\Delta)\) such that its action is \(\mathcal{C}\)-invariant, \(G\leq\operatorname{Aut}(\operatorname{s}(\Delta,\mathcal{C}))\). Moreover, if \(G\) is also arc-transitive on \(\Delta\) (in particular, the action of \(G_{\alpha}\) on the neighbourhood of \(\alpha\) is either the Klein four group, or the cyclic group of order \(4\), or the dihedral group of order \(8\)), then \(G\) is vertex-transitive on \(\operatorname{s}(\Delta,\mathcal{C})\). For any vertex \((\alpha,C)\in\operatorname{s}(\Delta,\mathcal{C})\), \[G_{(\alpha,C)}=G_{\alpha}\cap G_{\{C\}},\] where \(G_{\{C\}}\) is the setwise stabilizer of the cycle \(C\). In particular, whenever \(G\) is arc-transitive on \(\Delta\), as \(G_{\alpha}\) switches the two cycles passing through \(\alpha\), \(|G_{\alpha}:G_{(\alpha,C)}|=2\). Now, we introduce the tentative inverse of the splitting operator: the operation of _merging_ (see [11, Construction 7]). Let \(\Gamma\) be a connected cubic graph, and let \(G\leq\operatorname{Aut}(\Gamma)\) be a vertex-transitive group such that the action of \(G_{\alpha}\) on the neighbourhood of \(\alpha\) is cyclic of order \(2\). In particular, \(G_{\alpha}\) is a non-identity \(2\)-group. Hence, \(G_{\alpha}\) fixes a unique neighbour of \(\alpha\), which we denote by \(\alpha^{\prime}\). Observe that \((\alpha^{\prime})^{\prime}=\alpha\) and \(G_{\alpha}=G_{\alpha^{\prime}}\). Thus, the set \(\mathcal{M}:=\{\{\alpha,\alpha^{\prime}\}\mid\alpha\in V\Gamma\}\) is a complete matching of \(\Gamma\), while the edges outside \(\mathcal{M}\) form a \(2\)-factor, which we denote by \(\mathcal{F}\). The group \(G\) in its action on \(E\Gamma\) fixes setwise both \(\mathcal{F}\) and \(\mathcal{M}\), and acts transitively on the arcs of each of these two sets. Let \(\Delta\) be the graph with vertex-set \(\mathcal{M}\) and two vertices \(e_{1},e_{2}\in\mathcal{M}\) are declared adjacent if they are (as edges of \(\Gamma\)) at distance \(1\) in \(\Gamma\). We may also think of \(\Delta\) as being obtained by contracting all the edges in \(\mathcal{M}\). Let \(\mathcal{C}\) be the decomposition of \(E\Delta\) into cycles given by the connected components of the the \(2\)-factor \(\mathcal{F}\). The merging operation applied to the pair \((\Gamma,G)\) gives as a result the pair \((\Delta,\mathcal{C})\). Two infinite families of cubic graph have degenerate merged graphs, namely the circular and Mobius ladders. For any \(n\geq 3\), a _circular ladder graph_ is a graph isomorphic to the Cayley graph \[\operatorname{Cay}(\mathbb{Z}_{n}\times\mathbb{Z}_{2},\{(0,1),(1,0),(-1,0)\}),\] and, for any \(n\geq 2\), a _Mobius ladder graph_ is a graph isomorphic to the Cayley graph \[\operatorname{Cay}(\mathbb{Z}_{2n},\{1,-1,n\}).\] Observe that we consider the complete graph on \(4\) vertices to be a Mobius ladder graph. **Lemma 2.6**.: _Let \(\Lambda\) be a (circular or Mobius) ladder, and let \(G\leq\operatorname{Aut}(\Lambda)\) be a vertex-transitive group. Then either \(|V\Lambda|\leq 10\) or \(G\) contains a semiregular element of order at least \(6\)._ **Lemma 2.7**.: _Unless \(\Lambda\) is isomorphic to the skeleton of the cube or the complete graph on \(4\) vertices, the automorphism group of a (circular or Mobius) ladder \(\Lambda\) contains \(N\leq\operatorname{Aut}(\Lambda)\), a normal cyclic subgroup of order \(2\), such that the normal quotient \(\Lambda/N\) is a cycle._ **Remark 2.8**.: Let \(\Gamma\) be a connected cubic graph that is neither a circular nor a Mobius ladder, and let \(G\leq\operatorname{Aut}(\Gamma)\) be a vertex-transitive group such that the action of \(G_{\alpha}\) on the neighbourhood of \(\alpha\) is cyclic of order \(2\). Then [13, Lemma 9 and Theorem 10] imply that the merging operator applied to the pair \((\Gamma,G)\) gives a pair \((\Delta,\mathcal{C})\) such that \(\Delta\) is \(4\)-valent, and the action of \(G\) on \(\Delta\) is faithful, arc-transitive and \(\mathcal{C}\)-invariant. This result motivates the use of the word _degenerate_ when referring to the circular and Mobius ladders. In view of [13, Theorem 12], the merging operator is the right-inverse of the splitting one, or, more explicitly, unless \(\Gamma\) is a (circular or Mobius) ladder, splitting a pair \((\Delta,\mathcal{C})\) obtained via the merging operation on \((\Gamma,G)\) results in the starting pair. For our purposes, we need to show that the merging operator is also the left-inverse of the splitting one. **Theorem 2.9**.: _Let \(\Delta\) be a \(4\)-valent graph, let \(\mathcal{C}\) be a partition of \(E\Delta\) into cycles, and let \(G\leq\operatorname{Aut}(\Delta)\) be an arc-transitive and \(\mathcal{C}\)-invariant group. Then the merging operation can be applied to the pair \((\operatorname{s}(\Delta,\mathcal{C}),G)\) and it gives as a result \((\Delta,\mathcal{C})\)._ Proof.: Let \((\alpha,C)\) be a generic vertex of \(\operatorname{s}(\Delta,\mathcal{C})\), let \(D\in\mathcal{C}\) be the other cycle of the partition passing through \(\alpha\), and let \(\beta,\gamma\in V\Delta\) be the neighbours of \(\alpha\) in \(C\). Then, using the fact that \(G\) is arc-transitive on \(C\), \[(\alpha,D)^{G_{(\alpha,C)}}=\{(\alpha,D)\}\quad\text{and}\quad(\beta,C)^{G_{( \alpha,C)}}=(\gamma,C)^{G_{(\alpha,C)}}=\{(\beta,C),(\gamma,C)\}.\] Therefore, for any vertex \((\alpha,C)\in Vs(\Delta,\mathcal{C})\), \(G_{(\alpha,C)}\) acts on the neighbourhood of \((\alpha,C)\) as a cyclic group of order \(2\). Hence, we can apply the merging operation to the pair \((\operatorname{s}(\Delta,\mathcal{C}),G)\). Furthermore, we deduce that \[\mathcal{M}=\{\{(\alpha,C),(\alpha,D)\}\mid\alpha\in VC\cap VD\}\] is a complete matching for \((\operatorname{s}(\Delta,\mathcal{C}),G)\). Thus the connected components of the resulting \(2\)-factor \(\mathcal{F}=E\mathrm{s}(\Delta,\mathcal{C})\setminus\mathcal{M}\) can be identified with the cycles of \(\mathcal{C}\). Now, consider the map defined as \[\theta:\mathcal{M}\to V\Delta,\,\{(\alpha,C),(\alpha,D)\}\mapsto\alpha.\] Since a generic vertex \(\alpha\in V\Delta\) belongs to precisely two distinct cycles, \(\theta\) is bijective. Moreover, \(\beta\) is adjacent to \(\alpha\) in \(\Delta\) if, and only if, either \(\{(\alpha,C),(\beta,C)\}\) or \(\{(\alpha,D),(\beta,D)\}\) is an edge in \(\operatorname{s}(\Delta,\mathcal{C})\). In particular, \(\theta\) also induces the bijection \[\hat{\theta}:\mathcal{F}\to E\Delta,\,\{(\alpha,C),(\beta,C)\}\mapsto\{ \alpha,\beta\},\] which sends the connected components of \(\mathcal{F}\) into disjoint cycles of \(\mathcal{C}\). This shows that \(\theta\) is a graph isomorphism between \(\Delta\) and the \(4\)-valent graph obtained by merging the pair \((\operatorname{s}(\Delta,\mathcal{C}),G)\), and that the resulting cycle partition is isomorphic to \(\mathcal{C}\). **Corollary 2.10**.: _Let \(\Delta\) be a \(4\)-valent graph, let \(\mathcal{C}\) be a partition of \(E\Delta\) into cycles, and let \(G\leq\operatorname{Aut}(\Delta)\) be an arc-transitive and \(\mathcal{C}\)-invariant group (and so \(G\leq\operatorname{Aut}(\operatorname{s}(\Delta,\mathcal{C}))\)). Suppose that \(G\leq A\leq\operatorname{Aut}(\operatorname{s}(\Delta,\mathcal{C}))\) is a vertex-transitive group such that, for any vertex \(\alpha\in Vs(\Delta,\mathcal{C})\), the action of \(A_{\alpha}\) on the neighbourhood of \(\alpha\) is cyclic of order \(2\), then \(A\leq\operatorname{Aut}(\Delta)\)._ Proof.: Note that, as \(G\) is a subgroup of \(A\), the actions of \(G\) and \(A\) on the neighbourhood of any vertex \(\alpha\) coincide. In particular, applying the merging operation to the pair \((\operatorname{s}(\Delta,\mathcal{C}),A)\) yields the same result as doing it on the pair \((\operatorname{s}(\Delta,\mathcal{C}),G)\), that is, by Theorem 2.9, in both cases we obtain \((\Delta,\mathcal{C})\). The result follows by Remark 2.8. ### Split Praeger-Xu graphs In this section, we bring together the information of Sections 2.3 and 2.4 to define and study the split Praeger-Xu graphs. All the partitions of the edge set of a Praeger-Xu graph into disjoint cycles were classified in [1, Section 6]. Regardless of the choice of the parameters \(r\) and \(s\), there exists a decomposition into disjoint cycles of length \(4\) of the form \[(x;0h)\sim(x+1;h0)\sim(x;1h)\sim(x+1;h1)\] for some \(x\in\mathbb{Z}_{r}\), and for some string \(h\) in \(\mathbb{Z}_{2}\) of length \(s-1\). We denote this partition by \(\mathcal{S}\). Moreover, observe that the only two neighbours of \((x;0h)\) in the \(K\)-orbit containing \((x+1;h0)\) are \((x+1;h1)\) and \((x+1;h0)\), and similarly the only two neighbours of \((x+1;h0)\) in the \(K\)-orbit containing \((x;0h)\) are \((x;1h)\) and \((x;0h)\). Therefore, \(\mathcal{S}\) is the unique decomposition such that each cycle intersects exactly two \(K\)-orbits. **Definition 2.11**.: The _split Praeger-Xu graph_\(\mathrm{sC}(r,s)\) is the cubic graph obtained from the pair \((\mathrm{C}(r,s),\mathcal{S})\) by applying the splitting operation. **Lemma 2.12**.: _For some positive integers \(r\geq 3\) and \(s\leq r-1\), the automorphism group of the split Praeger-Xu graph is_ \[\mathrm{Aut}(\mathrm{sC}(r,s))=H,\] _and it acts transitively on \(V\mathrm{sC}(r,s)\)._ Proof.: Note that \(H\) acts on the set of \(K\)-orbits in \(V\mathrm{C}(r,s)\), thus each automorphism of \(H\) maps any cycle of \(\mathcal{S}\) to a cycle intersecting exactly two \(K\)-orbits, that is, to an element of \(\mathcal{S}\). Thus, \(H\) is \(\mathcal{S}\)-invariant, and so \(H\leq\mathrm{Aut}(\mathrm{sC}(r,s))\). We now show the opposite inclusion. Let \(\alpha\in V\mathrm{sC}(r,s)\) be a generic vertex, aiming for a contradiction we suppose that \(\mathrm{Aut}(\mathrm{sC}(r,s))_{\alpha}\) does not act on the neighbourhood of \(\alpha\) as a cycle of order \(2\). Let \(\alpha^{\prime},\beta,\gamma\) be the neighbours of \(\alpha\) where \(\alpha^{\prime}\) is fixed by the action of \(H_{\alpha}\), and let \(\delta\) be the unique vertex at distance \(1\) from both \(\beta\) and \(\gamma\). Since \(H_{\alpha}\leq\mathrm{Aut}(\mathrm{sC}(r,s))_{\alpha}\), our hypothesis implies that there exists an element \(g\in\mathrm{Aut}(\mathrm{sC}(r,s))_{\alpha}\) such that \(\beta^{g}=\alpha^{\prime}\) and \(\gamma^{g}=\gamma\). This yields a contradiction because \(\delta^{g}\) is ill-defined: in fact there is no vertex of \(\mathrm{sC}(r,s)\) at distance \(1\) from both \(\gamma^{g}\) and \(\delta^{g}\). Recall that, from Lemma 2.4, if \(r\neq 4\), then \(H=\mathrm{Aut}(\mathrm{C}(r,s))\), and so, by Corollary 2.10, \(\mathrm{Aut}(\mathrm{sC}(r,s))\leq H\). On the other hand, if \(r=4\), observe that \(H\) is vertex-transitive on \(\mathrm{sC}(r,s)\) and \(\mathrm{Aut}(\mathrm{sC}(r,s))_{\alpha}=H_{\alpha}\), hence the equality holds by Frattini's argument. **Lemma 2.13**.: _Let \(G\) be a vertex-transitive subgroup of \(\mathrm{Aut}(\mathrm{sC}(r,s))\). Then either \(G\) contains a semiregular element of order at least \(6\), or \((\mathrm{sC}(r,s),G)\) is one of the examples in Table 1 marked with the symbol \(\dagger\)._ Proof.: From Lemma 2.12, we have \(G\leq H=K\langle\rho,\sigma\rangle\). Observe that \(G/G\cap K\cong\langle\rho,\sigma\rangle\), otherwise \(G\) is not transitive on the vertices of the split graph \(\mathrm{sC}(r,s)\). From this, it follows that \(G=V\langle\rho f,\sigma g\rangle\), for some \(f,g\in K\), where \(V=G\cap K\). Since \(\rho\) has order \(r\), we get that \[(\rho f)^{r} =\rho f\rho\ldots(\rho f\rho)f\] \[=\rho f\rho\ldots(\rho^{2}\rho^{-1}f\rho)f\] \[=\rho f\rho\ldots\rho^{2}f^{\rho}f\] \[=\rho f\rho^{r-1}\ldots f^{\rho}f\] \[=f^{\rho^{r-1}}\ldots f^{\rho}f\] is an element of \(V\). Since \(V\) is an elementary abelian \(2\)-group, the element \(\rho f\) has order either \(r\) or \(2r\). Recalling that \(V\leq K\), \[(\rho f)^{r}=\prod_{i=0}^{r-1}\tau_{i}^{a_{i}}\] with \(a_{i}\in\{0,1\}\). Furthermore, \[(\rho f)^{r}\rho =\rho(f\rho\dots\rho f\rho f\rho)\] \[=\rho(ff^{\rho}\dots f^{\rho^{r-2}}f^{\rho^{r-1}})\] \[=\rho(f^{\rho^{r-1}}\dots f^{\rho}f)\] \[=\rho(\rho f)^{r}\] thus \(\rho\) centralizes \((\rho f)^{r}\). From this, and from the fact that \(\langle\rho\rangle\) acts transitively on \(\{\tau_{0},\dots,\tau_{r-1}\}\), we deduce that \[(\rho f)^{r}=\prod_{i=0}^{r-1}\tau_{i}^{a}\] where \(a\) is either \(0\) or \(1\). If \(a=0\), then \(\rho f\) is a semiregular element of order \(r\). In particular, either \(r\geq 6\), or the number of vertices of \(\operatorname{sC}(r,s)\) is \(r2^{s}\), which is bounded by \(5\cdot 2^{5}=160\), and we finish by Remark 1.3. On the other hand, if \(a=1\), \(\rho f\) has order \(2r\), and it corresponds to the so-called _super flip_ of the Praeger-Xu graph \(\operatorname{C}(r,s)\). Since \((\rho f)^{r}\) does not fix any vertex in \(\operatorname{C}(r,s)\), and since the vertex-stabilizers for a split graph has index \(2\) in the vertex-stabilizer of the starting graph, for any vertex \(\alpha\in V\operatorname{sC}(r,s)\), we obtain that \((\rho f)^{r}\notin G_{\alpha}\). Hence \(\rho f\) is semiregular of order \(2r\geq 6\). To conclude this section, we show a result mimicking Lemma 2.5 for cubic graphs. **Lemma 2.14**.: _Let \(\Gamma\) be a connected cubic vertex-transitive graph, let \(G\leq\operatorname{Aut}(\Gamma)\) be a vertex-transitive group such that the action of \(G_{\alpha}\) on the neighbourhood of \(\alpha\) is cyclic of order \(2\), and let \(N\) be a minimal normal subgroup of \(G\). If \(N\) is a \(2\)-group and \(\Gamma/N\) is a cycle of length at least \(3\), then \(\Gamma\) is isomorphic either to a circular ladder, or to a Mobius ladder, or to \(\operatorname{sC}(r,s)\), for some positive integers \(r\geq 3\) and \(s\leq r-1\)._ Proof.: We already know by Lemma 2.7 that both ladders admit a cyclic quotient graph, thus we can suppose that \(\Gamma\) is not isomorphic to a circulant ladder or to a Mobius ladder. By hypothesis, we can apply the merging operator to \((\Gamma,G)\), obtaining the pair \((\Delta,\mathcal{C})\). Since we have excluded the possibility of \(\Gamma\) being a ladder, by Remark 2.8, \(\Delta\) is \(4\)-valent, and the action of \(G\) on \(\Delta\) is faithful, arc-transitive and \(\mathcal{C}\)-invariant. Since the action of \(N\) cannot map edges in \(\mathcal{M}\) to edges in \(\mathcal{F}\), the quotient graph \(\Gamma/N\) retains a partition into two disjoint sets of edges, namely \(\mathcal{M}/N\) and \(\mathcal{F}/N\). Moreover, since \(\mathcal{M}\) is a complete matching, each edge in \(\mathcal{M}/N\) is adjacent to precisely two edges in \(\mathcal{F}/N\), and vice versa. This implies that the edges of \(\Delta/N\) coincide with the elements of \(\mathcal{F}/N\), two of which are adjacent if they share the same neighbour in \(\mathcal{M}/N\). If \(r\geq 6\), then \(\Delta/N\) is a cycle of length \(r/2\). From Lemma 2.5, we deduce that \(\Delta\) is isomorphic to \(\operatorname{C}(r,s)\), for some positive integers \(r\geq 3\) and \(s\leq r-1\). Observe that, as \(\mathcal{C}\) coincides with the connected components of \(\mathcal{F}\), each cycle in \(\mathcal{C}\) intersects precisely two \(K\)-orbits. This implies that \(\mathcal{C}=\mathcal{S}\), and so [13, Theorem 12] yields that \(\Gamma\) is isomorphic to \[\operatorname{s}(\Delta,\mathcal{C})=\operatorname{s}(\operatorname{C}(r,s), \mathcal{S})=\operatorname{sC}(r,s).\qed\] Now, suppose that \(r=4\). In this case, we have that \(G\) is a \(2\)-group, hence \(|N|=2\) and \(|V\Gamma|=8\), and so the only possibility is for \(\Gamma\) to be a (cirular or Mobius) ladder, which we already excluded. ## 3. Proof of Theorem 1.2 We aim to prove Theorem 1.2 by contradiction. In this section we will assume the following. **Hypothesis 3.1**.: Let \(\Gamma\) be a connected cubic graph, and let \(G\leq\operatorname{Aut}(\Gamma)\) such that the pair \((\Gamma,G)\) is a minimal counterexample to Theorem 1.2, first with respect to the cardinality of \(V\Gamma\), and then to the order of \(G\). From Remark 1.3, we have \(|V\Gamma|>1\,280\). Let \(\alpha\) be an arbitrary vertex of \(\Gamma\). Let \(N\) be a minimal normal subgroup of \(G\). Since \(\Gamma\) is connected, the stabilizer \(G_{\alpha}\) is a \(\{2,3\}\)-group. More generally, if \(\Delta\) is a connected \(d\)-regular graph, then no prime bigger than \(d\) divides the order of a vertex stabilizer (this follows from an elementary connectedness argument, see for instance [12, Lemma 3.1] or [13, Lemma 3.2]). Moreover, \(G\) must be a \(\{2,3,5\}\)-group, otherwise we can find derangements of prime order at least \(7\), hence semiregular elements. Since \(N\) is a minimal normal subgroup of \(G\), \(N\) is a direct product of simple groups, any two of which are isomorphic. Clearly, \(N\) is a \(\{2,3,5\}\)-group, and \(N_{\alpha}\) is a \(\{2,3\}\)-group. Thus \(N\) is a direct product \(S^{l}\), for some positive integer \(l\) and for some simple \(\{2,3,5\}\)-group \(S\). Using the Classification of Finite Simple Groups, we see that the collection of simple \(\{2,3,5\}\)-groups consists of \[C_{2},\,C_{3},\,C_{5},\,\text{Alt}(5),\,\text{Alt}(6),\,\text{PSp}(4,3),\] see for instance [16]. **Lemma 3.2**.: _Under Hypothesis 3.1, if \(N_{\alpha}\) is a \(2\)-group, then \(N\) is an elementary abelian \(p\)-group, for some prime \(p\in\{2,3,5\}\)._ Proof.: If \(N\) is abelian, then there is nothing to prove. Thus, suppose that \(N=S^{l}\), where \(S\in\{\text{Alt}(5),\text{Alt}(6),\text{PSp}(4,3)\}\) and \(l\geq 1\). Assume \(l\geq 2\). Let \(S\) and \(T\) be two distinct direct factors of \(N\). Then \(S_{\alpha}\) and \(T_{\alpha}\) are \(2\)-groups, because so is \(N_{\alpha}\). Thus, by Lemma 2.1, all the \(3\)- and \(5\)-elements of \(S\) and \(T\) are semiregular. Applying Lemma 2.2, we obtain that \(S\times T\), contains a semiregular element of order \(15\). Thus \(G\) contains a semiregular element of order exceeding \(6\), contradicting Hypothesis 3.1. Assume \(l=1\). If \(N=\text{PSp}(4,3)\), then Lemma 2.1 implies that the \(3\)-elements in \(N\) are semiregular. As \(\text{PSp}(4,3)\) contains elements of order \(9\), \(G\) contains a semiregular element of order \(9\), contradicting Hypothesis 3.1. Thus, \(N\) is either \(\text{Alt}(5)\) or \(\text{Alt}(6)\). We claim that \(G\) is almost simple, that is, \(N\) is the unique minimal normal subgroup of \(G\). Aiming for a contradiction, let \(M\) be a minimal normal subgroup of \(G\) distinct from \(N\). If \(\Gamma/M\) is a cubic graph, then \(M_{\alpha}=1\), and hence each element of \(M\) is semiregular. Since \([N,M]=1\), by Lemma 2.2, \(G\) contains a semiregular element of order at least \(10\), against Hypothesis 3.1. On the other hand, suppose that \(\Gamma/M\) is not cubic. Regardless of the valency of \(\Gamma/M\), the group that \(G\) induces in its action on the vertices of \(\Gamma/M\) is a subgroup of a dihedral group, hence it is a soluble group. In particular, as \(N\) is a non-abelian simple group, \(N\) acts trivially on the vertices of \(\Gamma/M\). This means that \(N\) fixes setwise each \(M\)-orbit. If \(M\) is abelian, then \(M\) acts regularly on each of its orbits. However, as \(N\) commutes with \(M\) and fixes each \(M\)-orbit, this contradicts the fact that \(N\) is non-abelian.1 Therefore, \(M\) is not abelian. In particular, there is a prime \(p\geq 5\) that divides the order of \(M\), and the elements of \(M\) of order \(p\) are semiregular. As before, applying Lemma 2.2, we get that \(NM\) contains a semiregular element of order \(3p\), a contradiction. We conclude that \(N\) is the unique minimal normal subgroup of \(G\). Footnote 1: Recall that, if \(X\leq\text{Sym}(\Omega)\) is an abelian group and \(X\) acts regularly on \(\Omega\), then \(X=\mathbf{C}_{\text{Sym}(\Omega)}(X)\). Notice that \(\text{Alt}(5)\leq G\leq\text{Sym}(5)\) or \(\text{Alt}(6)\leq G\leq\text{Aut}(\text{Alt}(6))\). A computer computation in each of these cases shows that, if \(G\leq\text{Aut}(\Gamma)\) has no semiregular elements of order at least \(6\), then \(|V\Gamma|\in\{30,60,90,180,360\}\), which contradicts Hypothesis 3.1. From here on, we divide the proof in five cases: * \(G_{\alpha}=1\); * \(G_{\alpha}\neq 1\) and \(N\) is transitive on \(V\Gamma\); * \(G_{\alpha}\neq 1\) and \(N\) has two orbits on \(V\Gamma\); * \(G_{\alpha}\neq 1\) and \(\Gamma/N\) is a cycle of length at least \(3\); * \(G_{\alpha}\neq 1\) and \(\Gamma/N\) is a cubic graph. ### \(G_{\alpha}=1\) In this case \(\Gamma\) is a Cayley graph over \(G\). This means that there exists an inverse-closed subset \(I\) of \(G\) with \(\Gamma\cong\operatorname{Cay}(G,I)\). We recall that \(\operatorname{Cay}(G,I)\) is the graph having vertex set \(G\) where two vertices \(x\) and \(y\) are declared to be adjacent if and only if \(yx^{-1}\in I\). Since \(\Gamma\) has valency \(3\), we have \(|I|=3\). Moreover, since \(\Gamma\) is connected, we have \(G=\langle I\rangle\). In particular, \(G\) is generated by at most \(3\) elements. More precisely, either \(I\) consists of three involutions or \(I\) consists of an involution and an element of order greater than \(2\) together with its inverse. In what follows we say that a finite group \(X\) satisfies \(\mathcal{P}\) if \(X\) is generated by either three involutions, or by an involution and by an element of order greater than \(2\). In particular, \(G\) satisfies \(\mathcal{P}\). Since each element of \(G\) is semiregular and since \(G\) has no semiregular elements of order at least \(6\), we deduce that each element of \(G\) has order at most \(5\). As customary, we let \[\omega(G):=\{o(g)\mid g\in G\}\] be the spectrum of \(G\). Observe that \[\{1,2\}\subseteq\omega(G)\subseteq\{1,2,3,4,5\}.\] Since \(G\) is generated by at most \(3\) elements, we deduce from Zelmanov's solution of the restricted Burnside problem that \(|G|\) is bounded above by an absolute constant. We divide the proof depending on \(\omega(G)\). Assume \(\omega(G)=\{1,2\}\). In this case, \(G\) is elementary abelian and, since \(G\) is generated by at most \(3\) elements, we deduce \(|G|\leq 8\), which contradicts Hypothesis 3.1. Assume \(\omega(G)=\{1,2,4\}\). Here, either \(G\) is generated by an element of order \(2\) and an element of order \(4\), or \(G\) is generated by three involutions. We resolve these two cases with a computer computation. Suppose first that \(G\) is generated by an involution and by an element of order \(4\). We have constructed the free group \(F:=\langle x,y\rangle\) and we have constructed the set \(W\) of words in \(x,y\) of length at most \(6\). Then, we have constructed the finitely presented group \(\bar{F}:=\langle F|x^{2},\{w^{4}:w\in W\}\rangle\). We use the "bar" notation for the projection of \(F\) onto \(\bar{F}\). Now, \(\bar{x}\) has order \(2\) and \(\bar{y}\) has order \(4\). Furthermore, each element of \(\bar{F}\) that can be written as a word in \(\bar{x}\) and \(\bar{y}\) of length at most \(6\) has order at most \(4\). (The number \(6\) was chosen arbitrarily but large enough to guarantee to get an upper limit on the cardinality of \(G\).) A computer computation shows that \(\bar{F}\) has order \(64\) and exponent \(4\). This proves that the largest group of exponent \(4\) and generated by an involution and by an element of order \(4\) has order \(64\). Now, \(G\) is a quotient of \(\bar{F}\) and hence \(|G|\leq|\bar{F}|\leq 64\), which contradicts Hypothesis 3.1. Next, suppose that \(G\) is generated by three involutions. The argument here is very similar. We have considered the free group \(F=\langle x,y,z\rangle\), and we have considered the set \(W\) of words in \(x,y,z\) of length at most \(6\). We have verified that \(\langle F|x^{2},y^{2},z^{2},\{w^{4}:w\in W\}\rangle\) has order \(1024\) and exponent \(4\). This shows that \(|G|\leq 1\,024\), which contradicts Hypothesis 3.1. Assume \(\omega(G)=\{1,2,3\}\). The groups having spectrum \(\{1,2,3\}\) are classified in [14]. Routine computations in the list of groups \(X\) classified in [14, Theorem] show that, if \(X\) satisfies \(\mathcal{P}\), then \(|X|\leq 18\), which contradicts Hypothesis 3.1. Assume \(\omega(G)=\{1,2,5\}\). The groups having spectrum \(\{1,2,5\}\) are classified in [14]. As above, since \(G\) satisfies \(\mathcal{P}\), we deduce from a case-by-case analysis in the groups appearing in [14] that \(|G|\leq 80\), which contradicts Hypothesis 3.1. Assume \(\omega(G)=\{1,2,3,4\}\). The groups having spectrum \(\{1,2,3,4\}\) are classified in [13]. As above, since \(G\) satisfies \(\mathcal{P}\), we deduce from a case-by-case analysis in the groups appearing in [13, Theorem] that \(|G|\leq 96\), which contradicts Hypothesis 3.1. Assume \(\omega(G)=\{1,2,4,5\}\). The groups having spectrum \(\{1,2,4,5\}\) are classified in [12]. This case is sligthly more involved and hence we do give more details. We have three cases to consider: 1. \(G=T\rtimes D\) where \(T\) is a non-trivial elementary abelian normal \(2\)-subgroup and \(D\) is a non-abelian group of order \(10\), 2. \(G=F\rtimes T\) where \(F\) is an elementary abelian normal \(5\)-subgroup and \(T\) is isomorphic to a subgroup of a quaternion group of order \(8\), 3. \(G\) contains a normal \(2\)-subgroup \(T\) which is nilpotent of class at most \(6\) such that \(G/T\) is a \(5\)-group. Suppose that (1) holds. Clearly, \(D\) is the dihedral group of order \(10\) and \(T\) is a module for \(D\) over the field \(\mathbb{F}_{2}\) of cardinality \(2\). The dihedral group \(D\) has two irreducible modules over \(\mathbb{F}_{2}\) up to equivalence: the trivial module and a \(4\)-dimensional module \(W\). Since \(G\) has no elements of order \(10\), we deduce \(V\cong W^{\ell}\), for some \(\ell\geq 1\). We have verified with a computer computation that \(W^{3}\rtimes D\) does not satisfy \(\mathcal{P}\) and hence \(G\cong W^{\ell}\rtimes D\) with \(\ell\leq 2\). We deduce that \(|G|=|V\Gamma|\in\{10\cdot 16,10\cdot 16^{2}\}=\{160,2\,560\}\). From Hypothesis 3.1, we have \(|V\Gamma|>1\,280\) and hence \(G\cong W^{2}\rtimes D\). We have constructed all connected cubic Cayley graphs over \(W^{2}\rtimes D\) and we have found only one (up to isomorphism), therefore we obtain the example in Table 1. Suppose that (2) holds. Since \(G\) satisfies \(\mathcal{P}\), while the quaternion group of order \(8\) does not, we deduce that \(T\) is cyclic of order \(4\). Thus \(G=F\rtimes\langle x\rangle\), for some \(x\) having order \(4\). As \(G\) satisfies \(\mathcal{P}\), this means that \(G=\langle x,y\rangle\), for some involution \(y\). Clearly, \(y=fx^{2}\) for some \(f\in F\). As \(G=\langle x,y\rangle=\langle x,fx^{2}\rangle=\langle x,f\rangle\), we have \(F=\langle f,fx^{x},fx^{x^{3}}\rangle\). Since \(y=fx^{2}\) has order \(2\) and \(x\) has order \(4\), we deduce \[1=y^{2}=fx^{2}fx^{2}=ff^{x^{2}},\] that is, \(f^{x^{2}}=f^{-1}\). Now, \(F=\langle f,f^{x},f^{x^{2}},f^{x^{3}}\rangle=\langle f,f^{x},f^{-1},(f^{x})^{- 1}\rangle=\langle f,f^{x}\rangle\). Thus \(|F|\leq 25\) and hence \(|G|\leq 100\), which contradicts Hypothesis 3.1. Suppose that (3) holds. Since \(G\) satisfies \(\mathcal{P}\), we deduce that \(G/T\) is cyclic of order \(5\). Thus \(G=T\rtimes\langle x\rangle\), for some \(x\) having order \(5\). This means that \(G=\langle x,y\rangle\), for some involution \(y\). Clearly, \(y\in T\). From Hypothesis 3.1, we have \(|G|=|V\Gamma|>1\,280\). Let \(N\) be a minimal normal subgroup of \(G\). We have \(N\leq T\) and \(N\) is an irreducible \(\mathbb{F}_{2}\langle x\rangle\)-module. The cyclic group of order \(5\) has two irreducible modules over \(\mathbb{F}_{2}\) up to equivalence: the trivial module and a \(4\)-dimensional module. Since \(G\) has no elements of order \(10\), \(x\) does not centralize \(N\) and hence \(N\) is the irreducible \(4\)-dimensional module for the cyclic group of order \(5\). In particular, \(|N|=2^{4}\). Consider \(\bar{G}:=G/N\). Now, \[\{1,2,5\}\subseteq\omega(\bar{G})\subseteq\omega(G)=\{1,2,4,5\}.\] Assume \(\omega(\bar{G})=\{1,2,5\}\). From the discussion above (regarding the finite groups having spectrum \(\{1,2,5\}\) and satisfying \(\mathcal{P}\)), we have \(|\bar{G}|\leq 80\) and hence \(|G|=|G:N||N|\leq 80\cdot 16=1\,280\), which is a contradiction. Therefore, \(\omega(\bar{G})=\{1,2,4,5\}\). Since \((\Gamma,G)\) was chosen minimal in Hypothesis 3.1, we have \(|\bar{G}|\leq 1\,280\). Therefore \((\Gamma/N,\bar{G})\) appears in Table 1. An inspection on the groups appearing in this table shows that there is only one group having spectrum \(\{1,2,4,5\}\) and is the group of order \(1\,280\). Thus we know precisely \(\bar{G}\). Now, the group \(G\) is an extension of \(\bar{G}\) by \(N\) and hence it can be computed with the cohomology package in the computer algebra system magma. We have computed all the extensions \(E\) of \(\bar{G}\) via \(N\) and we have verified that none of the extensions \(E\) has the property that \(\omega(E)=\{1,2,4,5\}\) and with \(E\) satisfying \(\mathcal{P}\). Assume \(\omega(G)=\{1,2,3,5\}\). The groups having spectrum \(\{1,2,3,5\}\) are classified in [10]. We deduce from [10] that \(G\cong A_{5}\), which contradicts Hypothesis 3.1. Assume \(\omega(G)=\{1,2,3,4,5\}\). The groups having spectrum \(\{1,2,3,4,5\}\) are classified in [1]. We deduce from [1, Theorem] that either \(G\cong A_{6}\) or \(G\cong V^{\ell}\rtimes A_{5}\) where \(V\) is a \(4\)-dimensional natural module over the finite field of size \(2\) for \(A_{5}\cong\operatorname{SL}_{2}(4)\) and \(\ell\geq 1\). The group \(V^{2}\rtimes A_{5}\) does not satisfy \(\mathcal{P}\) (this can be verified with a computer computation). Therefore, either \(G\cong A_{6}\) or \(G\cong V\rtimes A_{5}\). Thus \(|G|=|V\Gamma|\leq 960\), which contradicts Hypothesis 3.1. ### \(G_{\alpha}\neq 1\) and \(N\) is transitive on \(V\Gamma\) By Hypothesis 3.1, \((\Gamma,G)\) is a minimal counterexample. This minimality and the fact that \(N\) is transitive on \(V\Gamma\) imply that \(G=N\). As \(N\) is a minimal normal subgroup of \(G\), \(G\) is simple. Thus \(G\in\{\operatorname{Alt}(5),\operatorname{Alt}(6),\operatorname{PSp}(4,3)\}\). A computer computation in each of these cases shows that, if \(G\leq\operatorname{Aut}(\Gamma)\) has no semiregular elements of order at least \(6\), then \(|V\Gamma|\in\{10,20,30,60,90,180,360\}\), which contradicts Hypothesis 3.1. ### \(G_{\alpha}\neq 1\) and \(N\) has two orbits on \(V\Gamma\) Suppose \(N\) is abelian. By [13, Lemma 1.15], either \(\Gamma\) is complete bipartite, or \(\Gamma\) is a bi-Cayley graph over \(N\) and the minimal number of generators of \(N\) is at most \(4\). (Here, it is not really relevent to introduce the definition of bi-Cayley graph, however, what is really relevant is the fact that \(N\) is generated by at most \(4\) elements.) Recalling that \(N\) is a \(\{2,3,5\}\)-group, it follows that \(|V\Gamma|=2|N|\leq 2\cdot 5^{4}=1\,250\), and the equality is realized for \(N=C_{5}^{4}\). In particular, this contradicts Hypothesis 3.1. Suppose \(N\) is not abelian. By Lemma 3.2, 3 divides the order of \(N_{\alpha}\). A fortiori, 3 divides the order of \(G_{\alpha}\), hence \(G\) acts arc-transitively on \(\Gamma\). We can extract information on the local action of \(G\) by consulting the amalgams in [10, Section 4]. In particular, with a direct inspection (on a case-by-case basis) on these amalgams, it can be verified that, for any edge \(\{\alpha,\beta\}\) of \(\Gamma\), \(G\) contains an element \(y\) that swaps \(\alpha\) and \(\beta\) and its order is either \(2\) or \(4\). As \(\alpha\) and \(\beta\) belong to distinct \(N\)-orbits, \(y\) maps \(\alpha^{N}\) to \(\beta^{N}\). Moreover, as \(N\) has two orbits on \(V\Gamma\), the subgroup \(N\langle y\rangle\) is vertex-transitive on \(\Gamma\). Therefore, by minimality of \(G\), we have \(G=N\langle y\rangle\). Assume \(o(y)=2\). Thus \(|G:N|=2\). As \(N=S^{l}\) is a minimal normal subgroup of \(G\), \(l\in\{1,2\}\). If \(l=1\), then \(G\) is an almost simple group whose socle is either \(\operatorname{Alt}(5)\), \(\operatorname{Alt}(6)\) or \(\operatorname{PSp}(4,3)\). A computer computation shows that \((\Gamma,G)\) satisfies Theorem 1.2, a contradiction. If \(l=2\), then \(\langle y\rangle\) permutes transitively the two simple direct factors of \(N\). Let \(s\in N\) be a \(5\)-element in a simple direct factor of \(N\), and notice that \(t:=s^{y}\) is a \(5\)-element in the other simple direct factor of \(N\). Thus \([s,t]=1\). We claim that \(ys\) is a semiregular element of order \(10\). We get \[(ys)^{2}=ysys=ts\in N,\] \[(ys)^{5}=ysysysysys=ys(ts)^{2}\in yN.\] We have that \((ys)^{2}\) is a \(5\)-element in \(N\), thus semiregular, and that \((ys)^{5}\) has order \(2\) and, being an element of \(yN=Ny\), it has no fixed points, hence it is semiregular. Therefore \(ys\) is a semiregular element of order \(10\), contradicting Hypothesis 3.1. Assume \(o(y)=4\). As \(|G:N|=4\) and \(N\) is a minimal normal subgroup of \(G\), \(l\in\{1,2,4\}\). Observe that a Sylow \(3\)-subgroup of \(G_{\alpha}\) has order \(3\), because \(\Gamma\) is cubic and \(G\) is arc-transitive. Let \(x\in G_{\alpha}\) be an element of order \(3\). As \(|G:N|=4\), we have \(x\in N\cap G_{\alpha}=N_{\alpha}\leq S^{l}\). In particular, we may write \(x=(s_{1},\ldots,s_{l})\), with \(s_{i}\in S\). Let \(\kappa\) be the number of coordinates of \(x\) different from \(1\), we call \(\kappa\) the type of \(x\). Since \(\langle x\rangle\) is a Sylow \(3\)-subgroup of \(G_{\alpha}\), from Sylow's theorem, we deduce that each element of order \(3\) in \(G\) fixing some vertex of \(\Gamma\) has type \(\kappa\). Let \(s\in S\) be an element of order \(3\) and let \(t\in S\) be an element of order \(5\). Suppose \(l=4\). If \(\kappa\neq 1\), then \(g=(s,t,1,1)\) has order \(15\) and is semiregular because \(g^{5}=(s^{5},1,1,1)\) has order \(3\) but it is not of type \(\kappa\). Similarly, if \(l=4\) and \(k=1\), then \(g=(s,s,t,1)\) has order \(15\) and is semiregular. Analogously, when \(l=2\), if \(\kappa\neq 1\), then \(g=(s,t)\) has order \(15\) and is semiregular. When \(l=2\), \(\kappa=1\) and \(S=\operatorname{PSp}(4,3)\), the group \(S\) contains an element \(r\) having order \(9\) and hence \(g=(r,r)\) is a semiregular element having order \(9\). Summing up, from these reductions, we may suppose that either \(l=1\), or \(l=2\) and \(S\in\{\operatorname{Alt}(5),\operatorname{Alt}(6)\}\). These cases can be dealt with a computer computation: indeed, the invaluable help of a computer shows that no counterexample to Theorem 1.2 arises. ### \(G_{\alpha}\neq 1\) and \(\Gamma/N\) is a cycle of length \(r\geq 3\) The full automorphism group of \(\Gamma/N\) is the dihedral group of order \(2r\). Let \(K\) be the kernel of the action of \(G\) on the \(N\)-orbits. The quotient \(G/K\) acts faithfully on \(\Gamma/N\), that is, it is a transitive subgroup of the dihedral group of order \(2r\). We claim that \[G/K\text{ is regular in its action on the vertices of }\Gamma/N. \tag{3.1}\] Assume \(G/K\) acts on the vertices of \(\Gamma/N\) transitively but not regularly. In particular, \(G/K\) is isomorphic to the dihedral group of order \(2r\). Thus \(G\) has an index \(2\) subgroup \(M\) such that \(M\) is vertex-transitive and \(M/K\) is isomorphic to the cyclic group of order \(r\). By minimality of \(G\), we have \(G=M\), which goes against the choice of \(M\). Hence \(G/K\) is regular. In particular, either \(G/K\) is isomorphic to the cyclic group of order \(r\), or \(r\) is even and \(G\) is isomorphic to the dihedral group of order \(r\). Later in this proof we resolve this ambiguity and we prove that \(r\) is even and \(G/K\) is dihedral of order \(r\), see (3.5). As \(G/K\) acts regularly on the vertices of \(\Gamma/N\), we have \[1_{G/K}=\left(\frac{G}{K}\right)_{\alpha^{N}}=\frac{G_{\alpha}K}{K}.\] Therefore \[K_{\alpha}=K\cap G_{\alpha}=G_{\alpha}. \tag{3.2}\] Assume \(G\) is arc-transitive. Let \(\beta\) be a neighbour of \(\alpha\) and observe that \(\alpha^{N}\neq\beta^{N}\). Since \(\Gamma\) is connected, we have \[G=\langle G_{\alpha},G_{\{\alpha,\beta\}}\rangle=\langle K_{\alpha},G_{\{ \alpha,\beta\}}\rangle\leq\langle K,G_{\{\alpha,\beta\}}\rangle=KG_{\{\alpha, \beta\}},\] and hence \(G=KG_{\{\alpha,\beta\}}\). Recalling that \(K\) fixes all the \(N\)-orbits, \[|G:K|=|KG_{\{\alpha,\beta\}}:K|=|G_{\{\alpha,\beta\}}:K_{\{\alpha,\beta\}}|=| G_{\{\alpha,\beta\}}:G_{\alpha\beta}|=2.\] Thus \(G/K\cong C_{2}\) and \(r=2\), which is a contradiction. Therefore \[G\] is not arc-transitive. This implies that \(G_{\alpha}\) does not act transitively on the neighbourhood of \(\alpha\), hence \(G_{\alpha}\) is a \(2\)-group. By (3.2), we deduce \(G_{\alpha}=K_{\alpha}\) is a \(2\)-group. Actually, Lemma 2.3 shows that \[G_{\alpha}=K_{\alpha} \tag{3.3}\] is an elementary abelian \(2\)-group. If \(N\) is an elementary abelian \(2\)-group, then, by Lemma 2.14, \(\Gamma\) is either a circular ladder, or a Mobius ladder, or a split Praeger-Xu graph \(\mathrm{sC}(r/2,s)\). Now, in the former cases, the proof follows from Lemma 2.6, while, in the latter one, we conclude by Lemma 2.13. In particular, for the rest of the proof we may suppose that \(N\) is not an elementary abelian \(2\)-group. For any minimal normal subgroup \(M\) of \(G\), \(M_{\alpha}=M\cap G_{\alpha}\) is also a \(2\)-group. Thus, in view of Lemma 3.2, \(M\) is an elementary abelian \(p\)-group, for some \(p\in\{2,3,5\}\). This is true, in particular, for \(N\). Let \(M\) be a minimal normal subgroup distinct from \(N\). Since \([N,M]=1\), Lemma 2.2 gives a contradiction unless \(N\) and \(M\) are both \(p\)-groups for the same prime \(p\). Thus, \[\text{the socle of $G$ is an elementary abelian $p$-group, for some $p\in\{3,5\}$.} \tag{3.4}\] Before going any further, we need some extra information on the local action of \(G\) on \(\Gamma\). Since \(G_{\alpha}\) is a non-identity \(2\)-group, there exists a unique vertex \(\alpha^{\prime}\in V\Gamma\) adjacent to \(\alpha\) that is fixed by the action of \(G_{\alpha}\). It follows that \(\{\alpha,\alpha^{\prime}\}\) is a block of imprimitivity for the action of \(G\) on the vertices. Hence, \[G_{\alpha}\leq G_{\{\alpha,\alpha^{\prime}\}}\quad\text{and}\quad|G_{\{\alpha, \alpha^{\prime}\}}:G_{\alpha}|=2.\] We obtain that, for any \(\beta\in V\Gamma\), neighbour of \(\alpha\) distinct from \(\alpha^{\prime}\), \[|G_{\{\alpha,\alpha^{\prime}\}}:G_{\alpha\beta}|=4\quad\text{and}\quad|G_{\{ \alpha,\beta\}}:G_{\alpha\beta}|=2.\] Let \(\{\alpha^{\prime},\beta,\gamma\}\) be the neighbourhood of \(\alpha\). Assume \(G/K\) is cyclic of order \(r\). As \(\Gamma/N\) is a cycle of length \(r\), this means that \(G/K\) acts transitively on the vertices and on the edges of \(\Gamma/N\). Now, \(\beta\) and \(\gamma\) are in the same \(K\)-orbit because \(K_{\alpha}=G_{\alpha}\) and \(G_{\alpha}\) acts transitively on \(\{\beta,\gamma\}\). In particular, each element in \(\alpha^{N}\) has two neighbours in \(\beta^{N}\). As \(G/K\) is transitive on edges, we reach a contradiction because each element in \(\alpha^{N}\) would have two neighbours in \({\alpha^{\prime}}^{N}\), contradicting the fact that \(\alpha\) has valency \(3\). Thus \[r \tag{3.5}\] is even and \[G/K\] is dihedral of order \[r\] Recall that \(N\) is an elementary abelian \(p\)-group with \(p\in\{3,5\}\). Thus \(N\) is semiregular. We consider \({\bf C}_{K}(N)\). Since \(N\leq{\bf C}_{K}(N)\) and since \(K=K_{\alpha}N\), we deduce \({\bf C}_{K}(N)=N\times Q\), for some subgroup \(Q\) of \(K_{\alpha}\). As \(K_{\alpha}\) is a \(2\)-group, so is \(Q\). Therefore, \(Q\) is characteristic in \(N\times Q={\bf C}_{K}(N)\) and hence \(Q\unlhd G\). Since \(G_{\alpha}\) is a core-free subgroup of \(G\), we get \(Q=1\) and \({\bf C}_{K}(N)=N\). Since \(N\) is a minimal normal subgroup of \(G\), \(G\) acts irreducibly by conjugation on it, that is, \(N\) is an irreducible \({\mathbb{F}}_{p}G\)-module. As \(K\unlhd G\), by Clifford's Theorem, \(N\) is a completely reducible \({\mathbb{F}}_{p}K\)-module. As \(K=NG_{\alpha}\) and \(N\) is abelian, \(N\) is a completely reducible \({\mathbb{F}}_{p}G_{\alpha}\)-module. As \(G_{\alpha}\) is abelian, by Schur's Lemma, \(G_{\alpha}\) induces on each irreducible \({\mathbb{F}}_{p}G_{\alpha}\)-submodule a cyclic group action. However, since \(G_{\alpha}\) has exponent \(2\), we deduce that each irreducible \({\mathbb{F}}_{p}G_{\alpha}\)-submodule has dimension \(1\) and \(G_{\alpha}\) induces on each irreducible \({\mathbb{F}}_{p}G_{\alpha}\)-submodule the scalars \(\pm 1\). Therefore, \(G_{\alpha}\) acts on \(N\) by conjugation as a group of diagonal matrices having eigenvalues in \(\{\pm 1\}\). In other words, there exists a basis \((n_{1},\ldots,n_{e})\) of \(N\) as a vector space over \({\mathbb{F}}_{p}\) such that, \[\text{for each $g\in G_{\alpha}$ and for each $n_{i}$, \text{ we have }$n_{i}^{g}\in\{n_{i},n_{i}^{-1}\}$.} \tag{3.6}\] Furthermore, the action of \(G\) by conjugation on \(N\) preserves the direct product decomposition \(N=\langle n_{1}\rangle\times\cdots\times\langle n_{e}\rangle\). We claim that \[{\bf C}_{G_{\{\alpha,\beta\}}}(N) =1,\] \[{\bf C}_{G_{\{\alpha,\alpha^{\prime}\}}}(N) =1. \tag{3.7}\] In other words, \(G_{\{\alpha,\beta\}}\) and \(G_{\{\alpha,\alpha^{\prime}\}}\) both act faithfully by conjugation on \(N\). Let \(\gamma\in\{\alpha^{\prime},\beta\}\) and suppose, arguing by contradiction, that \({\bf C}_{G_{\{\alpha,\gamma\}}}(N)\neq 1\). Since \({\bf C}_{K}(N)=1\) and \(|G_{\{\alpha,\gamma\}}:K\cap G_{\{\alpha,\gamma\}}|=2\), we deduce \({\bf C}_{G_{\{\alpha,\gamma\}}}(N)=\langle x\rangle\), where \(x\) is an involution. Since \(x\notin K\), \(x\) acts semiregularly on \(\Gamma/N\) and hence \(x\) acts semiregularly on \(\Gamma\). From this and from the fact that \(x\) centralizes \(N\), we deduce that \(G\) contains semiregular elements of order \(2p\geq 6\), which contradicts Hypothesis 3.1. Thus (3.7) is proven. Observe that (3.7) implies that an element of \(G_{\{\alpha,\alpha^{\prime}\}}\) or of \(G_{\{\alpha,\beta\}}\) is the identity if and only it its action on \(N\) by conjugation is trivial. We show that \[G_{\{\alpha,\beta\}}\setminus G_{\alpha\beta}\text{ contains an involution.} \tag{3.8}\] Let \(H\) be the permutation group induced by \(G_{\{\alpha,\alpha^{\prime}\}}\) in its action on the four right cosets of \(G_{\alpha\beta}\) in \(G_{\{\alpha,\alpha^{\prime}\}}\). Since \(H\) is a \(2\)-group, \(H\) is isomorphic to either \(C_{4}\), or \(C_{2}\times C_{2}\), or to the dihedral group of order \(8\). In the first two cases, \(G_{\alpha\beta}\) is a normal subgroup of both \(G_{\{\alpha,\alpha^{\prime}\}}\) and \(G_{\{\alpha,\beta\}}\). As \(G_{\alpha\beta}\) is core-free in \(G\) and \[G=\langle G_{\{\alpha,\alpha^{\prime}\}},G_{\{\alpha,\beta\}}\rangle,\] we have that \(G_{\alpha\beta}=1\). In particular, \(G_{\{\alpha,\beta\}}\) is cyclic of order \(2\), hence it contains an involution and (3.8) follows in this case. In the latter case, using the notation and the terminology in [10], we have that the triple \((G_{\{\alpha,\alpha^{\prime}\}},G_{\alpha\beta},G_{\{\alpha\,\beta\}})\) is a locally dihedral faithful group amalgam of type \((4,2)\) and \(G\) is one of its realizations. Indeed, from the classification in [10], we see that either \(G_{\{\alpha,\alpha^{\prime}\}}\setminus G_{\alpha}\) or \(G_{\{\alpha,\beta\}}\setminus G_{\alpha\beta}\) contains an involution. If \(G_{\{\alpha,\beta\}}\setminus G_{\alpha\beta}\) contains an involution, then (3.8) holds true also in this case. Therefore we suppose \(\tau_{1}\in G_{\{\alpha,\alpha^{\prime}\}}\setminus G_{\alpha}\) is an involution. We investigate the action by conjugation of \(\tau_{1}\) on \(N\). By (3.1), \(\tau_{1}\) is a semiregular automorphism of \(\Gamma/K\), because \(\tau_{1}\notin K\). Therefore, \(\tau_{1}\) is a semiregular automorphism of \(\Gamma\). Since no semiregular involution commutes with a non-identity element of \(N\), \(\tau_{1}\) acts by conjugation on \(N\) without fixed points, that is, for any \(n\in N\), \(n^{\tau_{1}}=n^{-1}\). It follows from (3.6) that \(\tau_{1}\) commutes with \(G_{\alpha}\) and hence \(G_{\{\alpha,\alpha^{\prime}\}}=\langle G_{\alpha},\tau_{1}\rangle\) is an elementary abelian \(2\)-group. Now, as \(G_{\alpha\beta}\) is normal in both \(G_{\{\alpha,\alpha^{\prime}\}}\) and \(G_{\{\alpha,\beta\}}\), we can conclude, as before, that \(G_{\{\alpha,\beta\}}\) is cyclic of order \(2\), hence it contains an involution. Therefore, in any case, (3.8) holds true. Let \(e\) be the positive integer such that \(N=C_{p}^{e}\). We aim to show that \[e\in\{1,2\}. \tag{3.9}\] Let \(\tau_{2}\in G_{\{\alpha,\beta\}}\setminus G_{\alpha\beta}\) be an involution: the existence of \(\tau_{2}\) is guaranteed by (3.8). Now, we look at the action by conjugation of \(\tau_{2}\) on \(N\). Observe \(\tau_{2}\notin K\) and hence \(\tau_{2}\) is a semiregular automorphism of \(\Gamma\). Therefore, arguing as in the previous paragraph (with the involution \(\tau_{1}\) replaced by \(\tau_{2}\)), we deduce that \(n^{\tau_{2}}=n^{-1}\) for every \(n\in N\). Let \(L:=\langle\tau_{2}^{g}\ |\ g\in G\rangle\). Since \(G/K\) is a dihedral group and \(\tau_{2}\) is an involution, we deduce that \(|G/K:LK/K|\leq 2\), that is, \(|G:LK|\leq 2\). Observe now that, for any \(n\in N\), \(n^{\tau_{2}^{g}}=n^{-1}\). Therefore, the group induced by the action by conjugation of \(L\) on \(N\) has order \(2\). This and (3.6) shows that the subgroup \(LK\) of \(G\) preserves the direct sum decomposition \(N=\langle n_{1}\rangle\times\cdots\times\langle n_{e}\rangle\). However, since \(G\) acts irreducibly on \(N\) and since \(|G:LK|\leq 2\), we finally obtain \(e\leq 2\), as claimed in (3.9). Observe that from this it follows that \(|N|=p^{e}\in\{3,9,5,25\}\). We are now ready to conclude this case. Observe that \(G_{\alpha}\) contains an element \(x\) with \(n^{x}=n^{-1}\) for every \(n\in N\). This is immediate from (3.6) when \(e=1\), or when \(e=2\) and \(|G_{\alpha}|=4\). When \(e=2\) and \(|G_{\alpha}|<4\), we have \(|G_{\alpha}|=2\) and hence the non-identity element of \(G_{\alpha}\) acts by conjugation on \(N\) inverting each of its elements. Now, \(x\) and \(\tau_{2}\) both induce the same action by conjugation on \(N\), contradicting (3.7). This final contradiction has concluded the analysis of this case. ### \(G_{\alpha}\neq 1\) and \(\Gamma/N\) is a cubic graph Under this assumption, any two distinct neighbours of \(\alpha\) are in distinct \(N\)-orbits, thus \(N_{\alpha}=1\). In particular, Lemma 3.2 gives that \(N\) is elementary abelian. Set \(\bar{\Gamma}:=\Gamma/N\), \(\bar{G}:=G/N\) and \(\bar{\alpha}:=\alpha^{N}\). Since \(|V\bar{\Gamma}|<|V\Gamma|\), by Hypothesis 3.1 the pair \((\bar{\Gamma},\bar{G})\) is not a counterexample to Theorem 1.2 and hence \((\bar{\Gamma},\bar{G})\) is one of the pairs appearing in Table 1. Moreover, since \(G_{\alpha}\neq 1\), we have the additional information that a vertex-stabilizer \(\bar{G}_{\bar{\alpha}}\cong G_{\alpha}\) is not the identity. We have resolved this case with a computer computation. Since this computer computation is quite involved, we give some details. Let \((\bar{\Gamma},\bar{G})\) be any pair in Table 1, except for the last row. For each prime \(p\in\{2,3,5\}\), we have constructed all the irreducible modules of \(\bar{G}\) over the field \(\mathbb{F}_{p}\) having \(p\) elements. Let \(V\) be one of these irreducible modules. This module \(V\) corresponds to the putative minimal normal subgroup \(N\) of \(G\). We have constructed all the distinct extensions of \(\bar{G}\) via \(V\). Let \(E\) be one of these extensions and let \(\pi:E\to\bar{G}\) be the natural projection with \(\operatorname{Ker}(\pi)=V\). This extension \(E\) corresponds to the putative abstract group \(G\). For each such extension, we have computed all the subgroups \(H\) of \(E\) with the property that \(\pi_{|H}\) is an isomorphism between \(H\) and \(\bar{G}_{\bar{\alpha}}\). This subgroup \(H\) is our putative vertex-stabilizer \(G_{\alpha}\). This computation can be performed in \(\pi^{-1}(\bar{G}_{\bar{\alpha}})\). Next, we have constructed the permutation representation \(E_{p}\) of \(E\) acting on the right cosets of \(H\) in \(E\). This permutation group \(E_{p}\) is our putative permutation group \(G\). If \(E_{p}\) has semiregular elements of order at least \(6\), then we have discarded \(E\) from further consideration. For each permutation group \(E_{p}\) as above, we have verified, by considering the orbital graphs of \(E_{p}\), whether \(E_{p}\) acts on a connected cubic graph. This is our putative graph \(\Gamma\). This step is by far the most expensive step in the computation. This whole process had to be applied repeatedly starting with the pairs arising from the census of connected cubic graphs having at most \(1\,280\) vertices. For instance, the graphs having \(65\,610\) vertices were found by applying this procedure starting with the graph having \(810\) vertices and its transitive group of automorphisms having \(1\,620\) elements: here the elementary abelian cover \(N\) has cardinality \(81=3^{4}\). Incidentally, we have found only one pair up to isomorphism. Next, by applying this procedure to this pair, we found no new examples. We give some further details of the computation when we applied the procedure with \(\bar{\Gamma}\) having \(1\,250=2\cdot 5^{4}\) vertices and with its corresponding vertex-transitive subgroup \(\tilde{G}\) having order \(2\,500=2^{2}\cdot 5^{4}\). When we applied this procedure, we have obtained graphs having \(2\cdot 5^{5}=6\,250\) vertices and admitting a group of automorphisms having \(2^{2}\cdot 5^{5}=12\,500\) elements. Actually, in this step, we have found only one pair up to isomorphism. We have repeated this procedure two more times, obtaining graphs having \(2\cdot 5^{6}=31\,250\) and \(2\cdot 5^{7}=156\,250\) vertices. We were not able to push this computation further. Therefore to complete the proof of Theorem 1.2, we need to show that any new pair \((\Gamma,G)\) has the property that \(|V\Gamma|=2\cdot 5^{\ell}\) and \(|G|=4\cdot 5^{\ell}\), with \(\ell\leq 34\). From the discussion above we may suppose that \(|V\bar{\Gamma}|=2\cdot 5^{\ell}\) and \(|\bar{G}|=4\cdot 5^{\ell}\) with \(\ell\leq 34\). Moreover, \(\bar{\Gamma}\) is a regular cover of the graph, say \(\Delta\), having \(1\,250\) vertices and \(\bar{G}\) is a quotient of the group of automorphisms of \(\Delta\), say \(H\), with \(|H|=2\,500\). In particular, a Sylow \(2\)-subgroup of \(\tilde{G}\) is cyclic and \(\tilde{G}\) has a normal Sylow \(5\)-subgroup. (This information can be extracted from the analogous properties of \(H\).) Let \(\bar{P}\) be a Sylow \(5\)-subgroup of \(\bar{G}\) and observe that every non-identity element of \(\bar{P}\) has order \(5\) because every semiregular element of \(\tilde{G}\) has order at most \(6\). Let \(P\) be the subgroup of \(G\) with \(G/N=\bar{P}\). Assume \(N\) is not an elementary abelian \(5\)-group. Then \(N\) is an elementary abelian \(p\)-group for some \(p\in\{2,3\}\). Let \(Q\) be a Sylow \(5\)-subgroup of \(P\) and observe that \(P=N\rtimes Q\). The elements in \(P\) are semiregular and hence each element of \(P\) has order at most \(6\). This implies that the elements of \(P\) have order \(1\), \(5\) or \(p\). This implies that the action, by conjugation, of \(Q\) on \(N\) is fixed-point-free and \(P\) is a Frobenius group with Frobenius kernel \(N\) and Frobenius complement \(Q\). The structure theorem of Frobenius complements gives that \(Q\) is cyclic and hence \(|Q|=5\), which is a contradiction. This contradiction has shown that \(N\) is an elementary abelian \(5\)-group and hence \(P\) is a Sylow \(5\)-subgroup of \(G\). Moreover, \(G=P\rtimes\langle x\rangle\), where \(\langle x\rangle\) is a cyclic group of order \(4\). We have shown that \(|V\Gamma|=2\cdot 5^{\ell^{\prime}}\) and \(|G|=2^{2}\cdot 5^{\ell^{\prime}}\). Therefore, it remains to show that \(\ell^{\prime}\leq 34\). Since \(|G_{\alpha}|=2\), \(G_{\alpha}\) fixes a unique neighbour of \(\alpha\). Let us call \(\alpha^{\prime}\) this neighbour. Now, \(G_{\{\alpha,\alpha^{\prime}\}}\) has order \(4\) because \(\{\alpha,\alpha^{\prime}\}\) is a block of imprimitivity for the action of \(G\) on \(V\Gamma\). Therefore, by Sylow's theorem, we may suppose that \[G_{\{\alpha,\alpha^{\prime}\}}=\langle x\rangle.\] In particular, \(G_{\alpha}=\langle x^{2}\rangle\). Let \(\beta\) and \(\gamma\) be the neighbours of \(\alpha\) with \(\beta\neq\alpha^{\prime}\neq\gamma\). Clearly, \(|G_{\{\alpha,\beta\}}|=2\) and hence, by Sylow's theorem, \[G_{\{\alpha,\beta\}}=\langle(x^{2})^{y}\rangle,\] for some \(y\in P\). Since \(\Gamma\) is connected, we have \[G=\langle G_{\{\alpha,\alpha^{\prime}\}},G_{\{\alpha,\beta\}}\rangle=\langle x,(x^{2})^{y}\rangle=\langle x,y^{-1}y^{x^{2}}\rangle.\] As \(P\unlhd G\) and \(o(x)=4\), we deduce \[P=\langle y^{-1}y^{x^{2}},(y^{-1}y^{x^{2}})^{x},(y^{-1}y^{x^{2}})^{x^{2}},(y^{- 1}y^{x^{2}})^{x^{3}}\rangle.\] Now, \[(y^{-1}y^{x^{2}})^{x^{2}}=(y^{x^{2}})^{-1}y^{x^{4}}=(y^{x^{2}})^{-1}y=(y^{-1}y ^{x^{2}})^{-1}.\] Therefore, \(P=\langle y^{-1}y^{x^{2}},(y^{-1}y^{x^{2}})^{x}\rangle\) is a \(2\)-generated group of exponent \(5\). In view of the restricted Burnside problem (see [10] and [21]), the order of \(P\) is at most \(5^{34}\) and hence \(\ell^{\prime}\leq 34\).
2309.14333
Quantum-Enhanced Parameter Estimation Without Entanglement
Entanglement is generally considered necessary for achieving the Heisenberg limit in quantum metrology. We construct analogues of Dicke and GHZ states on a single $N+1$ dimensional qudit that achieve precision equivalent to symmetrically entangled states on $N$ qubits, showing that entanglement is not necessary for going beyond the standard quantum limit. We define a measure of non-classicality based on quantum Fisher information and estimate the achievable precision, suggesting a close relationship between non-classical states and metrological power of qudits. Our work offers an exponential reduction in the physical resources required for quantum-enhanced parameter estimation, making it accessible on any quantum system with a high-dimensional Hilbert space.
Pragati Gupta
2023-09-25T17:57:45Z
http://arxiv.org/abs/2309.14333v1
# Quantum-Enhanced Parameter Estimation Without Entanglement ###### Abstract Entanglement is generally considered necessary for achieving the Heisenberg limit in quantum metrology. We construct analogues of Dicke and GHZ states on a single \(N+1\) dimensional qudit that achieve precision equivalent to symmetrically entangled states on \(N\) qubits, showing that entanglement is not necessary for going beyond the standard quantum limit. We define a measure of non-classicality based on quantum Fisher information and estimate the achievable precision, suggesting a close relationship between non-classical states and metrological power of qudits. Our work offers an exponential reduction in the physical resources required for quantum-enhanced parameter estimation, making it accessible on any quantum system with a high-dimensional Hilbert space. The precise measurement of physical quantities, from electromagnetic fields, to temperature and pressure, is important, both for applications like detection of gravitational waves, and for fundamental aspects like phase sensitivity in interferometry [1]. The standard quantum limit (SQL), which arises from the discrete nature of quantum measurements, describes the limit of precision achievable in parameter estimation using a system of \(N\) independent quantum probes. In contrast to uncorrelated probes, correlated many-body systems such as squeezed states can improve measurement precision beyond the SQL by increasing the precision of a chosen quantum observable at the expense of the uncertainty of another conjugate observable [2; 3; 4]. Quantum sensing assisted by entangled probes, like the Greenberger-Horne-Zeilinger (GHZ) states, \(N00N\) states or Dicke states can achieve the Heisenberg limit [5; 6; 7; 8]. Quantum Fisher information (QFI), the quantum analogue of the classical Fisher information, plays a central role in quantum metrology, where its inverse determines the achievable precision, given by the quantum Cramer-Rao bound [7]. The value of QFI for a quantum system scales with the effective number of sub-systems, motivating its use as measure of macroscopicity, as well as a quantifier for the degree of separation between two components of a macroscopic Schrodinger cat state, using the so-called the relative QFI [9; 10]. Recently, QFI was established as a measure of non-classicality of a state in continuous variable systems, suggesting that metrological power of a system is related to the degree of macroscopicity [11]. Operational resource-theoretic definitions of metrological power, non-classicality and macroscopicity can also be obtained based on QFI, making it a powerful tool for experimental investigations [12]. Further, QFI universally captures resourcefulness of any quantum state, regardless of the specific unitary parameter encoding, hence, is important for general quantum resource theories [13]. While entangled states can maximize QFI, they are difficult to create and maintain, due to the high-degree of required control and their susceptibility to decoherence [14]. High-dimensional systems offer an alternate hardware-efficient way of using a multi-level structure, naturally available on most quantum systems, for storing and processing of information and reducing circuit complexity [15]. The high-dimensional Hilbert space available within a harmonic oscillator can also be utilized to encode quantum information on non-classical states and perform error-correction [16]. Motivated by the importance of qudits in above examples, and the fact that many quantum sensing technologies such as superconducting systems [17] and NV centres [18] host natural high-dimensional Hilbert spaces, we ask if qudits can serve as a resource for quantum metrology. We introduce "qudit-assisted" protocols for parameter estimation that attain an enhanced precision--scaling linearly with the dimension--without using entanglement. The quantum Fisher information (QFI) for a \(d\)-dimensional qudit is equivalent to that of a \(N=d-1\) multi-qubit system under exchange symmetry [19]. We define resource theoretic measures of metrological gain and non-classicality for a qudit, relating them to the gain in precision and analyze the effect of decoherence as the system size increases. We construct analogues of Dicke states and GHZ state on a qudit--respectively the orthonormal basis states and the superposition of the lowest and highest eigenvalue state--and analytically show that metrological protocols with these states attain the Heisenberg limit. Our work suggests that an exponential reduction in the state space dimension--from \(2^{N}\) to \(N+1\)--is achievable in metrology using qudits, which could make quantum-enhanced parameter estimation accessible with independent probes. This paper is organized as follows. In Sec. I, we discuss quantum Fisher information of a qudit, its equivalence to a multi-qubit state with permutation symmetry, and define measures for non-classicality and metrological power. In Sec. II, we describe metrological protocols and analyze the achieved precision with Dicke-like states (II.1) and GHZ-like states (II.2) on a qudit. We analyze the effects of system size on decoherence rates in Sec. III and finally, give concluding remarks in Sec. IV. Theoretical measures for a qudit ### Quantum Fisher information Quantum Fisher information (QFI) is the quantum analogue of classical Fisher information and used to quantify the rate of change of a state's density matrix \(\rho\) with respect to an unknown parameter \(\theta\). Quantum Fisher information, \(F_{\mathrm{Q}}\), is defined in terms of the symmetric logarithmic derivative (SLD) \(\hat{L}_{\theta}\), which is a Hermitian operator given by \[\partial_{\theta}\rho:=\frac{1}{2}\{\rho,\hat{L}_{\theta}\}, \tag{1}\] where \(\partial_{\theta}\equiv\nicefrac{{\partial}}{{\partial\theta}}\) and \(\{.,.\}\) denotes the anticommutator. In terms of the SLD operator, \(F_{\mathrm{Q}}\) is given by \[F_{\mathrm{Q}}:=\mathrm{Tr}\left(\rho\hat{L}_{\theta}^{2}\right)=\mathrm{Tr} \left[(\partial_{\theta}\rho)\hat{L}_{\theta}\right], \tag{2}\] which defines QFI without explicit diagonalization of the density matrix. We calculate the QFI of a \(d\)-level system using the generalized Bloch sphere representation [19]. We use the generalized Gell-Mann matrices \(\mathbf{E}=\{E_{i}\}_{i=1}^{d^{2}-1}\) that generate the Lie algebra corresponding to the special unitary group \(\mathrm{SU}(d)\)[20] and the Bloch vector \(\mathbf{\omega}\in\mathbb{R}^{d^{2}-1}\) for denoting the density matrix as \[\rho=\frac{1}{d}\mathbb{I}_{d}+\frac{1}{2}\mathbf{\omega}\cdot\mathbf{E}, \tag{3}\] where \(\mathbb{I}_{d}\) is a \(d\)-dimensional identity matrix. For pure states, \(\rho=\rho^{2}\), and we can write \[\partial_{\theta}\rho=\partial_{\theta}\rho^{2}=\rho\left(\partial_{\theta} \rho\right)+\left(\partial_{\theta}\rho\right)\rho. \tag{4}\] Comparing the above equation with (1), we get \[\hat{L}_{\theta}=2\partial_{\theta}\rho, \tag{5}\] for pure states. This can be substituted in (2) to get \[F_{\mathrm{Q}}=\mathrm{Tr}\left[(\partial_{\theta}\rho)2\partial_{\theta} \rho\right]=|\partial_{\theta}\mathbf{\omega}|^{2}, \tag{6}\] where the second equality is obtained from (3) and the relation \(\mathrm{Tr}\left[(\mathbf{a}\cdot\mathbf{E})(\mathbf{b}\cdot\mathbf{E})\right]=2\mathbf{a}\cdot \mathbf{b}\). Equation 6 holds for a two-level system, i.e. \(d=2\), where the density matrix is expressed in terms of Pauli matrices \(\mathbf{\sigma}=\{\hat{\sigma}_{x},\hat{\sigma}_{y},\hat{\sigma}_{z}\}\), which generate the Lie algebra for \(\mathrm{SU}(2)\) group. QFI (Eq. 2) can also be expressed in terms of the eigenvalues \(\lambda_{i}\) and eigenvectors \(\psi_{i}\) of the density matrix the observable \(\hat{A}\) as \[F_{\mathrm{Q}}[\rho,\hat{A}]=2\sum_{i\neq j}\frac{(\lambda_{i}-\lambda_{j})^{ 2}}{\lambda_{i}+\lambda_{j}}|\langle\psi_{i}|\hat{A}|\psi_{j}\rangle|^{2}, \tag{7}\] where \(\lambda_{i}+\lambda_{j}>0\). For pure states, the density matrix \(\rho=\ket{\psi}\bra{\psi}\) and the eigenvalues are all zero except for the state \(\ket{\psi}\). The above equation can be written as \[F_{\mathrm{Q}}= 2\sum_{\lambda_{i}=0,\lambda_{j}\neq 0}\frac{(0-1)^{2}}{0+1}| \langle\psi_{i}|\hat{A}|\psi\rangle|^{2} \tag{8}\] \[+2\sum_{\lambda_{i}\neq 0,\lambda_{j}=0}\frac{(1-0)^{2}}{1+0}| \langle\psi|\hat{A}|\psi_{i}\rangle|^{2}\] \[= 4\sum_{\lambda_{i}=0}|\langle\psi_{i}|\hat{A}|\psi\rangle|^{2}=4 \sum_{\lambda_{i}=0}\langle\psi|\hat{A}|\psi_{i}\rangle\langle\psi_{i}|\hat{A} |\psi\rangle.\] Using \(\sum_{\lambda_{i}=0}|\psi_{i}\rangle\langle\psi_{i}|=\mathbb{I}-\ket{\psi} \langle\psi|\), the QFI reduces to \[F_{\mathrm{Q}}(\psi,\hat{A})=4\left[\langle\psi|\hat{A}^{2}|\psi\rangle-| \langle\psi|\hat{A}|\psi\rangle|^{2}\right]. \tag{9}\] Thus, for pure states, the QFI is four times the variance of the state \(\psi\) with respect to observable \(\hat{A}\). Using convexity of QFI [12], \[F_{\mathrm{Q}}[\rho,\hat{A}]\leq\sum_{i}\lambda_{i}F_{\mathrm{Q}}[\psi_{i}, \hat{A}], \tag{10}\] where \(\lambda_{i}\) and \(\psi_{i}\) are eigenvalues and eigenstates of the density matrix, as above. For mixed states, QFI is \[F_{\mathrm{Q}}[\rho,\hat{A}]=\min_{\{\lambda_{i},\psi_{i}\}}\sum_{i}\lambda_{ i}F_{\mathrm{Q}}(\psi_{i},\hat{A}), \tag{11}\] which is four times the convex roof of variance, minimized over eigen decompositions of the density matrix. Equivalence to symmetrically entangled states:Quantum Fisher information of a multi-qubit system with permutational symmetry is equivalent to that of a qudit. To see this, we express a multi-qubit operations using the collective spin operators \(\mathbf{J}=\{\hat{J}_{x},\hat{J}_{y},\hat{J}_{z}\}\), \[\hat{J}_{i}=\sum_{n=1}^{N}\frac{\sigma_{i}^{n}}{2};\{\hat{J}_{i},\hat{J}_{j}\}= i\epsilon_{ijk}\hat{J}_{k}, \tag{12}\] where \(N\) is the total number of qubits, \(i\in\{x,y,z\}\) and \(\sigma_{i}^{n}\) acts on the \(n^{\mathrm{th}}\) qubit and \(\hat{J}_{i}\) are the \(N+1\) dimensional representations of the \(\mathrm{SU}(2)\) group. Since, \(\mathrm{SU}(2)\) is a subgroup of the \(\mathrm{SU}(N+1)\) group, collective operations on permutationally symmetric qubits can be mapped to operations on a \(d\)-dimensional qudit. Similarly, the \(N+1\) levels of a qudit correspond to multi-qubit Dicke states in the \(\hat{J}_{z}\) basis [21], which are symmetrically entangled states, and this mapping can be used to construct analogous metrological probes on a qudit. ### Non-classicality Quantum Fisher information is closely related to the macroscopicity of a spin state [9; 10]. For multi-qubit systems, QFI can be used to quantify the effective number of systems forming a quantum state \[N_{\mathrm{eff}}(\rho)=\max_{\hat{A}}\frac{F_{\mathrm{Q}}(\rho,\hat{A})}{N}, \tag{13}\] which has a range \(1\leq N_{\text{eff}}(\rho)\leq N\) for pure states. A multi-qubit state is called macroscopic if \(N_{\text{eff}}\) is linear in the system size. Macroscopicity of a qudit can similarly be defined as the effective number of degrees of freedom in a high-dimensional state \[d_{\text{eff}}(\rho)=\max_{\hat{A}}\frac{F_{\text{Q}}(\rho,\hat{A})}{d-1}, \tag{14}\] where \(d\) is the dimension of the qudit, and \(\hat{A}\) is any linear operator. We note that for pure states, \(1\leq d_{\text{eff}}(\rho)\leq d-1\) under a linear parameter encoding. The notion of macroscopicity is closely related to the non-classicality of a state. To define non-classicality \(\mathcal{N}(\rho)\) from a resource theory perspective, we note that any such measure should satisfy four conditions [11; 12]: non-negativity, weak monotonicity, strong monotonicity and convexity. Here, non-negativity means that \(\mathcal{N}(\rho)\geq 0\) and the equality is satisfied if and only if \(\rho\) is a classical state. Weak monotonicity means that \(\mathcal{N}\) cannot be increased by the application of a classical operation. In the context of a system comprising of qubits, classical operations correspond to linear rotations of one or more qubits around the Bloch sphere. For qudits, classical operations could be defined as linear operations that transform a SU(d) coherent state into another SU(d) coherent state [22]. Strong monotonicity means that \(\mathcal{N}\) should not increase when a subset of the system is measured, but, this condition is not relevant for the discussion considering only a single qudit, and convexity denotes. Based on the above requirements, we define non-classicality of a qudit to be \[\mathcal{N}(\rho)=\min_{\{\lambda_{i},\psi_{i}\}}\left[\max_{\hat{A}}\left( \sum_{i}\lambda_{i}\text{Var}(\psi,\hat{A})\right)\right]-(d-1), \tag{15}\] where, and the minimization is over different decompositions of the density matrix. An entanglement measure for permutationally symmetric multi-qubits can be obtained by substituting \(d=N+1\), where \(N\) is the number of qubits. ### Metrological power The quantum Cramer-Rao bound is the quantum analogue of the classical Cramer-Rao bound and determines the precision \(\Delta\theta\) achievable with a quantum probe, \[(\Delta\theta)^{2}\geq\frac{1}{mF_{\text{Q}}[\rho,\hat{A}]}, \tag{16}\] where \(m\) is the number of measurements. For a pure two-level system with \(F_{\text{Q}}=4\), or, \(N\) uncorrelated two-level systems with \(F_{\text{Q}}=4N\), the above equation gives the well-known standard quantum limit. Entangled systems go beyond this limit and have \(F_{\text{Q}}\leq 4N^{2}\), with GHZ states saturating the bound. From a resource theory perspective, a measure of metrological power, \(\mathcal{W}(\rho)\), should be non-negative and \(\mathcal{W}(\rho)>0\) should imply precision beyond that standard quantum limit. Similar to the approach used for continuous-variable systems [12], we define metrological power \(\mathcal{W}(\rho)\) as the amount by which the QFI of a state is greater than the maximum QFI for any classical state \[\mathcal{W}(\rho)=\max\left[d_{\text{eff}}-1,0\right]. \tag{17}\] Metrological power is related to the achievable precision with a state, in that it captures its ability to go beyond the standard quantum limit. It is bounded by non-classicality \[\mathcal{W}(\rho)\leq\mathcal{N}(\rho), \tag{18}\] where the equality holds only for pure states. It is worth noting that not all non-classical states have metrological power, as mixed states could have \(d_{\text{eff}}<1\). ## II Qudit-assisted metrological protocols In this section, we show explicit examples of quantum-enhanced metrology using non-classical states on a qudit. A metrological protocol consists of the following general scheme [5]: (i) probe preparation: a quantum system is initialized to a desired state, (ii) parameter encoding: the state is manipulated using a an operator that depends on an unknown parameter \(\theta\), (iii) readout: the probe is measured in a way that the expectation value of the observable carries information about the encoded parameter and (iv) estimation: calculating the parameter from measurements. The uncertainty \(\Delta\theta\) for a protocol crucially depends on all the four operations and is given by the error propagation formula [23] \[(\Delta\theta)^{2}:=\frac{(\Delta\hat{A})^{2}}{|\partial_{\theta}\langle\hat{A }\rangle|^{2}}, \tag{19}\] where \(\langle\hat{A}\rangle\) is the expectation value of operator \(\hat{A}\) and \((\Delta\hat{A})^{2}\) is the variance. ### Dicke-like state Many-body entangled states are generally expressed in the Dicke basis, which offers a collective way of treating the \(2^{N}\) degrees of freedom of \(N\) qubits using only a \(N+1\) dimensional Hilbert space. Dicke states are denoted in the spin representation as \(\left|J,m_{J}\right\rangle\), where \(J=\nicefrac{{N}}{{2}}\) is the collective spin and \(m_{J}\in[-J,J]\), such that \(\hat{J}_{z}\left|J,m_{J}\right\rangle=m_{J}\left|J,m_{J}\right\rangle\). In resource theoretic terms, Dicke states represent \(k\)-entangled states, where \(k=m_{J}\) for small \(m_{J}\), denoting symmetric superpositions of particles in the excited state. They also called spin number states as they are the spin analogues to the number states. For qudits, the basis states \[\ket{i}\equiv\ket{J_{d},-J_{d}+i};J_{d}=\nicefrac{{(d-1)}}{{2}}, \tag{20}\] where \(i\in[0,d-1]\). From a metrological perspective, Dicke states with \(m_{J}=0\) for integer spins or \(m_{J}=\pm\nicefrac{{1}}{{2}}\) for half-integer spins can lead to achieving the Heisenberg limit [24]. Here, we consider the analogous state \(\ket{\nicefrac{{(d-1)}}{{2}}}\) for odd dimensional qudit and \(\ket{\nicefrac{{4}}{{2}}}\) and \(\ket{\nicefrac{{4}}{{2}}-1}\) for even dimensional qudits. _State preparation:_ Dicke states are generally hard to achieve on entangled systems, but, are much simpler for qudits: any basis state \(\ket{i}\) with \(i\in[1,d-2]\) is a non-classical Dicke-like state. State preparation can be achieved either by initializing a qudit directly to one of these states or starting from a classical state. When the initial state is a coherent state, \(\ket{0}\) or \(\ket{d-1}\), a non-classical Dicke-like state can be prepared by applying Givens rotations, that selectively address a particular transition, between the initial state and the target state. This can be achieved with multi-photon transitions or by applying a series of single-photon transitions that lead to the final state. However, we note that a linear operation alone is not sufficient to initialize to a non-classical Dicke-like state, as it would simply rotate the coherent state to another coherent state. For Givens rotations, a large non-linearity is needed to make the transitions individually addressable. _Phase encoding and readout:_ Since a Dicke state is unchanged under \(z\) rotations, we use a spin rotation about an axis perpendicular to the state, as shown in Fig. 1(a). The quantum Fisher information is given by \[F_{\mathrm{Q}}(\ket{i},\hat{J}_{x})=2i(d-1)-2i+\frac{(d-1)}{2}, \tag{21}\] which shows the standard quantum limit if \(i\in\{0,d-1\}\) and shows a gain in precision for other values, being maximum when \(i=\nicefrac{{(d-1)}}{{2}}\). The rotation angle cannot be read out from measuring \(\hat{J}_{z}\) due to the symmetric nature of the Dicke state. Instead, we use the expectation value of \(\hat{J}_{z}^{2}\), which increases as the state is rotated about the \(x\) axis, until a \(\frac{\pi}{2}\) rotation and decreases thereafter. _Parameter estimation:_ We calculate the expectation value of the observable \(\hat{J}_{z}^{2}\) to estimate the parameter \(\theta\), as shown in Fig. 1(b). For calculating the precision, first, we calculate the variance and the rate of change of the expectation value with respect to the parameter \(\theta\) (Fig. 1(c)), and then use Eq. 19, with results shown in Fig. 1(d). We note that precision diverges as \(\partial_{\theta}\langle\hat{J}_{z}^{2}\rangle\to 0\) and \(\Delta\theta^{2}\) decreases quadratically with the qudit dimension, demonstrating Heisenberg limited scaling. The example of a Dicke-like state explictly uses an SU(2) symmetry for encoding a parameter on a qudit; next, we show an example which works without this assumption. ### GHZ-like state _State preparation:_ The analogous of the GHZ state on a qudit is \(\ket{0}+\ket{d-1}\) (unit norm implied), similar to the superposition of the highest and the lowest weight state in the Dicke basis, given by \(\ket{J,-J}+\ket{J,J}\), which can also be thought of as a spin cat state. In multi-particle systems, GHZ states can be prepared using two-qubit gates that entangle two atoms at a time and can be sequentially applied for multi-qubit entanglement. However, unlike multi-particle systems, a qudit cannot be partitioned into such subsystems; hence, two-qubit gates are not accessible. Instead, Givens rotations that selectively address a two-level subspace within a larger Hilbert space offer a solution for control of qudits. To prepare a GHZ-like state on a qudit, we start with the qudit in the classical state \(\ket{0}\) and apply a \(\frac{\pi}{2}\)-rotation directly coupling the \(\ket{0}\) level to \(\ket{d-1}\), resulting in \((\ket{0}+\ket{d-1})/\sqrt{2}\). Such a transition can be achieved either by multi-photon driving or using a sequence of pulses: one \(\pi/2\) pulse between states \(\ket{0}\) and \(\ket{1}\), and then, \(\pi\)-pulses that shift the population from the \(\ket{1}\) state to the \(\ket{d-1}\) state, resulting in the GHZ-like state \(\ket{0}+\ket{d-1}\). In presence of non-linearities, this state can also be prepared using one-axis twisting. _Phase encoding:_ We encode an unknown parameter \(\theta\) into the relative phase of the GHZ-like state of a qudit. This could take form of interaction with a the magnetic or electric field of unknown strength \(\beta\) for time \(\tau\), that encodes \(\theta=\beta\tau\). The phase encoding operator \(\hat{P}\) is rep Figure 1: Parameter estimation using Dicke-like state on a qudit: a) A parameter \(\theta\) is encoded by rotating a Dicke-like state about an axis perpendicular to the state. b) Expectation value of \(\hat{J}_{z}^{2}\). c) Rate of change of the expectation value with respect to the parameter \(\theta\). d) Precision calculated using the error propagation formula, which diverges as \(\partial_{\theta}\langle\hat{J}_{z}^{2}\rangle\to 0\). We note that \(\Delta\theta\) decreases with the qudit dimension, and is proportional to \(\nicefrac{{1}}{{d^{2}}}\), as can be seen by comparing \(d=4\) and \(d=8\). resented by \[\hat{P}=\text{diag}(0,1,\ldots,d-1), \tag{22}\] such that applying \(e^{i\theta\hat{P}}\) causes the transformation \(\ket{0}+\ket{d-1}\rightarrow\ket{0}+e^{i\theta(d-1)}\ket{d-1}\). We can note that the relative phase picked up by the GHZ-like state is enhanced by a factor of \((d-1)\), that would lead to metrological gain. Readout:The unknown parameter \(\theta\) cannot be measured by directly measuring \(\hat{P}\), which only reveals the average energy level, we describe an interferometric scheme for inducing phase-dependent population shifts to readout the relative phase. In interferometry, the information about the relative phase is accessed by superimposing two states, that combine in a constructive or destructive way depending on the phase difference, which can be inferred from population measurements of the final state. To recombine the two components of the GHZ-like state of a qudit, we apply a \(\pi\) rotation between the \(\ket{d-1}\) and \(\ket{1}\) states, such that \(\ket{0}+e^{i\theta(d-1)}\ket{d-1}\rightarrow\ket{0}+e^{i\theta(d-1)}\ket{1}\). Such a transformation can be obtained either by multi-photon transition between the two levels or a sequence of \(\pi\) pulses coupling the two levels via intermediate levels. After recombining the two components of the GHZ-like state, the qudit is a coherent state, shifted by an angle \(\theta(d-1)\) from the \(x\) axis of the Bloch sphere of the two-level subspace \(\ket{0}\) and \(\ket{1}\). We apply a \(\nicefrac{{\pi}}{{2}}\) rotation between \(\ket{0}\) and \(\ket{1}\) that transforms the state as \[\begin{split}\ket{0}+& e^{i\theta(d-1)}\ket{1} \rightarrow\\ &\quad\left(\ket{0}+i\ket{1}\right)+ie^{i\theta(d-1)}\left(\ket{0 }-i\ket{1}\right)\\ =&\left(1+e^{i\theta(d-1)}\right)\ket{0}+i\left(1-e ^{i\theta(d-1)}\right)\ket{1}.\end{split} \tag{23}\] Substituting \(x=\theta(d-1)\), the coefficient of \(\ket{0}\) is \[\begin{split}(1+e^{ix})&=(2\cos^{2}\frac{x}{2}+2i \sin\frac{x}{2}\cos\frac{x}{2})\\ &=2\cos\frac{x}{2}\left(\cos\frac{x}{2}+i\sin\frac{x}{2}\right) =2e^{i\frac{x}{2}}\cos\frac{x}{2}.\end{split} \tag{24}\] Similarly, the coefficient of \(\ket{1}\) is \(i(1-e^{ix})=2e^{i\frac{x}{2}}\sin\frac{x}{2}\). Thus, the final state after normalization is given by \[\psi=\left(\cos\frac{\theta(d-1)}{2}\ket{0}+\sin\frac{\theta(d-1)}{2}\ket{1} \right), \tag{25}\] ignoring the global phase. The population of the final state depends on the relative phase of the GHZ-like probe, which allows us to estimate the unknown parameter \(\theta\). Specifically, the population of \(\ket{0}\) oscillates as \(\cos^{2}\left((d-1)\theta/2\right)\), where the periodicity increases with the qudit dimension \(d\), which leads to enhanced estimation of \(\theta\). Alternatively, in metrological protocols with GHZ states, the phase is inferred by measuring the qubit states in a complementary basis [5] or using parity oscillations, which could also be done for qudits. Parameter estimation:We calculate the expectation value of the observable \(\hat{P}\) to estimate the parameter \(\theta\), which acts on eigenstates as \(\hat{P}\ket{i}=i\ket{i}\) for \(i\in[0,d-1]\). The mean value \(\langle\hat{P}\rangle=\sin^{2}\frac{\theta(d-1)}{2}\) for the state in Eq. 25. Figure 2(a) shows the oscillation of \(\langle\hat{J}_{z}\rangle=\langle\hat{P}\rangle+\nicefrac{{(d-1)}}{{2}}\), where the increasing frequency of oscillation indicates the enhanced phase sensitivity as the qudit dimension increases. For calculating the precision, first, we calculate the variance using \(\langle\hat{P}^{2}\rangle=\sin^{2}\frac{(d-1)\theta}{2}\) and \((\Delta\hat{P})^{2}:=\langle\hat{P}^{2}\rangle-(\langle\hat{P}\rangle)^{2}\), such that \[(\Delta\hat{P})^{2}=\frac{\sin^{2}((d-1)\theta)}{4}, \tag{26}\] using \(\sin^{2}(a)-\sin^{4}(a)=\sin^{2}(2a)/4\). The above expression forms the numerator for the estimation precision Eq. 19. For the denominator, we calculate the rate of change of \(\langle\hat{P}\rangle\) with respect to the phase \(\theta\), such that \[|\partial_{\theta}\langle\hat{P}\rangle|^{2}=\left(\frac{(d-1)}{2}\sin(d-1) \theta\right)^{2}. \tag{27}\] Substituting Eqs. 26 and 27 into Eq. 19, we find the precision to be \[(\Delta\theta)^{2}=\frac{\sin^{2}((d-1)\theta)}{(d-1)^{2}\sin^{2}((d-1)\theta) }\xrightarrow{\theta\to 0}\frac{1}{(d-1)^{2}}, \tag{28}\] where we can note that the precision increases linearly with \(d-1\). Figure 2(b) shows the precision \(\Delta\theta\) vs \(\theta\) obtained from numerical simulations, which diverges as \(2I\theta\to n\pi\) for \(n\in\mathbb{Z}\). ## III Effect of decoherence We study the scaling of the coherence of a GHZ-like state with respect to the Hilbert space dimension of a qudit and consider dephasing of the state \(\ket{0}+\ket{d-1}\) for different values of \(d\). The density matrix of such a Figure 2: Parameter estimation using a GHZ-like state on a qudit, where a parameter \(\theta\) is encoded by in the relative phase of the two components. a) Expectation value of \(\hat{J}_{z}\) following a generalized interferometry protocol. d) Precision calculated using the error propagation formula, which diverges as \(\partial_{\theta}\langle\hat{J}_{z}^{2}\rangle\to 0\). We note that \(\Delta\theta\) decreases with the qudit dimension, and is proportional to \(\nicefrac{{1}}{{d^{2}}}\), as can be seen by comparing \(d=4\) and \(d=8\). state consists only of four elements: two diagonal elements corresponding to the population in \(\ket{0}\) or \(\ket{d-1}\) and two off-diagonal components that quantify the coherence of the superposition state. We write density matrix of a GHZ-like state under decoherence to be \[\begin{split}\rho^{\mathrm{GHZ}}=\frac{1}{2}\Big{[}& \ket{0}\bra{0}+\ket{d-1}\bra{d-1}\\ &+\ket{0}\bra{d-1}e^{-(d-1)^{2}\gamma t}\cos(d-1)\theta\\ &+\ket{d-1}\bra{0}e^{-(d-1)^{2}\gamma t}\sin(d-1)\theta\Big{]}, \end{split} \tag{29}\] where \(\gamma\) is the rate of dephasing and \(t\) the interaction time of the probe. Thus, QFI can be calculated used Eq. 11 as \[F_{\mathrm{Q}}^{\mathrm{GHZ}}=(d-1)^{2}e^{-(d-1)^{2}\gamma t}. \tag{30}\] We can note that QFI decreases with the dimension \(d\) and results from a linear decrease in coherence time as the dimension of a qudit increases. Such a trade-off between lifetime and the macroscopicity of a GHZ-like state is similarly found in other systems such as harmonic oscillators [11] and maximally entangled GHZ states [14]. Metrological power is given by \[\mathcal{W}(\rho^{\mathrm{GHZ}})=\max\left[(d-1)e^{-(d-1)^{2}\gamma t}-1,0 \right], \tag{31}\] which is positive for small values of \(d\). Since metrological power is bounded by non-classicality (Eq. 17), a non-zero \(\mathcal{W}\) also serves as a witness of non-classicality. ## IV Conclusion We introduced metrological schemes for parameter estimation that achieve the Heisenberg limit using a single \(d\)-dimensional qudit, with precision scaling as \(d-1\), and show in a linear advantage in using qudits. The key advantage in our scheme arises from the use of non-classical states on a qudit, which have a large quantum Fisher information, and can serve as a resource for metrology, shown explicitly through measures of non-classicality and metrological gain introduced here. Our work opens the possibility for realizing quantum-enhanced parameter estimation with independent probes, without the use of entanglement, that could be utilized on several platforms such as nuclear spins [25], atomic spins [26], NV centres [18] and superconducting circuits [17], that host a high-dimensional Hilbert space. Possible extensions of this work include multi-parameter estimation [27] using qudits, and protecting the encoded parameter from decoherence through information scrambling [28]. ## Acknowledgements I thank Abhijeet Alase, Barry C. Sanders and Andrea Morello for useful discussions, and NSERC for funding this work.
2309.04120
Boltzmann sampling with quantum annealers via fast Stein correction
Despite the attempts to apply a quantum annealer to Boltzmann sampling, it is still impossible to perform accurate sampling at arbitrary temperatures. Conventional distribution correction methods such as importance sampling and resampling cannot be applied, because the analytical expression of sampling distribution is unknown for a quantum annealer. Stein correction (Liu and Lee, 2017) can correct the samples by weighting without the knowledge of the sampling distribution, but the naive implementation requires the solution of a large-scale quadratic program, hampering usage in practical problems. In this letter, a fast and approximate method based on random feature map and exponentiated gradient updates is developed to compute the sample weights, and used to correct the samples generated by D-Wave quantum annealers. In benchmarking problems, it is observed that the residual error of thermal average calculations is reduced significantly. If combined with our method, quantum annealers may emerge as a viable alternative to long-established Markov chain Monte Carlo methods.
Ryosuke Shibukawa, Ryo Tamura, Koji Tsuda
2023-09-08T04:47:10Z
http://arxiv.org/abs/2309.04120v1
# Boltzmann sampling with quantum annealers via fast Stein correction ###### Abstract Despite the attempts to apply a quantum annealer to Boltzmann sampling, it is still impossible to perform accurate sampling at arbitrary temperatures. Conventional distribution correction methods such as importance sampling and resampling cannot be applied, because the analytical expression of sampling distribution is unknown for a quantum annealer. Stein correction (Liu and Lee, 2017) can correct the samples by weighting without the knowledge of the sampling distribution, but the naive implementation requires the solution of a large-scale quadratic program, hampering usage in practical problems. In this letter, a fast and approximate method based on random feature map and exponentiated gradient updates is developed to compute the sample weights, and used to correct the samples generated by D-Wave quantum annealers. In benchmarking problems, it is observed that the residual error of thermal average calculations is reduced significantly. If combined with our method, quantum annealers may emerge as a viable alternative to long-established Markov chain Monte Carlo methods. _Introduction._--Boltzmann sampling of the Ising model is central in the studies of critical phenomena [1; 2] and machine learning [3; 4]. Although Markov chain Monte Carlo (MCMC) methods can generate samples according to Boltzmann distribution, they often fall short for large models due to slow mixing [5]. To efficiently perform Boltzmann sampling, various improvement techniques such as exchange Monte Carlo and population annealing have been proposed in statistical physics [6; 7; 8]. As an alternative, quantum annealers (QAs) [9] have been expected to work as a means to achieve accurate Boltzmann sampling [10; 11; 12; 13; 14; 15]. Theoretically, it has been shown that the distribution of quantum annealing samples deviates from Boltzmann distribution [16]. Nevertheless, scientific discussion is still unsettled about whether QA samples can be used as samples of a Boltzmann distribution in practical terms. Recently, Nelson et al. argued that the D-Wave quantum annealer works as an accurate sampler at certain temperature [10], but it does not work well at arbitrary temperatures. Conventional distribution correction techniques such as importance sampling and resampling [5] cannot be applied, because the sampling distribution of a quantum annealer has not been analytically described so far. Liu and Lee proposed a "black-box" distribution correction method based on Stein statistics, where the analytical form of the original distribution is not needed [17]. The original samples are assigned the weights to fit to the target distribution via quadratic programming. In the original paper, the theoretical properties are largely unsolved, but Hodgkinson et al. showed the convergence of Stein correction for samples generated by a Markov chain [18]. This letter investigates how well this method, called Stein correction, works in the distribution correction of QA samples to a Boltzmann distribution with a given temperature. First, we develop a fast approximate algorithm of Stein correction, because \(O(n^{3})\) computational cost of quadratic programming for the number of samples, \(n\), is prohibitive for a large number of samples. In benchmarking studies, we observed that the estimation error of internal energy, magnetic susceptibility, and Binder cumulant of some Ising models decreased in a large extent by Stein correction. This result implies that Stein correction is useful for improving sample quality for applications such as critical phenomena and machine learning. It can also be applied to general quantum computers including NISQ, where distributional error is unavoidable due to environmental noise [19]. _Fast Stein correction._--Denote by \(p(\mathbf{x}),q(\mathbf{x})\) two distributions in \(\mathbf{x}\in\{x_{i}=-1,1\}^{d}\). Let \(\neg_{i}\) denote the sign flip of the \(i\)-th variable. Kernelized Stein discrepancy [20] quantifies the difference between the two distributions as \[S(p,q)=\mathbb{E}_{\mathbf{x},\mathbf{x}^{\prime}\sim q}[k_{p}(\mathbf{x},\mathbf{x}^{\prime})], \tag{1}\] where \(k_{p}(\mathbf{x},\mathbf{x}^{\prime})\) is called Stein kernel that depends on \(p(\mathbf{x})\) and the base kernel \[k(\mathbf{x},\mathbf{x}^{\prime})=\exp\left(-\frac{\sum_{i=1}^{d}\mathbb{I}\{x_{i}\neq x ^{\prime}_{i}\}}{d}\right). \tag{2}\] The function of \(\mathbb{I}\{x_{i}\neq x^{\prime}_{i}\}\) shows 1 for \(x_{i}\neq x^{\prime}_{i}\) and 0 for the others, respectively. Let us define the difference operator as \[\nabla_{\mathbf{x}}f(\mathbf{x})=(f(\mathbf{x})-f(\neg_{1}\mathbf{x}),\ldots,f(\mathbf{x})-f( \neg_{d}\mathbf{x})). \tag{3}\] In addition, the score function \(\mathbf{s}_{p}(\mathbf{x})\in\mathbb{R}^{d}\) is defined as \[[\mathbf{s}_{p}(\mathbf{x})]_{i}=1-p(\neg_{i}\mathbf{x})/p(\mathbf{x}),\ \ (i=1,\ldots,d).\] The Stein kernel is then defined as \[k_{p}(\mathbf{x}_{i},\mathbf{x}_{j}) =\mathbf{s}_{p}(\mathbf{x}_{i})^{\top}k(\mathbf{x}_{i},\mathbf{x}_{j})\mathbf{s}_{p}(\bm {x}_{j})\] \[-\mathbf{s}_{p}(\mathbf{x}_{i})^{\top}\nabla_{\mathbf{x}_{j}}k(\mathbf{x}_{i},\mathbf{x }_{j})\] \[-\mathbf{s}_{p}(\mathbf{x}_{j})^{\top}\nabla_{\mathbf{x}_{i}}k(\mathbf{x}_{i},\mathbf{ x}_{j})\] \[+\text{Tr}(\nabla_{\mathbf{x}_{i},\mathbf{x}_{j}}k(\mathbf{x}_{i},\mathbf{x}_{j})). \tag{4}\] Notably, \(k_{p}(\mathbf{x}_{i},\mathbf{x}_{j})\) depends on \(p(\mathbf{x})\) only through the score functions. Therefore, when \(p(\mathbf{x})\) is the Boltzmann distribution, the Stein kernel does not depend on the normalization constant. The Stein discrepancy defined by Eq. (1) is always nonnegative and zero if \(p(\mathbf{x})\) and \(q(\mathbf{x})\) are identical [18]. Given the \(n\) samples from \(q(\mathbf{x})\), \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\), the discrepancy is approximated as \[\hat{S}(p,q)=\sum_{i,j=1}^{n}w_{i}w_{j}k_{p}(\mathbf{x}_{i},\mathbf{x}_{j}), \tag{5}\] where \(w_{1},\ldots,w_{n}\) are the weight for each sample. In Stein correction [17], the weights are adjusted such that the discrepancy is minimized, \[\hat{\mathbf{w}}=\operatorname*{argmin}_{\mathbf{w}}\left\{\mathbf{w}^{\top}K_{p}\mathbf{w} \text{ s.t. }w_{i}\geq 0,\sum_{i=1}^{n}w_{i}=1\right\}, \tag{6}\] where \(K_{p}\) is the \(n\times n\) matrix where elements are \(k_{p}(\mathbf{x}_{i},\mathbf{x}_{j})\), which is called Stein kernel matrix. A naive implementation of Stein correction requires \(O(n^{2})\) space and \(O(n^{3})\) time. We reduce the complexity by introducing random feature map [21] and exponentiated gradient descent [22]. Using the random feature map, the base kernel defined by Eq. (2) is approximated as the inner product \(k(\mathbf{x},\mathbf{x}^{\prime})\approx\phi(\mathbf{x})^{\top}\phi(\mathbf{x}^{\prime})\) where the feature map \(\phi(\mathbf{x}):\{-1,1\}^{d}\rightarrow\mathbb{R}^{\ell}\) is computed as follows. Let us draw \(\ell\) samples \(\mathbf{\omega}_{1},\ldots,\mathbf{\omega}_{\ell}\) from \[h(\mathbf{\omega})=\prod_{i=1}^{d}\frac{1}{\pi(1+\omega_{i}^{2})}, \tag{7}\] where \(\mathbf{\omega}\) is the \(d\)-dimensional vector with each component \(\omega_{i}\). Also, \(b_{1},\ldots,b_{\ell}\) are sampled from the uniform distribution over \([0,2\pi]\). Then, the feature map is defined as \[\phi(\mathbf{x})=\left(z_{\mathbf{\omega}_{1},b_{1}}\left(\frac{\mathbf{x}+\mathbf{1}}{2d} \right),\ldots,z_{\mathbf{\omega}_{\ell},b_{\ell}}\left(\frac{\mathbf{x}+\mathbf{1}}{2d} \right)\right)^{\top}, \tag{8}\] where \(z_{\mathbf{\omega},b}(\mathbf{x})=\sqrt{\frac{2}{\ell}}\cos(\mathbf{\omega}^{\top}\mathbf{x}+b)\) and \(\mathbf{1}\) is the \(d\)-dimensional vector of \((1,\ldots,1)\). Since the Stein kernel is a linear function of the base kernel, it can also be approximated as \(k_{p}(\mathbf{x},\mathbf{x}^{\prime})\approx\phi_{p}(\mathbf{x})^{\top}\phi_{p}(\mathbf{x})\), where \(\phi_{p}(\mathbf{x})\) is the concatenation of the following vectors: \[\theta_{k}(\mathbf{x})=\frac{p(\neg_{k}\mathbf{x})}{p(\mathbf{x})}\phi(\mathbf{x})-\phi(\neg _{k}\mathbf{x}),\quad(k=1,\ldots,d). \tag{9}\] Using the random feature map, the optimization problem defined by Eq. (6) is rewritten as follows: \[\hat{\mathbf{w}}=\operatorname*{argmin}_{\mathbf{w}}\left\{f(\mathbf{w})\text{ s.t. }w_{i}\geq 0,\sum_{i=1}^{n}w_{i}=1\right\}, \tag{10}\] where \(f(\mathbf{w})=\|\sum_{i=1}^{n}w_{i}\phi_{p}(\mathbf{x}_{i})\|^{2}\). This is a convex optimization problem with nonnegativity and normalization constraints. When the standard gradient descent algorithm is applied, the constraints are violated every time the parameters are updated. In this case, exponentiated gradient descent is known to work well [22], because constraint violation never happens. The update is described as \[w_{t+1,i}=\frac{w_{t,i}\exp(-\eta[\nabla f(\mathbf{w}_{t})]_{i})}{Z_{t}}, \tag{11}\] where \[Z_{t}=\sum_{i=1}^{n}w_{t,i}\exp(-\eta[\nabla f(\mathbf{w}_{t})]_{i}), \tag{12}\] and \(\eta\) is the learning rate. The modification shown above reduces the space requirement to \(O(n\ell)\). Each update takes only \(O(n)\) time, enabling us to deal with a large number of samples. The implementation of a fast Stein correction can be found on GitHub [https://github.com/tsudalab/fast-stein-correction](https://github.com/tsudalab/fast-stein-correction). _Boltzmann sampling.--_We are engaged in sampling from the Boltzmann distribution, \(p(\mathbf{x})\sim\exp[-\beta H_{\text{Ising}}(\mathbf{x})]\), \(\mathbf{x}\in\{-1,1\}^{d}\), where the Hamiltonian is described as \[H_{\text{Ising}}(\mathbf{x})=-\sum_{i,j\in E}J_{ij}x_{i}x_{j}-\sum_{i\in V}h_{i}x_{i}. \tag{13}\] \(\beta\) denotes the inverse temperature, \(V\subseteq[1,d]\) and \(E\subseteq[1,d]\times[1,d]\). Here, we assume that the parameters \(J_{ij}\) and \(h_{i}\) are in the range of \(-1\) and \(1\). The thermal average of observable \(\mathcal{O}(\mathbf{x})\) is defined by \[\langle\mathcal{O}(\mathbf{x})\rangle_{\beta}=\frac{\text{Tr}\mathcal{O}(\mathbf{x}) \exp[-\beta H_{\text{Ising}}(\mathbf{x})]}{\text{Tr}\exp[-\beta H_{\text{Ising}}(\bm {x})]}. \tag{14}\] Since the trace calculation is impossible for large models, this trace is approximately replaced to the average of some samples. go When \(\beta H_{\text{Ising}}(\mathbf{x})\) is solved by QA with short annealing time, the distribution of samples is more wide spread around the ground state. If we consider that this distribution is Boltzmann distribution, the thermal average is calculated as \[\langle\mathcal{O}(\mathbf{x})\rangle_{\beta}^{\text{QA}}=\frac{1}{n}\sum_{i=1}^{n} \mathcal{O}(\mathbf{x}_{i}), \tag{15}\] where \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\) are the samples by QA. But, the distribution by QA deviates from the Boltzmann distribution, and the correction is needed. By performing the Stein correction where \(p(\mathbf{x})\) and \(q(\mathbf{x})\) are the Boltzmann distribution with \(\beta\) and the distribution by QA, respectively, weights \(\hat{w}_{i}\) for each sample are evaluated. Using the weights, the thermal average is approximately obtained by \[\langle\mathcal{O}(\mathbf{x})\rangle_{\beta}^{\text{SC}}=\sum_{i=1}^{n}\hat{w}_{i} \mathcal{O}(\mathbf{x}_{i}). \tag{16}\] _Results._--In our experiments, we employ 16-bit Ising Hamiltonians proposed by Nelson et al.[10] for benchmarking. They are called GSD_X and the number X indicates the number of degenerated ground states. The Hamiltonians are designed to fit the Chimera topology of D-Wave 2000Q systems, but here the samples are generated by Advantage System 6.2, because 2000Q is already out of service in their cloud platform. Hamiltonians are embedded in the Pegasus topology of Advantage using minorminer python package [23]. First, we evaluate the efficiency and approximation error of fast Stein correction using GSD_8. The residual error of the Stein kernel matrix \(K_{p}\) is defined as \(\|K_{p}-K_{p}^{*}\|/\|K_{p}^{*}\|\), where \(K_{p}^{*}\) is the exact matrix. Throughout the letter, the learning rate is \(\eta=10^{-5}\) and the number of updates is \(3,000\). An exact correction was performed by cvxopt [24]. Figure 1a shows the computation time of Stein correction depending on the number of samples when \(\ell=5,000\). The fast correction was orders of magnitude faster than the exact one against the number of samples. The residual error by the random feature map decreased as the number of features was increased (see Fig. 1b), showing that the more random features are preferred for accurate approximation. In the following analysis, calculations are performed with \(\ell=5,000\), indicating a sufficiently better residual error and taking computational time into account. Next, we observed how fast Stein correction improves the accuracy of estimating observables on GSD_8, GSD_38, and GSD_F_6. For GSD_F_6, the finite fields \(h_{i}\) are imposed. The calculated thermal averages of observable are internal energy \(E(\beta)\), magnetic susceptibility \(\chi(\beta)\), and Binder cumulant \(U_{4}(\beta)\) defined by \[E(\beta) =\langle H_{\text{Ising}}(\mathbf{x})\rangle_{\beta}, \tag{17}\] \[\chi(\beta) =\beta((\sum_{i=1}^{d}x_{i})^{2})_{\beta},\] (18) \[U_{4}(\beta) =1-\langle(\sum_{i=1}^{d}x_{i})^{4}\rangle_{\beta}/3\langle(\sum_ {i=1}^{d}x_{i})^{2}\rangle_{\beta}^{2}, \tag{19}\] where \(d\) is the system size. For each observable, we compute the residual error as \(\|y-y^{*}\|/y^{*}\), where \(y\) is the thermal average of observables calculated by Eqs. (15) and (16) and \(y^{*}\) is the exact value computed via brute-force enumeration. For each \(\beta\), 10,000 samples are generated by a D-Wave quantum annealer. In addition, the Metropolis method, a basic MCMC method, was also applied. Here, the first 8,000 samples were used for burn-in and the remaining ones for estimation. The results at annealing time \(5\mu s\) over a range of inverse temperatures are shown in Fig. 2. For each case, 5 independent runs are conducted and the mean values of residual error are evaluated. The accuracy of fast Stein correction outperformed the original quantum annealing samples in all cases, showing the effectiveness of our approach. Furthermore, the error by fast Stein correction was consistently smaller than that by MCMC. These results show that fast Stein correction has the potential to expand the applicablity range of quantum annealers significantly and may replace MCMC in diverse tasks of Figure 1: Results for Stein correction on GSD_8. (a) Computational time of exact and fast Stein correction depending on the number of samples \(n\) when \(\ell=5,000\). The inset is the log scale figure. (b) Residual error of the Stein kernel matrix \(K_{p}\) against the number of random features \(\ell\) when \(n=1,000\). Five independent runs are performed, and the mean and standard deviation are plotted as lines and error bars, respectively. discrete sampling. _Conclusion.--_In summary, we demonstrated that fast Stein correction is a helpful companion of quantum annealers and fundamentally enhance their usability. The advantage of quantum samples is that they are not locally concentrated, whereas MCMC samples have difficulty to cover the whole space. Although Stein correction cannot bring the distributional error to zero, it would be particularly useful to sample from highly constrained spaces [25], where global mixing by MCMC is extremely hard. Our future work involves the application of our method to machine learning and statistical physics and other highly scalable Ising machines such as coherent Ising machines [26] and GPU-based algorithms [27]. RS thanks participants at AQC2023 in Albuquerque, New Mexico for fruitful discussions. This work is supported by AIP Kasoku JPMJCR21U2, JST CREST JPMJCR21O2, JST ERATO JPMJER1903, KAKENHI 19H05819 and MEXT JPMXP1122712807.
2303.18234
Weyl Gravity in Covariant Hamiltonian Formalism
We find covariant canonical formalism for Weyl invariant gravity. We discuss constraint structure of this theory and its gauge fixed form.
J. Kluson, B. Matous
2023-03-31T17:48:31Z
http://arxiv.org/abs/2303.18234v1
###### Abstract ###### Abstract We find covariant canonical formalism for Weyl invariant gravity. We discuss constraint structure of this theory and its gauge fixed form. **Weyl Gravity in Covariant Hamiltonian Formalism** J. Kluson\({}^{\dagger}\) and B. Matous\({}^{\dagger\ddagger}\)1 Footnote 1: Email addresses: J. Kluson: [email protected], B. Matous: [email protected] \({}^{\dagger}\) _Department of Theoretical Physics and Astrophysics, Faculty of Science, Masaryk University, Kotlarska 2, 611 37, Brno, Czech Republic_ \({}^{\ddagger}\) _North-Bohemian Observatory and Planetarium in Teplice,_ _Kopernikova 3062, 415 01, Teplice, Czech Republic_ ## 1 Introduction and Summary It is well known that theories with reduced diffeomorphism invariance are far less restricted than diffeomorphism invariant theories, striking example is famous Horava-Lifschitz gravity [1, 2]. Another example of theories with restricted diffeomorphism are theories invariant under transverse diffeomorphisms and Weyl transformations [3, 4, 5]. These theories offer very interesting alternative to General Relativity (GR) and they firstly emerged with the observation that theory of self-interacting gravitons does not need to be General Relativity. Instead such alternatives could be Weyl transverse gravities (WTG) or unimodular gravities, for recent review, see for example [6, 7, 8]. It can be shown that classical solutions of WTG and GR equations of motions are equivalent however WTG or their gauge fixed version which is unimodular gravity imply that cosmological constant is radiative stable [11], for recent extended discussion see [7]. Another interesting check of the consistency of WTG was given in [9, 10], where Noether charge formalism for these theories was developed. We would like again stress that this is non-trivial result due to the restricted diffeomorphism invariance of WTG theories. Since WTG theory possesses many interesting properties we mean that it is natural to study WTG from different point of view. In this paper we focus on covariant canonical formulation, known as Weyl-De Donder formalism [12, 13], of this theory. Main advantage of Weyl-De Donder formalism is that it treats all partial derivatives as equivalent when we define conjugate momenta which is especially useful in case of manifestly diffeomorphism invariant theories. This alternative treatment of the canonical formalism of field theories was further developed for example in [14, 15, 16], for review, see [17]2. Footnote 2: For another interesting applications of covariant canonical formalism, see for example [18, 19]. In order to find covariant canonical formalism of WTG theory we should proceed in similar way as in case of Einstein-Hilbert action [20, 23] when we split Lagrangian into bulk part and boundary part. In case of WTG theory we should be very careful due to absence of the determinant of metric in the action and we find that corresponding form of bulk Lagrangian is different from Einstein-Hilbert action. Then we proceed to the definition of conjugate momenta. Following very careful analysis presented in [20, 22] we introduce new variable \(f^{ab}\) instead of \(g^{ab}\) that is related to \(g^{ab}\) by point transformation \(f^{ab}=\sqrt{-g}g^{ab}\). An importance of this variable was already stressed in [24, 25, 26]. As was argued in [20] canonical form of Einstein Hilbert action has remarkable simple form expressed with the help of variables \((f^{ab},N^{c}_{ab})\) and it is also independent on square root of \(f\). In case of WTG gravity the situation is slightly different when introducing \(f^{ab}\) and conjugate momentum \(N^{c}_{ab}\) again simplifies canonical form of the action significantly however the Hamiltonian still depends on the polynomial of the determinant of matrix \(f^{ab}\). On the other hand we will show that this fact is crucial for the preservation of the primary constraints \({\cal G}^{c}\equiv f^{ab}N^{c}_{ab}\) whose presence is reflection of the invariance of the action under Weyl rescaling of metric. In fact, in terminology of Dirac constrains system it is natural to call \({\cal G}^{c}\) as the first class constraint. Then we show that this gauge symmetry can be naturally fixed by introducing unimodular constraint \(\sqrt{-f}=K\) where \(K\) is constant. In other words we reproduce using covariant canonical formalism that gauge fixed version of WTG is unimodular gravity. Again this is rather non-trivial result due to the fact that it is not completely clear how to deal with constraint systems in covariant canonical gravity. As the next step we perform covariant canonical analysis of Weyl gravity which is formulated without auxiliary metric 3. We again perform splitting of the Lagrangian into bulk and boundary term. Then we introduce new variable \(f^{ab}=(-g)^{\alpha}g^{ab}\) where \(\alpha\) is arbitrary parameter. We choose general \(\alpha\) in order to analyze possible dependence of the Hamiltonian on \(\alpha\). Surprisingly we find that the Hamiltonian does not depend on \(\alpha\) at all. This is very remarkable result. Then we identify corresponding Hamiltonian and primary constraints and we show that they have exactly the same form as in case of the WTG theory formulated in terms of auxiliary metric. Finally we express the boundary Lagrangian as function of canonical variables and we show that it can be derived from the bulk part of the Lagrangian which is in agreement with the holographic relation between bulk and boundary Lagrangians as was shown for example in [21]. We mean that this is again non-trivial result due to the fact that WTG theory is not invariant under full diffeomorphism. Let us outline our results and suggest possible extension of this work. We found covariant canonical formalism for WTG gravity. We identified primary constraint which is generator of Weyl transformation. We also found corresponding equations of motion and we argued that this gauge freedom can be fixed by unimodular constraint. On the other hand there is an important problem in this analysis which is the fact that the equations of motion of gauge fixed WTG gravity do not reproduce equations of motion of unimodular gravity that were derived recently in [27]. Unfortunately we are not able to identify origin of non-equivalence of these two formulations. It is possible that they are hidden in the basic structure of covariant canonical formalism or our approach how we deal with the constraints in covariant canonical gravity is too naive and more powerful techniques, as for example one developed by Kanatchikov in [15] could be more appropriate for this analysis. We hope to return to this problem in future. We also found covariant Hamiltonian for WTG theory formulated without auxiliary metric and we determined the boundary term as function of canonical variables. We also shown that this boundary term can be expressed with the variation of the bulk term with respect to the derivative of canonical variable which is in agreement with the holographic interpretation of WTG gravity. This paper is organized as follows. In the next section (2) we introduce WTG gravity formulated with the auxiliary metric and we determine corresponding covariant Hamiltonian. Then in section (3) we perform the same analysis in case of WTG gravity formulated in terms of physical metric and we again find corresponding Hamiltonian and primary constraints. Weyl Invariant Theory of Gravity in Covariant Formalism In this section we present basic facts about Weyl invariant gravity and we find its covariant form. The natural formulation of Weyl invariant gravity is based on an introduction of auxiliary metric \[\tilde{g}_{ab}=\left(\frac{\omega^{2}}{-\det g}\right)^{\frac{1}{n}}g_{ab} \tag{1}\] that is manifestly invariant under rescaling \[g^{\prime}_{ab}(x)=\Omega(x)g_{ab}(x). \tag{2}\] Note that \(n\) labels number of space-time dimensions. Further, \(\omega(x)\) can be generally \(n\) dimensional volume form. For simplicity we will presume that \(\omega\) is constant. Then we can write an action for Weyl gravity in the form [4, 5] \[S=\int d^{n}x{\cal L}\,\quad{\cal L}=\frac{1}{16\pi}\omega\tilde{R}(\tilde{g}). \tag{3}\] In order to find covariant Hamiltonian formulation of Weyl gravity it is natural to split Lagrangian into bulk and boundary term. Recall that \(\tilde{R}\) can be written as \[\tilde{R}=\tilde{Q}_{k}^{\ \small{mnl}}\tilde{R}^{k}_{\ \small{mnl}}\,\] \[\tilde{R}^{k}_{\ \small{mnl}}=\partial_{n}\tilde{\Gamma}^{k}_{lm}- \partial_{l}\tilde{\Gamma}^{k}_{\ nm}+\tilde{\Gamma}^{k}_{np}\tilde{\Gamma}^{p }_{lm}-\tilde{\Gamma}^{k}_{lp}\tilde{\Gamma}^{p}_{mn}\,\] \[\tilde{Q}_{k}^{\ \small{mnl}}=\frac{1}{2}(\tilde{g}^{ml} \delta^{n}_{k}-\tilde{g}^{mn}\delta^{l}_{k})\.\] From the definition of \(\tilde{Q}\) we get that it is anti-symmetric in the two last indices \(\tilde{Q}^{mnl}_{k}=-\tilde{Q}^{mln}_{k}\). Then we can write the scalar curvature as \[\tilde{R}=2\partial_{n}(\tilde{Q}_{k}^{\ \small{mnl}}\tilde{\Gamma}^{k}_{lm})-2 \tilde{\Gamma}^{k}_{lm}\partial_{n}\tilde{Q}_{k}^{\ \small{mnl}}+2\tilde{Q}_{k}^{\ \small{mnl}}\tilde{\Gamma}^{k}_{np}\tilde{\Gamma}^{p}_{lm}\,, \tag{5}\] from this we immediately get both parts of Lagrangians. The boundary part \[{\cal L}_{bound}=\frac{\omega}{16\pi}\partial_{n}(2\tilde{Q}_{k}^{\ \small{mnl}}\tilde{\Gamma}^{k}_{lm})=\frac{\omega}{16\pi} \partial_{n}(\tilde{g}^{ml}\tilde{\Gamma}^{n}_{lm}-\tilde{g}^{mn}\tilde{ \Gamma}^{l}_{lm})\,, \tag{6}\] and the bulk part \[\mathcal{L}_{bulk}=\frac{\omega}{8\pi}\tilde{Q}_{k}^{\;\;mnl}\tilde{ \Gamma}_{np}^{k}\tilde{\Gamma}_{lm}^{p}-\frac{\omega}{8\pi}\tilde{\Gamma}_{lm}^{ k}\partial_{n}\tilde{Q}_{k}^{\;mnl}=\] \[=\frac{\omega}{16\pi}\left(\tilde{g}^{mn}\tilde{\Gamma}_{np}^{l} \tilde{\Gamma}_{lm}^{p}-\tilde{g}^{mn}\tilde{\Gamma}_{ml}^{l}\tilde{\Gamma}_{ np}^{p}\right)\,\] where we have used \[2\partial_{n}\tilde{Q}_{k}^{\;\;mnl}=\delta_{k}^{l}(\tilde{\Gamma}_{np}^{m} \tilde{g}^{pn}+\tilde{\Gamma}_{np}^{n}\tilde{g}^{mp})-\tilde{\Gamma}_{kp}^{m} \tilde{g}^{pl}-\tilde{\Gamma}_{kp}^{l}\tilde{g}^{mp}. \tag{8}\] Before we proceed to the covariant canonical formalism we should stress one important point which is the fact that \(\tilde{\Gamma}_{ra}^{r}\) vanishes identically. In more details, writing \(\tilde{g}_{mn}\) as \(\tilde{g}_{mn}=\Omega g_{mn}\,\Omega=\frac{\omega^{\frac{2}{n}}}{(-g)^{ \frac{1}{n}}}\) we get \[\tilde{\Gamma}_{ri}^{r}=\frac{1}{2}\tilde{g}^{rm}\partial_{i} \tilde{g}_{mr}=\frac{1}{2}\frac{\partial_{i}g}{g}+\frac{n}{2\Omega}\partial_{i }\Omega=0 \tag{9}\] as follows from the fact that \[\partial_{i}\Omega=-\frac{\Omega}{n}\frac{\partial_{i}g}{g}. \tag{10}\] Then using the condition \(\tilde{\Gamma}_{ri}^{r}=0\) the Lagrangian simplifies considerably \[\mathcal{L}=\mathcal{L}_{bound}+\mathcal{L}_{bulk}\,\] \[\mathcal{L}_{bulk}=\frac{\omega}{16\pi}\tilde{\Gamma}_{nk}^{m} \tilde{g}^{kl}\tilde{\Gamma}_{lm}^{n}\,\quad\mathcal{L}_{bound}=\frac{\omega}{16\pi} \partial_{n}\left[\tilde{g}^{ml}\tilde{\Gamma}_{lm}^{n}\right]\.\] Now we are ready to find covariant canonical formulation of WTG gravity. As the first step we introduce suitable canonical variables. Recall that the theory is formulated with the help of auxiliary metric (1). At first sight we should select \(g_{mn}\) as the canonical variable. On the other hand it was argued by Padmanabhan in many places, see for example [20], that natural variable for the study of dynamics of gravity should be chosen \(f^{ab}\) that is defined as \[f^{ab}=\sqrt{-g}g^{ab}. \tag{12}\] In fact, an importance of this object was already stressed in [26, 24, 25]. Then it is natural to find direct relation between \(\tilde{g}_{mn}\) and \(f^{mn}\). First of all from (12) we obtain \[f=\det f^{ab},\quad(-f)=(-g)^{\frac{n-2}{2}},\quad(-g)=(-f)^{\frac{2}{n-2}} \tag{13}\] Then after some manipulation we get direct relation between \(\tilde{g}_{ab}\) and \(f^{ab}\) in the form \[\tilde{g}^{mn}=\left(\frac{1}{-\omega^{2}f}\right)^{\frac{1}{n}}f^{mn},\quad \det\tilde{g}^{mn}=-\frac{1}{\omega^{2}}\, \tag{14}\] where \(\tilde{g}^{mn}\) is inverse to \(\tilde{g}_{mn}\),\(\tilde{g}_{mn}\tilde{g}^{nk}=\delta^{k}_{m}\). Clearly (14) is point transformation. Having selected \(f^{ab}\) as canonical variable we are ready to determine corresponding conjugate momenta as \[N^{c}_{ab}=\frac{\partial\mathcal{L}_{bulk}}{\partial(\partial_ {c}f^{ab})}=\frac{\partial\mathcal{L}_{bulk}}{\partial(\partial_{k}\tilde{g}_ {mn})}\frac{\partial(\partial_{k}\tilde{g}_{mn})}{\partial(\partial_{c}f^{ab })}=\] \[=-M^{kmn}\left(\tilde{g}_{mr}\frac{\partial(\partial_{k}\tilde{g}^ {rs})}{\partial(\partial_{c}f^{ab})}\tilde{g}_{sn}\right)\,\] where \(M^{kmn}\) is defined as \[M^{kmn}=\frac{\partial\mathcal{L}_{bulk}}{\partial(\partial_{k}\tilde{g}_{mn })}=\frac{\omega}{8\pi}\frac{\partial\tilde{\Gamma}^{r}_{ps}}{\partial( \partial_{k}\tilde{g}_{mn})}\tilde{g}^{sl}\tilde{\Gamma}^{p}_{lr}=\frac{ \omega}{16\pi}\tilde{g}^{mr}\tilde{\Gamma}^{k}_{rl}\tilde{g}^{ln}\, \tag{16}\] as follows from definition of \(\mathcal{L}_{bulk}\) given in (11) and where we also used following variation \[\frac{\partial\tilde{\Gamma}^{r}_{ps}}{\partial(\partial_{k}\tilde{g}_{mn})} =\frac{1}{4}\tilde{g}^{rt}(\delta^{k}_{p}(\delta^{m}_{t}\delta^{n}_{s}+ \delta^{n}_{t}\delta^{m}_{s})+\delta^{k}_{s}(\delta^{m}_{t}\delta^{n}_{p}+ \delta^{n}_{t}\delta^{m}_{p})-\delta^{k}_{t}(\delta^{m}_{p}\delta^{n}_{s}+ \delta^{n}_{p}\delta^{m}_{s}). \tag{17}\] Since (14) is point transformation we obtain \[\frac{\partial(\partial_{k}\tilde{g}^{rs})}{\partial(\partial_{ c}f^{ab})}=\delta^{c}_{k}\frac{\partial\tilde{g}^{rs}}{\partial f^{ab}}=\] \[=\delta^{c}_{k}\frac{1}{\omega^{\frac{2}{n}}}(-f)^{-\frac{1}{n}} \left[\frac{1}{2}(\delta^{r}_{a}\delta^{s}_{b}+\delta^{s}_{a}\delta^{r}_{b})- \frac{1}{n}f_{ab}f^{rs}\right]\equiv\delta^{c}_{k}B^{rs}_{ab}\.\] Then the momentum conjugate to \(f^{ab}\) is equal to \[N^{c}_{ab}=-\frac{\omega}{16\pi}\tilde{\Gamma}^{c}_{rs}\frac{1}{ \omega^{\frac{2}{n}}}(-f)^{-\frac{1}{n}}[\frac{1}{2}(\delta^{r}_{a}\delta^{s}_{b }+\delta^{s}_{a}\delta^{r}_{b})-\frac{1}{n}f_{ab}f^{rs}]=\] \[=-\frac{\omega^{\frac{n-2}{n}}}{16\pi}(-f)^{-\frac{1}{n}}[\tilde{ \Gamma}^{c}_{ab}-\frac{1}{n}\tilde{\Gamma}^{c}_{rs}f^{rs}f_{ab}]\.\] From (19) we immediately obtain \(N^{c}_{ab}f^{ab}=0\) and hence we have \(n\) primary constraints \[{\cal G}^{c}\equiv N^{c}_{ab}f^{ab}\approx 0. \tag{20}\] As the next step we determine bare Hamiltonian that is defined as \[{\cal H}_{B}=\partial_{c}f^{ab}N^{c}_{ab}-{\cal L}_{bulk}. \tag{21}\] Since \({\cal L}_{bulk}\) is function of \(\tilde{g}_{mn}\) instead of \(f^{ab}\) it is natural to perform following manipulation \[\partial_{c}f^{ab}N^{c}_{ab}=-\partial_{c}f^{ab}M^{kmn}\tilde{g}_{mr}\delta^{ c}_{k}B^{rs}_{ab}\tilde{g}_{sn}=\partial_{k}\tilde{g}_{mn}M^{kmn} \tag{22}\] using the fact that \[\partial_{k}\tilde{g}_{mn}=-\tilde{g}_{mr}\partial_{k}\tilde{g}^{rs}\tilde{g} _{sn}=-\tilde{g}_{mr}\frac{\delta\tilde{g}^{rs}}{\delta f^{ab}}\partial_{k}f^ {ab}\tilde{g}_{sn}=-\tilde{g}_{mr}B^{rs}_{ab}\partial_{k}f^{ab}\tilde{g}_{sn}. \tag{23}\] Then the Hamiltonian density has the form \[{\cal H}_{B}=\partial_{c}f^{ab}N^{c}_{ab}-{\cal L}_{bulk}=\partial_{k}\tilde{g }_{mn}M^{kmn}-{\cal L}_{bulk}=\frac{\omega}{16\pi}\tilde{\Gamma}^{a}_{cm} \tilde{g}^{mb}\tilde{\Gamma}^{c}_{ab}. \tag{24}\] Finally we should express Hamiltonian density as function of canonical variables. This is slightly problematic due to the fact that the relation between \(\tilde{\Gamma}^{a}_{ab}\) and \(N^{c}_{ab}\) is not invertible. For that reason let us calculate following combination \[N^{c}_{ab}\tilde{g}^{ad}N^{b}_{dc}={\bf A}^{2}[\tilde{\Gamma}^{c}_{ab}\tilde{ g}^{ad}\tilde{\Gamma}^{b}_{dc}+\frac{1}{n^{2}}\tilde{\Gamma}^{c}_{rs}\tilde{g}^{ rs}\tilde{g}_{cb}\tilde{\Gamma}^{b}_{mn}\tilde{g}^{mn}]\, \tag{25}\] where \({\bf A}=-\frac{\omega^{\frac{n-2}{n}}}{16\pi}(-f)^{-\frac{1}{n}}\). We further have \[N^{r}_{ra}\tilde{g}^{ab}N^{t}_{tb}=\frac{{\bf A}^{2}}{n^{2}}\tilde{\Gamma}^{a} _{rs}\tilde{g}^{rs}\tilde{g}_{ab}\tilde{\Gamma}^{b}_{mn}\tilde{g}^{mn}. \tag{26}\] Collecting these terms together we obtain that the Hamiltonian density is equal to \[{\cal H}_{B}=\frac{16\pi}{\omega^{\frac{n-2}{n}}}(-f)^{\frac{1}{n}}[N^{c}_{ab}f^{ ad}N^{b}_{dc}-N^{r}_{ra}f^{ab}N^{t}_{tb}]\.\] Then the canonical form of the action has the form \[S=\int d^{n}x(\partial_{c}f^{ab}N^{c}_{ab}-{\cal H}_{B}-\Lambda_{c}{\cal G}^{c} )\,\] where we included primary contraints \({\cal G}^{c}\approx 0\) multiplied by Lagrange multipliers \(\Lambda_{c}\). Note that we treat \(\Lambda_{c}\) as independent variables which should be varied when we search for extrema of the action. Explicitly, the variation of the action has the form \[\delta S=\int d^{n}x\left(\partial_{c}\delta f^{ab}N^{c}_{ab}+ \partial_{c}f^{ab}\delta N^{c}_{ab}-\frac{\delta{\cal H}_{B}}{\delta f^{ab}} \delta f^{ab}-\frac{\delta{\cal H}_{B}}{\delta N^{c}_{ab}}\delta N^{c}_{ab}-\right.\] \[\left.-\delta\Lambda_{c}{\cal G}^{c}-\Lambda_{d}\frac{\delta{ \cal G}^{d}}{\delta f^{ab}}\delta f^{ab}-\Lambda_{d}\frac{\delta{\cal G}^{d}} {\delta N^{c}_{ab}}\delta N^{c}_{ab}\right)=0\] that gives following equations of motion \[\partial_{c}f^{ab}-\frac{\delta{\cal H}_{B}}{\delta N^{c}_{ab}}- \Lambda_{d}\frac{\delta{\cal G}^{d}}{\delta N^{c}_{ab}}=0\,\] \[\partial_{c}N^{c}_{ab}+\frac{\delta{\cal H}_{B}}{\delta f^{ab}}+ \Lambda_{d}\frac{\delta{\cal G}^{d}}{\delta f^{ab}}=0\,\] \[{\cal G}^{c}=0\,\] or explicitly \[\partial_{c}f^{ab}-\Lambda_{c}f^{ab}-\] \[-\frac{16\pi}{\omega^{\frac{n-2}{n}}}(-f)^{\frac{1}{n}}\left(f^{ db}N^{a}_{cd}+f^{da}N^{b}_{cd}-\delta^{a}_{c}f^{bs}N^{m}_{ms}-\delta^{b}_{c}f^{ as}N^{m}_{ms}\right)=0\,\] \[\partial_{c}N^{c}_{ab}+\Lambda_{c}N^{c}_{ab}+\] \[+\frac{16\pi}{n\omega^{\frac{n-2}{n}}}(-f)^{\frac{1}{n}}f_{ab}(N^ {m}_{cd}f^{dn}N^{c}_{mn}-N^{m}_{ms}f^{sr}N^{n}_{nr})+\] \[+\frac{16\pi}{\omega^{\frac{n-2}{n}}}(-f)^{\frac{1}{n}}(N^{m}_{ na}N^{n}_{mb}-N^{m}_{ma}N^{n}_{nb})=0\,\quad N^{c}_{ab}f^{ab}=0\.\] Let us now return to the constraint \({\cal G}^{c}\approx 0\) and study its time evolution. From the equations of motion above we get \[N^{c}_{ab}\partial_{c}f^{ab}=\Lambda_{c}N^{c}_{ab}f^{ab}+\frac{32\pi}{\omega^{ \frac{n-2}{n}}}(-f)^{\frac{1}{n}}\left(f^{db}N^{a}_{cd}N^{c}_{ab}-N^{c}_{cb}f^{ bs}N^{m}_{ms}\right)\, \tag{32}\] \[f^{ab}\partial_{c}N^{c}_{ab}=-\Lambda_{c}N^{c}_{ab}f^{ab}-\frac{16 \pi}{\omega^{\frac{n-2}{n}}}(-f)^{\frac{1}{n}}\left(f^{db}N^{a}_{cd}N^{c}_{ab}- N^{c}_{cb}f^{bs}N^{m}_{ms}\right)-\] \[-\frac{16\pi}{\omega^{\frac{n-2}{n}}}(-f)^{\frac{1}{n}}(N^{m}_{ma }f^{ab}N^{n}_{mb}-N^{m}_{ma}f^{ab}N^{n}_{nb})\.\] If we combine these two equations together we get \[\partial_{c}(N^{c}_{ab}f^{ab})=0 \tag{34}\] that shows that \({\cal G}^{c}\approx 0\) is conserved during time evolution without any restriction on the value of Lagrange multiplier \(\Lambda_{c}\). In other words \({\cal G}^{c}\approx 0\) can be interpreted as the first class constraint. Then Lagrange multipliers \(\Lambda_{c}\) will be determined by following way. Firstly we contract the first equation in (31) with \(f_{ab}\) and we obtain \[f_{ab}\partial_{c}f^{ab}=\Lambda_{c}n\,\] so that we can express \(\Lambda_{c}\) as \[\Lambda_{c}=\frac{1}{nf}\partial_{c}\det f. \tag{36}\] The situation simplifies even more when we impose the condition \[{\cal F}\equiv-\det f-K=0\, \tag{37}\] where \(K\) is constant. This constraint determines the value of the determinant of matrix \(f^{ab}\) and it is known as unimodular constraint. Then from (36) we immediately get \(\Lambda_{c}=0\). On the other hand we should interpret \({\cal F}\) as gauge fixing constraint. Such a constraint has to be added into action multiplied by appropriate Lagrange multiplier \(\Omega\) in order to be consistently included into dynamics. Since \({\cal F}\) does not depend on \(N^{c}_{ab}\) it is clear that the variation of \({\cal F}\) only contributes to the equations of motion for \(N^{c}_{ab}\) by factor \(\Omega\frac{\delta{\cal F}}{\delta f^{ab}}\). Explicitly, the equations of motion for \(N^{c}_{ab}\) are modified by following way \[0=\partial_{c}N^{c}_{ab}+\Lambda_{c}N^{c}_{ab}+\] \[+\frac{16\pi}{n\omega^{\frac{n-2}{n}}}(-f)^{\frac{1}{n}}f_{ab}(N^{ m}_{cd}f^{dn}N^{c}_{mn}-N^{m}_{ms}f^{sr}N^{n}_{nr})+\] \[+\frac{16\pi}{\omega^{\frac{n-2}{n}}}(-f)^{\frac{1}{n}}(N^{m}_{na }N^{n}_{mb}-N^{m}_{ma}N^{n}_{nb})+\Omega f_{ab}(-f)\.\] If we multiply this equation with \(f^{ab}\) and use the gauge fixing function \({\cal F}\) we obtain \[0=\partial_{c}N^{c}_{ab}f^{ab}+\Lambda_{c}N^{c}_{ab}f^{ab}+\frac {16\pi}{\omega^{\frac{n-2}{n}}}(-f)^{\frac{1}{n}}(N^{m}_{cd}f^{dn}N^{c}_{mn}- N^{m}_{ms}f^{sr}N^{n}_{nr})+\] \[+\frac{16\pi}{\omega^{\frac{n-2}{n}}}(-f)^{\frac{1}{n}}(N^{m}_{ na}f^{ab}N^{n}_{mb}-N^{m}_{ma}f^{ab}N^{n}_{nb})+\Omega n(-f)\.\] However if we combine this equation with \[0=N^{c}_{ab}\partial_{c}f^{ab}-\Lambda_{c}N^{c}_{ab}f^{ab}-\frac{32\pi}{ \omega^{\frac{n-2}{n}}}(-f)^{\frac{1}{n}}\left(f^{db}N^{a}_{cd}N^{c}_{ab}-N^{ c}_{cb}f^{bs}N^{m}_{ms}\right)\, \tag{40}\] we get \[0=\partial_{c}{\cal G}^{c}+\Omega nK \tag{41}\] so that the requirement of the preservation of the constraint \({\cal G}^{c}\approx 0\) implies \(\Omega=0\). In summary, the gauge fixed equations of motion have the form \[0=\partial_{c}N^{c}_{ab}+\frac{16\pi}{n\omega^{\frac{n-2}{n}}}(- f)^{\frac{1}{n}}f_{ab}(N^{x}_{cd}f^{dy}N^{c}_{xy}-N^{m}_{ms}f^{sr}N^{n}_{nr})+\] \[+\frac{16\pi}{\omega^{\frac{n-2}{n}}}(-f)^{\frac{1}{n}}(N^{m}_{ na}N^{n}_{mb}-N^{m}_{ma}N^{n}_{nb})\,\] \[\partial_{c}f^{ab}-\frac{16\pi}{\omega^{\frac{n-2}{n}}}(-f)^{\frac {1}{n}}\left(f^{db}N^{a}_{cd}+f^{da}N^{b}_{cd}-\delta^{a}_{c}f^{bs}N^{m}_{ms}- \delta^{b}_{c}f^{as}N^{m}_{ms}\right)=0\.\] These equations of motion should correspond to the equations of motion of unimodular gravity that were derived recently in [27]. Unfortunately these two set of equations do not agree. In more details, we showed in [27] that consistency of the unimodular gravity in covariant formalism implies the presence of the secondary constraint \(N^{r}_{ra}=0\) while in Weyl gravity there is a primary constraint \(N^{c}_{ab}f^{ab}=0\). Further, the way how we determined Lagrange multiplier in [27] is not exactly in the spirit of the analysis of the constraint systems due to the fact that in the covariant canonical formalism it is not possible to solve equations \(\partial_{c}N^{d}_{ab}\) since the equations of motion of covariant canonical formalism determine \(\partial_{c}N^{c}_{ab}\) only. It is possible that the proper treatment of this problem could be in the powerful method developed by Kanatchikov in [28, 30]. We hope to return to this analysis in near future. ### Relation Between Surface and Bulk Lagrangians So far, we were concerned with the bulk part of Lagrangian. Now, we will focus on the surface part and find whether, there is a connexion between it and the bulk part. Such connexion can be found for Lanczos-Lovelock models as shown in [21]. In \(F(R)-\)Gravity, the connexion is not present [31]. Let us start with the boundary Lagrangian, which has the form of \[\mathcal{L}_{bound}=\frac{\omega}{16\pi}\partial_{n}\left(\tilde{g}^{ml} \tilde{\Gamma}^{n}_{ml}\right)\;. \tag{43}\] We would like to connect it with the canonical momentum. From (19) we find its contracted form as \[N^{k}_{ka}=\frac{\omega^{\frac{n-2}{n}}}{16\pi n(-f)^{\frac{1}{n}}}\tilde{ \Gamma}^{k}_{rs}\tilde{g}^{rs}\tilde{g}_{ak}\;, \tag{44}\] there is clearly visible the similarity between the surface Lagrangian and the contracted momentum. With little care one easily arrives to the relation \[\mathcal{L}_{bound}=\partial_{b}\left(nN^{k}_{ka}f^{ab}\right)\;. \tag{45}\] We will discuss this relation in more details in the next section. Covariant Canonical Formalism for Weyl Gravity Formulated without Auxiliary Metric In this section we develop covariant Hamiltonian formalism for Weyl gravity that is formulated using the physical metric \(g_{mn}\) instead of the metric \(\tilde{g}_{mn}\). To do this we review basic facts about Weyl transformed metric in \(n\) dimensions \[g^{\prime}_{ij}=\Omega g_{ij}\, \tag{46}\] where \(\Omega\) is general function of space time. It is easy to see that under this transformation Ricci scalar \(R^{\prime}(g^{\prime})\) is related to \(R(g)\) through following formula \[R^{\prime}=\frac{1}{\Omega}R+\frac{(1-n)}{\Omega}\left(-\frac{1} {\Omega^{2}}\partial_{i}\Omega g^{ij}\partial_{j}\Omega+\frac{1}{\Omega}\frac {1}{\sqrt{-g}}\partial_{i}[\sqrt{-g}g^{ij}\partial_{j}\Omega]\right)\] \[+\frac{1}{4\Omega^{3}}(n-2)(1-n)\partial_{i}\Omega g^{ij}\partial _{j}\Omega. \tag{47}\] In case of Weyl gravity we have \(\Omega=(\frac{\omega^{2}}{-\det g})^{\frac{1}{n}}\) so that \[\partial_{i}\Omega=\frac{1}{n}\Omega\frac{\partial_{i}g}{-g}. \tag{48}\] Then using (47) we get \[R^{\prime}=\frac{1}{\Omega}\left(R+\frac{(1-n)}{4n^{2}g^{2}}(5n-2)\partial_{i }gg^{ij}\partial_{j}g+\frac{n-1}{ng}\frac{1}{\sqrt{-g}}\partial_{i}[\sqrt{-g}g ^{ij}\partial_{j}g]\right)\,, \tag{49}\] so that the action has the form 4 Footnote 4: This action was analysed recently in [9, 29]. \[S=\frac{1}{16\pi\omega^{\frac{n-2}{n}}}\int d^{n}x(-g)^{\frac{1 }{n}}\left[R+\frac{(1-n)}{4n^{2}g^{2}}(5n-2)\partial_{i}gg^{ij}\partial_{j}g+\right.\] \[\left.+\frac{n-1}{ng}\frac{1}{\sqrt{-g}}\partial_{i}[\sqrt{-g}g^{ ij}\partial_{j}g]\right]\.\] Now we would like to express this action in the form that is suitable for covariant canonical formalism. First of all we use the fact that \(\nabla_{i}g_{kl}=0\) that implies \[\partial_{i}g_{kl}=\Gamma^{m}_{ik}g_{ml}+\Gamma^{m}_{il}g_{mk}\;, \tag{51}\] that multiplied with \(g^{kl}\) gives \[\partial_{i}g=2\Gamma^{k}_{ik}g. \tag{52}\] Using this result we find \[(-g)^{\frac{1}{n}}\left(\frac{(1-n)}{4n^{2}g^{2}}(5n-2)\partial_{i }gg^{ij}\partial_{j}g+\frac{n-1}{ng}\frac{1}{\sqrt{-g}}\partial_{i}[\sqrt{-g} g^{ij}\partial_{j}g]\right)=\] \[=(-g)^{\frac{1}{n}}\frac{(1-n)(2-n)}{n^{2}}\Gamma^{m}_{mi}g^{ij} \Gamma^{n}_{nj}+\frac{2(n-1)}{n}\partial_{i}[(-g)^{\frac{1}{n}}g^{ij}\Gamma^{ k}_{kj}]\.\] Now we return to the first term in the action (50) and perform the same manipulation as in previous section to obtain \[(-g)^{\frac{1}{n}}R=(-g)^{\frac{1}{n}}Q_{k}^{\;\;mnl}R^{k}_{\;\;mnl }=2(-g)^{\frac{1}{n}}Q_{k}^{\;\;mnl}[\partial_{n}\Gamma^{k}_{lm}+\Gamma^{k}_{ np}\Gamma^{p}_{lm}]=\] \[=(-g)^{\frac{1}{n}}\left(\Gamma^{m}_{nk}g^{kl}\Gamma^{n}_{lm}-(1- \frac{2}{n})\Gamma^{n}_{nk}g^{km}\Gamma^{l}_{lm}-\frac{2}{n}\Gamma^{m}_{nk}g^ {kn}\Gamma^{l}_{lm}\right)+\] \[+2\partial_{n}[(-g)^{\frac{1}{n}}Q_{k}^{\;\;mnl}\Gamma^{k}_{lm}]\,\] where \[R^{k}_{\;\;mnl} =\partial_{n}\Gamma^{k}_{lm}-\partial_{l}\Gamma^{k}_{nm}+\Gamma^{ k}_{np}\Gamma^{p}_{lm}-\Gamma^{k}_{lp}\Gamma^{p}_{mn}\,\] \[Q_{k}^{\;\;mnl} =\frac{1}{2}(g^{ml}\delta^{n}_{k}-g^{mn}\delta^{l}_{k})\,\] and we used the fact that \[\partial_{i}(-g)^{\frac{1}{n}}=\frac{2}{n}\Gamma^{k}_{ki}(-g)^{\frac{1}{n}}\.\] If we then combine (53) with (54) we find that the action (50) can be written as \[S=\frac{1}{16\pi}\int d^{n}x(-g)^{\frac{1}{n}}(\Gamma^{m}_{nk}g^{ kl}\Gamma^{n}_{lm}+\frac{2-n}{n^{2}}\Gamma^{n}_{nk}g^{km}\Gamma^{l}_{lm}- \frac{2}{n}\Gamma^{m}_{nk}g^{kn}\Gamma^{l}_{lm})+\] \[+\frac{1}{16\pi}\int d^{n}x\partial_{n}[(-g)^{\frac{1}{n}}(g^{lm} \Gamma^{n}_{lm}+\frac{(n-2)}{n}g^{nm}\Gamma^{l}_{lm})]\equiv\] \[\equiv\int d^{n}x({\cal L}_{bulk}+{\cal L}_{bound})\,\] where for simplicity we set \(\omega=1\). Now we are ready to find conjugate momenta. We firstly define \(M^{kmn}\) as \[M^{kmn}=\frac{\partial{\cal L}_{bulk}}{\partial(\partial_{k}g_{ mn})}=\frac{1}{16\pi}(-g)^{\frac{1}{n}}\left[g^{mt}\Gamma^{k}_{tp}g^{pn}+ \frac{(2-n)}{n^{2}}g^{kp}\Gamma^{s}_{sp}g^{mn}-\right.\] \[-\left.\frac{1}{n}\Gamma^{k}_{st}g^{st}g^{mn}-\frac{1}{n}(g^{kn} g^{mr}\Gamma^{p}_{pr}+g^{nr}g^{mk}\Gamma^{p}_{pr}-g^{kr}\Gamma^{p}_{pr}g^{mn}) \right]\.\] Now using (58) we obtain \[M^{kmn}g_{mn}=\frac{1}{16\pi}(-g)^{\frac{1}{n}}(g^{pt}\Gamma^{k}_ {tp}+\frac{(2-n)}{n}g^{kp}\Gamma^{s}_{sp}-\] \[\left.-\Gamma^{k}_{st}g^{st}-\frac{1}{n}(2g^{kn}g^{mr}\Gamma^{p}_{ pr}g_{mn}-ng^{kr}\Gamma^{p}_{pr})=0\right., \tag{59}\] that implies an existence of primary constraints \(M^{kmn}g_{mn}\approx 0\). Then clearly it is possible to find covariant canonical formulation of this theory with canonical variables \(g_{mn}\) and \(M^{mn}\). However we rather introduce variable \(f^{ab}\) as in previous section where now we will be more general and consider following definition \[f^{ab}=(-g)^{\alpha}g^{ab}\, \tag{60}\] where \(\alpha\) is arbitrary number. Then conjugate momenta \(N^{c}_{ab}\) are defined as \[N^{c}_{ab}=\frac{\partial{\cal L}_{quad}}{\partial(\partial_{c}f^{ ab})}=\frac{\partial{\cal L}_{quad}}{\partial(\partial_{k}g_{mn})}\frac{ \partial(\partial_{k}g_{mn})}{\partial(\partial_{c}f^{ab})}=\] \[=M^{kmn}\delta^{c}_{k}\frac{\delta g_{mn}}{\delta f^{ab}}=M^{kmn} \delta^{c}_{k}(-g_{mr}\frac{\delta g^{rs}}{\delta f^{ab}}g_{sn})=\] \[=-M^{cmn}g_{mr}g_{ns}(\frac{1}{2}(\delta^{r}_{a}\delta^{s}_{b}+ \delta^{r}_{b}\delta^{s}_{a})-\frac{\alpha}{n\alpha-1}f^{rs}f_{ab})(-f)^{- \frac{\alpha}{n\alpha-1}}=\] \[=-\frac{1}{16\pi}(-f)^{-\frac{1}{n}}[\Gamma^{c}_{ab}-\frac{1}{n} \Gamma^{c}_{mn}f^{mn}f_{ab}+\frac{2}{n^{2}}f^{cm}\Gamma^{p}_{pm}f_{ab}-\frac{ 1}{n}(\delta^{c}_{a}\Gamma^{p}_{pb}+\delta^{c}_{b}\Gamma^{p}_{pa})]\;, \tag{61}\] using the fact that \[(-g)=(-f)^{\frac{1}{n\alpha-1}}\,g^{ab}=f^{ab}(-f)^{-\frac{\alpha}{n\alpha -1}}\;, \tag{62}\] that also implies \[\frac{\delta g^{rs}}{\delta f^{ab}}=\left(\frac{1}{2}(\delta^{r}_{a}\delta^{s }_{b}+\delta^{r}_{b}\delta^{s}_{a})-\frac{\alpha}{n\alpha-1}f^{rs}f_{ab} \right)(-f)^{-\frac{\alpha}{n\alpha-1}}. \tag{63}\] It is remarkable that the conjugate momentum \(N^{c}_{ab}\) does not depend on \(\alpha\). Further, from (61) we get \[N^{c}_{ab}f^{ab}=-\frac{1}{16\pi}(-f)^{-\frac{1}{n}}[\Gamma^{c}_{ab}f^{ab}- \frac{1}{n}\Gamma^{c}_{mn}f^{mn}n+\frac{2}{n^{2}}f^{cm}\Gamma^{p}_{pm}n-\frac {2}{n}f^{cb}\Gamma^{p}_{pb}]=0\;, \tag{64}\] that implies set of primary constraints \[{\cal G}^{c}\equiv N^{c}_{ab}f^{ab}\approx 0\, \tag{65}\] which are the same as in previous section. Then the bare Hamiltonian is equal to \[{\cal H}_{B}=\partial_{c}f^{ab}N^{c}_{ab}-{\cal L}_{bulk}=-\partial_{c}f^{ab }M^{cmn}g_{mr}\frac{\delta g^{rs}}{\delta f^{ab}}g_{ns}-{\cal L}_{bulk}=\] \[=\partial_{k}g_{mn}M^{kmn}-{\cal L}_{bulk}=(\Gamma^{p}_{km}g_{pn}+ \Gamma^{p}_{kn}g_{nm})M^{kmn}-{\cal L}_{bulk}=\] \[=\frac{1}{16\pi}(-g)^{\frac{1}{n}}(\Gamma^{m}_{nk}g^{kl}\Gamma^{n} _{lm}+\frac{2-n}{n^{2}}\Gamma^{n}_{nk}g^{km}\Gamma^{l}_{lm}-\frac{2}{n}\Gamma^ {m}_{nk}g^{kn}\Gamma^{l}_{lm}). \tag{66}\] In order to find Hamiltonian as function of canonical variables we again calculate \[N^{c}_{ab}g^{ad}N^{b}_{cd}={\bf A}^{2}[\Gamma^{c}_{ab}g^{ad}\Gamma^ {b}_{c}+\frac{-2n^{2}+2n-4}{n^{3}}\Gamma^{m}_{mr}\Gamma^{r}_{ts}g^{gs}+\] \[+\frac{(3n^{2}-n^{3}-4n+4)}{n^{4}}\Gamma^{m}_{ma}g^{ab}\Gamma^{n} _{nb}+\frac{1}{n^{2}}\Gamma^{c}_{mn}g^{mn}g_{cb}\Gamma^{b}_{pq}g^{pq}]\;, \tag{67}\] where \({\bf A}=-\frac{1}{16\pi}(-f)^{-\frac{1}{n}}\). We further have \[N^{r}_{ra}g^{ab}N^{t}_{tb}=\] \[={\bf A}^{2}[\frac{(2-n)^{2}}{n^{4}}\Gamma^{p}_{pa}g^{ab}\Gamma^{ s}_{sb}-2\frac{(2-n)}{n^{3}}\Gamma^{p}_{pa}\Gamma^{a}_{rs}g^{rs}+\frac{1}{n^{2}} \Gamma^{a}_{st}g^{st}g_{ab}\Gamma^{b}_{mn}g^{mn}]\;. \tag{68}\] Collecting these terms together we find final form of the bare Hamiltonian \[{\cal H}_{B}=16\pi(-f)^{\frac{1}{n}}[N^{c}_{ab}f^{ad}N^{b}_{cd}-N^{r}_{ra}f^{ ab}N^{t}_{tb}]\;, \tag{69}\] which has exactly the same form as the Hamiltonian density derived in previous section. We would like however stress one important point which is the fact that we used generalized form of the variable \(f^{ab}=(-g)^{\alpha}g^{ab}\) and that the theory does not depend on \(\alpha\) at all. Finally we return to the boundary term that is equal to \[{\cal L}_{bound}=\frac{1}{16\pi}\partial_{n}[(-g)^{\frac{1}{n}}(g^{lm}\Gamma^ {n}_{lm}+\frac{(n-2)}{n}g^{nm}\Gamma^{l}_{lm})]\;. \tag{70}\] Since \(N^{r}_{ra}\) is equal to \[N^{r}_{ra}=\frac{1}{16\pi}(-f)^{-\frac{1}{n}}[\frac{(n-2)}{n^{2}}\Gamma^{p}_{ pa}+\frac{1}{n}\Gamma^{c}_{mn}f^{mn}f_{ca}]\;, \tag{71}\] we again find that the surface term has the form \[{\cal L}_{bound}=n\partial_{n}[f^{nm}N^{r}_{rm}]\;, \tag{72}\] that agrees with the result derived in previous section. Further, this expression can be written as \[{\cal L}_{bound}=n\partial_{n}\left[f^{nm}\delta^{r}_{c}\frac{\partial{\cal L }_{bulk}}{\partial(\partial_{c}f^{rm})}\right]\;, \tag{73}\] that has the form of holographic relation between bulk and boundary action with agreement with the general discussion presented in [21]. **Acknowledgement:** The work of JK is supported by the grant "Dualitites and higher order derivatives" (GA23-06498S) from the Czech Science Foundation (GACR).
2309.12024
Topological degree for Chern-Simons Higgs models on finite graphs
Let $(V,E)$ be a finite connected graph. We are concerned about the Chern-Simons Higgs model $$\Delta u=\lambda e^u(e^u-1)+f, \quad\quad\quad\quad\quad\quad{(0.1)}$$ where $\Delta$ is the graph Laplacian, $\lambda$ is a real number and $f$ is a function on $V$. When $\lambda>0$ and $f=4\pi\sum_{i=1}^N\delta_{p_i}$, $N\in\mathbb{N}$, $p_1,\cdots,p_N\in V$, the equation (0.1) was investigated by Huang, Lin, Yau (Commun. Math. Phys. 377 (2020) 613-621) and Hou, Sun (Calc. Var. 61 (2022) 139) via the upper and lower solutions principle. We now consider an arbitrary real number $\lambda$ and a general function $f$, whose integral mean is denoted by $\overline{f}$, and prove that when $\lambda\overline{f}<0$, the equation $(0.1)$ has a solution; when $\lambda\overline{f}>0$, there exist two critical numbers $\Lambda^\ast>0$ and $\Lambda_\ast<0$ such that if $\lambda\in(\Lambda^\ast,+\infty)\cup(-\infty,\Lambda_\ast)$, then $(0.1)$ has at least two solutions, including one local minimum solution; if $\lambda\in(0,\Lambda^\ast)\cup(\Lambda_\ast,0)$, then $(0.1)$ has no solution; while if $\lambda=\Lambda^\ast$ or $\Lambda_\ast$, then $(0.1)$ has at least one solution. Our method is calculating the topological degree and using the relation between the degree and the critical group of a related functional. Similar method is also applied to the Chern-Simons Higgs system, and a partial result for the multiple solutions of the system is obtained.
Jiayu Li, Linlin Sun, Yunyan Yang
2023-09-21T12:42:44Z
http://arxiv.org/abs/2309.12024v1
# Topological degree for Chern-Simons Higgs models on finite graphs 1 ###### Abstract Let \((V,E)\) be a finite connected graph. We are concerned about the Chern-Simons Higgs model \[\Delta u=\lambda e^{u}(e^{u}-1)+f, \tag{1}\] where \(\Delta\) is the graph Laplacian, \(\lambda\) is a real number and \(f\) is a function on \(V\). When \(\lambda>0\) and \(f=4\pi\sum_{i=1}^{N}\delta_{p_{i}}\), \(N\in\mathbb{N}\), \(p_{1},\cdots,p_{N}\in V\), the equation (1) was investigated by Huang, Lin, Yau (Commun. Math. Phys. 377 (2020) 613-621) and Hou, Sun (Calc. Var. 61 (2022) 139) via the upper and lower solutions principle. We now consider an arbitrary real number \(\lambda\) and a general function \(f\), whose integral mean is denoted by \(\overline{f}\), and prove that when \(\lambda\overline{f}<0\), the equation (1) has a solution; when \(\lambda\overline{f}>0\), there exist two critical numbers \(\Lambda^{*}>0\) and \(\Lambda_{*}<0\) such that if \(\lambda\in(\Lambda^{*},+\infty)\cup(-\infty,\Lambda_{*})\), then (1) has at least two solutions, including one local minimum solution; if \(\lambda\in(0,\Lambda^{*})\cup(\Lambda_{*},0)\), then (1) has no solution; while if \(\lambda=\Lambda^{*}\) or \(\Lambda_{*}\), then (1) has at least one solution. Our method is calculating the topological degree and using the relation between the degree and the critical group of a related functional. Similar method is also applied to the Chern-Simons Higgs system, and a partial result for the multiple solutions of the system is obtained. keywords: Topological degree, Chern-Simons Higgs mode, Finite graph Msc: [2020] 39A12, 46E39 + Footnote †: journal: ## 1 Introduction The Chern-Simons Higgs model, introduced by Hong, Kim, Pac [19] and Jackiw, Weinberg [27], has always attracted the attention of many mathematicians in the fields of geometry and physics, see for examples [2; 3; 9; 10; 31; 40; 41; 43; 46]. Among many versions, the self-dual Chern-Simons Higgs vortex equation on a flat 2-torus \(\Sigma\) can be written as \[\Delta u=\frac{4}{k^{2}}e^{u}(e^{u}-1)+4\pi\sum_{i=1}^{k_{0}}m_{i}\delta_{p_{ i}}, \tag{2}\] where \(k>0\) is the Chern-Simons constant, \(m_{i}\in\mathbb{N}\), \(p_{i}\in\Sigma\), \(i=1,\cdots,k_{0}\). The solution of the above equation is called a vertex solution, each \(p_{i}\) is called a vertex point, and \(m_{i}\) stands for the multiplicity of \(p_{i}\). From the view of physics, the vortex points are closely related to the local maximum point of the magnetic flux in the Chern-Simons Higgs model. Let \(u_{0}\) be a solution of \[\left\{\begin{array}{l}\Delta u_{0}=-\frac{4\pi N}{|\Sigma|}+4\pi\sum_{i=1}^{k_ {0}}m_{i}\delta_{p_{i}}\\ \int_{\Sigma}u_{0}dv_{g}=0,\end{array}\right.\] where \(N=\sum_{i=1}^{k_{0}}m_{i}\). Set \(v=u-u_{0}\). Then (1) can be written in a more favourable form \[\Delta v=\lambda he^{v}(he^{v}-1)+\frac{4\pi N}{|\Sigma|}, \tag{2}\] where \(\lambda=\frac{4}{k^{2}}\) and \(h=e^{u_{0}}\) is a positive function on \(\Sigma\). A solution \(v\) of (2) is called of finite energy if \(v\in W^{1,2}(\Sigma)\), a usual Sobolev space. Indeed, it is known that the corresponding physical energy of the solution \(v\) is finite if \(u\in W^{1,2}(\Sigma)\). Thus, solutions of finite energy are physically meaningful in (2) and there have been many existence results for \(W^{1,2}(\Sigma)\) solutions of (2), see [2, 8, 9, 40, 41, 43, 44, 45] and the references therein. By using the principle of upper and lower solutions, Caffarelli and Yang constructed a maximal solution. In addition to the above references, [10, 29] also indicated that the equation (2) admits a variational structure. Different from the theoretical significance on Riemann surfaces, the analysis on graphs is very important for applications, such as image processing, data mining, network and so on. Among lots of directions, partial differential equations arising in geometry or physics are worth studying on graphs. Various equations, including the heat equation [20, 26, 32, 33], the Fokker-Planck and Schrodinger equations [6, 7], have been studied by many mathematicians. In particular, Grigor'yan, Lin and Yang [13, 14, 15] studied the existence of solutions for a series of nonlinear elliptic equations on graphs by using the variational methods. In this direction, Zhang, Zhao, Han and Shao [17, 18, 49] obtained nontrivial solutions to nonlinear Schrodinger equations with potential wells. Similar problems on infinite metric graphs were studied by Akduman-Pankov [1]. The Kazdan-Warner equation was extended by Keller-Schwarz [28] to canonically compactifiable graphs. Semi-linear heat equations on locally finite graphs were studied by Ge, Jiang, Lin and Wu [12, 32, 33]. For other related works, we refer the readers to [11, 16, 22, 23, 34, 35, 36, 38, 39, 47, 48, 50] and the references therein. To describe the Chern-Simons Higgs model in the graph setting, we introduce some notations. Let \((V,E)\) be a connected finite graph, where \(V\) is the set of vertices and \(E\) is the set of edges. Let \(\mu:V\to(0,+\infty)\) and \(\{w_{xy}:xy\in E\}\) be its measure and weights respectively. The weight \(w_{xy}\) is always assumed to be positive and symmetric. The Laplacian of a function \(u:V\to\mathbb{R}\) reads as \[\Delta u(x)=\frac{1}{\mu(x)}\sum_{y\sim x}w_{xy}(u(y)-u(x)),\] where \(y\sim x\) means \(y\) is adjacent to \(x\), i.e. \(xy\in E\). The gradient of \(u\) is defined as \[\nabla u(x)=\left(\sqrt{\frac{w_{xy_{1}}}{2\mu(x)}}(u(y_{1})-u(x)),\cdots,\sqrt {\frac{w_{xy_{t_{x}}}}{2\mu(x)}}(u(y_{\ell_{x}})-u(x))\right),\] where \(\{y_{1},\cdots,y_{\ell_{x}}\}\) are all distinct points adjacent to \(x\). Clearly, such an \(\ell_{x}\) is unique and \(\nabla u(x)\in\mathbb{R}^{\ell_{x}}\). The integral of \(u\) is given by \[\int_{V}ud\mu=\sum_{x\in V}\mu(x)u(x).\] Now we consider an analog of (2) on a connected finite graph, namely \[\Delta u=\lambda e^{u}(e^{u}-1)+f\quad\mbox{in}\quad V, \tag{3}\] where \(\lambda\in\mathbb{R}\), \(f:V\rightarrow\mathbb{R}\) is a function. It was proved by Huang, Lin and Yau [24] that if \(\lambda>0\) and \(f=4\pi\sum_{i=1}^{N}\delta_{p_{i}}\), there exists a critical number \(\lambda^{*}>0\) such that (3) has a solution when \(\lambda>\lambda^{*}\), while (3) has no solution when \(0<\lambda<\lambda^{*}\). The critical case \(\lambda=\lambda^{*}\) was solved by Hou and Sun [21], who proved that (3) has also a solution. Such results are essentially based on the method of upper and lower solutions principle. This together with variational method may lead to existence results for other forms of Chern-Simons Higgs models, see Chao and Hou [5]. Recently, a more delicate analysis was employed by Huang, Wang and Yang [25] to get existence of solutions of the Chern-Simons Higgs system. Topological degree theory is a powerful tool in studying partial differential equations in the Euclidean space or Riemann surfaces, see for example Li [30]. It was first used by Sun and Wang [42] to solve the Kazdan-Warner equation on finite graphs. Very recently, it was also employed by Liu [37] to deal with the mean field equation. Our aim is to use this powerful tool to study the Chern-Simons Higgs model. The first and most important step is to get a priori estimate for solutions, say **Theorem 1**.: _Let \((V,E)\) be a connected finite graph with symmetric weights, i.e. \(w_{xy}=w_{yx}\) for all \(xy\in E\). Let \(\sigma\in[0,1]\), \(\lambda\) and \(f\) satisfy_ \[\Lambda^{-1}\leq|\lambda|\leq\Lambda,\ \ \Lambda^{-1}\leq\left|\int_{V}fd \mu\right|\leq\Lambda,\ \ \|f\|_{L^{\infty}(V)}\leq\Lambda \tag{4}\] _for some real number \(\Lambda>0\). If \(u\) is a solution of_ \[\Delta u=\lambda e^{u}(e^{u}-\sigma)+f\quad\text{in}\quad V, \tag{5}\] _then there exists a constant \(C\), depending only on \(\Lambda\) and the graph \(V\), such that \(|u(x)|\leq C\) for all \(x\in V\)._ When \(\sigma=1\), the equation (5) is exactly (3). In the case \(\lambda>0\) and \(f=4\pi\sum_{i=1}^{N}\delta_{p_{i}}\), where \(p_{1},\cdots,p_{N}\in V\) and \(N\in\mathbb{N}\), let \(\lambda^{*}\) be the critical number in [24]. Then for any \(\lambda_{k}>\lambda^{*}\) with \(\lambda_{k}\rightarrow\lambda^{*}\) as \(k\rightarrow\infty\), there exists a solution \(u_{\lambda_{k}}\) of (3) with \(\lambda=\lambda_{k}\), \(k=1,2,\cdots\). It follows from Theorem 1 that \((u_{\lambda_{k}})\) is uniformly bounded in \(V\). Hence up to a subsequence, \((u_{\lambda_{k}})\) uniformly converges to some \(u^{*}\), which is a solution of (3) with \(\lambda=\lambda^{*}\). This gives another proof of a result of Hou and Sun [21]. Denote \(X=L^{\infty}(V)\) and define a map \(F:X\to X\) by \[F(u)=-\Delta u+\lambda e^{u}(e^{u}-1)+f. \tag{6}\] The second step is to calculate the topological degree of \(F\) by using its homotopic invariance property. **Theorem 2**.: _Let \((V,E)\) be a connected finite graph with symmetric weights, and \(F:X\to X\) be a map defined by (6). Suppose that \(\lambda\int_{V}fd\mu\neq 0\). Then there exists a large number \(R_{0}>0\) such that for all \(R\geq R_{0}\),_ \[\deg(F,B_{R},0)=\left\{\begin{array}{lcl}1&\mathrm{if}&\lambda>0,\,\int_{V} fd\mu<0\\ 0&\mathrm{if}&\lambda\int_{V}fd\mu>0\\ -1&\mathrm{if}&\lambda<0,\,\int_{V}fd\mu>0,\end{array}\right.\] _where \(B_{R}=\{u\in X:\|u\|_{L^{\infty}(V)}<R\}\) is a ball in \(X\)._ As an application of the above topological degree, our existence results for the Chern-Simons Higgs model read as follows: **Theorem 3**.: _Let \((V,E)\) be a connected finite graph with symmetric weights. Then we have the following:_ (a) _If \(\lambda\int_{V}fd\mu<0\), then the equation (3) has a solution;_ (b) _If \(\lambda\int_{V}fd\mu>0\), then two subcases are distinguished:_ (i) \(\int_{V}fd\mu>0\). There exists a real number \(\Lambda^{*}>0\) such that when \(\lambda>\Lambda^{*}\), (3) has at least two different solutions; when \(0<\lambda<\Lambda^{*}\), (3) has no solution; when \(\lambda=\Lambda^{*}\), (3) has at least one solution;_ (ii) \(\int_{V}fd\mu<0\)_. There exists a real number \(\Lambda_{*}<0\) _such that when \(\lambda<\Lambda_{*}\), (3) has at least two different solutions; when \(\Lambda_{*}<\lambda<0\), (3) has no solution; when \(\lambda=\Lambda_{*}\), (3) has at least one solution._ We remark that Case (b) (\(i\)) includes \(\lambda>0\) and \(f=4\pi\sum_{i=1}^{N}\delta_{p_{i}}\) as a special case, which was studied in [5, 21, 24, 25]. In the subcase \(\lambda>\Lambda^{*}>0\) or \(\lambda<\Lambda_{*}<0\), we shall construct a local minimum solution, and then use the topological degree to obtain the existence of another solution. Our arguments are essentially different from those in [5, 25, 36]. Note that a solution of (3) is a critical point of the functional \(J_{\lambda}:X\rightarrow\mathbb{R}\) defined by \[J_{\lambda}(u)=\frac{1}{2}\int_{V}|\nabla u|^{2}d\mu+\frac{\lambda}{2}\int_{V} (e^{\mu}-1)^{2}d\mu+\int_{V}fud\mu. \tag{7}\] Here a local minimum solution of (3) means a local minimum critical point of \(J_{\lambda}\). Also we consider the Chern-Simons Higgs system \[\left\{\begin{array}{l}\Delta u=\lambda e^{u}(e^{u}-1)+f\\ \Delta v=\lambda e^{u}(e^{v}-1)+g,\end{array}\right. \tag{8}\] where \(\lambda\) is a real number, and \(f,g\) are functions on \(V\). Similar to the single equation, we need also a priori estimate. **Theorem 4**.: _Let \((V,E)\) be a connected finite graph with symmetric weights. Suppose that \(\sigma\in[0,1]\), \(\lambda,\eta\) are two positive real numbers, \(f,g\) are two functions verifying that \(\int_{V}fd\mu>0\) and \(\int_{V}gd\mu>0\). If \((u,v)\) is a solution of the system_ \[\left\{\begin{array}{l}\Delta u=\lambda e^{v}(e^{u}-\sigma)+f\\ \Delta v=\eta e^{u}(e^{v}-\sigma)+g,\end{array}\right. \tag{9}\] _then there exists a constant \(C\), depending only on \(\lambda,\eta,f,g\) and the graph \(V\), such that_ \[\|u\|_{L^{\infty}(V)}+\|v\|_{L^{\infty}(V)}\leq C.\] To compute the topological degree, we define a map \(\mathcal{F}:X\times X\to X\times X\) by \[\mathcal{F}(u,v)=(-\Delta u+\lambda e^{v}(e^{u}-1)+f,-\Delta v+\eta e^{u}(e^{ v}-1)+g). \tag{10}\] **Theorem 5**.: _Let \((V,E)\) be a connected finite graph with symmetric weights, and \(\mathcal{F}\) be a map defined by (10). If \(\lambda>0\), \(\eta>0\), \(\int_{V}fd\mu>0\) and \(\int_{V}gd\mu>0\), then there exists a large number \(R_{0}>0\) such that for all \(R\geq R_{0}\),_ \[\deg(\mathcal{F},B_{R},(0,0))=0,\] _where \(B_{R}=\{(u,v)\in X\times X:\|u\|_{L^{\infty}(V)}+\|v\|_{L^{\infty}(V)}<R\}\) is a ball in \(X\times X\)._ Define a functional \(\mathcal{J}_{\lambda}:X\times X\rightarrow\mathbb{R}\) by \[\mathcal{J}_{\lambda}(u,v)=\int_{V}\nabla u\nabla vd\mu+\lambda\int_{V}(e^{u} -1)(e^{v}-1)d\mu+\int_{V}(fv+gu)d\mu. \tag{11}\] Note that for all \((\phi,\psi)\in X\times X\), \[\langle\mathcal{J}^{r}_{\lambda}(u,v),(\phi,\psi)\rangle = \left.\frac{d}{dt}\right|_{t=0}\mathcal{J}(u+t\phi,v+t\psi) \tag{12}\] \[= \int_{V}\left\{(-\Delta v+\lambda e^{u}(e^{v}-1)+g)\,\phi+(- \Delta u+\lambda e^{v}(e^{u}-1)+f)\,\psi\right\}d\mu.\] Clearly \((u,v)\) is a critical point of \(\mathcal{J}_{\lambda}\) if and only if it is a solution of the system (8). As a consequence of Theorem 5, we have the following **Theorem 6**.: _Let \((V,E)\) be a connected finite graph with symmetric weights, \(\lambda>0\), \(\int_{V}fd\mu>0\), \(\int_{V}gd\mu>0\), and \(\mathcal{J}_{\lambda}\) be a functional defined by (11). If either \(\mathcal{J}_{\lambda}\) has a non-degenerate critical point, or \(\mathcal{J}_{\lambda}\) has a local minimum critical point, then it must have another critical point._ It should be remarked that Theorem 6 gives another solution of (8) under the condition that \(\mathcal{J}_{\lambda}\) has a non-degenerate or a local minimum critical point beforehand. So it is only a partial result for the problem of multiple solutions of the system (8). The remaining part of this paper is organized as follows: In Section 2, we give a priori estimate for solutions of (3) (Theorem 1); The topological degree of \(F:X\to X\) (Theorem 2) was calculated in Section 3; In Section 4, we prove the existence result (Theorem 3); The priori estimate and existence of solutions of the Chern-Simons Higgs system (Theorems 4-6) are discussed in Section 5. ## 2 A priori estimate In this section, we shall prove Theorem 1. In order to provide readers with a clear understanding of the proof, we demonstrate the entire process from simple cases to complex cases. Precisely the proof will be divided into several lemmas as below. The first priori estimate is for fixed \(\lambda\) and \(f\). **Lemma 7**.: _Suppose that \(u\) is a solution of (3), where \(\lambda\neq 0\) and \(\int_{V}fd\mu\neq 0\). Then there exists a constant \(C\), depending only on \(\lambda\), \(f\) and the graph \(V\), such that \(|u(x)|\leq C\) for all \(x\in V\)._ Proof.: If \(u\) is a solution of (3), then integration by parts gives \[0=\int_{V}\Delta ud\mu=\lambda\int_{V}e^{u}(e^{u}-1)d\mu+\int_{V}fd\mu. \tag{13}\] Firstly, we show that \(u\) has a uniform upper bound. With no loss of generality, we may assume \(\max_{V}u>0\). For otherwise, \(u\) has already upper bound \(0\). Observing \[\left|\int_{u<0}e^{u}(e^{u}-1)d\mu\right|\leq\frac{1}{4}|V|,\] we derive from (13) that \[\int_{u\geq 0}e^{u}(e^{u}-1)d\mu\leq a:=\frac{1}{4}|V|+\frac{1}{|\lambda|} \left|\int_{V}fd\mu\right|.\] This together with the fact \[\int_{u\geq 0}e^{u}(e^{u}-1)d\mu=\sum_{x\in V,u(x)\geq 0}\mu(x)e^{u(x)}(e^{u (x)}-1)\geq\mu_{0}e^{\max_{V}u}(e^{\max_{V}u}-1)\] leads to \[\max_{V}u\leq\log\frac{1+\sqrt{1+4a/\mu_{0}}}{2}, \tag{14}\] where \(\mu_{0}=\min_{x\in V}\mu(x)>0\), since \(V\) is finite. Secondly, we prove that \(u\) has also a uniform lower bound. To see this, in view of (3) and (14), we calculate for any \(x\in V\), \[|\Delta u(x)| \leq |\lambda|\left[e^{u(x)}(e^{u(x)}-1)\right]+|f(x)|\] \[\leq |\lambda|(e^{2u(x)}+e^{u(x)})+|f(x)|\] \[\leq |\lambda|\left(\frac{(1+\sqrt{1+4a/\mu_{0}})^{2}}{4}+\frac{1+ \sqrt{1+4a/\mu_{0}}}{2}\right)+\|f\|_{L^{u}(V)}\] \[=: b.\] Hence, there holds \[\|\Delta u\|_{L^{u}(V)}\leq b. \tag{15}\] We may assume \(V=\{x_{1},\cdots,x_{\ell}\}\), \(u(x_{1})=\max_{V}u\), \(u(x_{\ell})=\min_{V}u\), and without loss of generality \(x_{1}x_{2},x_{2}x_{3},\cdots,x_{\ell-1}x_{\ell}\) is the shortest path connecting \(x_{1}\) and \(x_{\ell}\). It follows that \[0\leq u(x_{1})-u(x_{\ell}) \leq \sum_{j=1}^{\ell-1}|u(x_{j})-u(x_{j+1})| \tag{16}\] \[\leq \frac{\sqrt{\ell-1}}{\sqrt{w_{0}}}\left(\sum_{j=1}^{\ell-1}w_{x_ {j}x_{j+1}}(u(x_{j})-u(x_{j+1}))^{2}\right)^{1/2}\] \[\leq \frac{\sqrt{\ell-1}}{\sqrt{w_{0}}}\left(\int_{V}|\nabla u|^{2}d \mu\right)^{1/2},\] where \(w_{0}=\min_{x\in V,y\sim x}w_{xy}>0\). Denoting \(\overline{u}=\frac{1}{|V|}\int_{V}ud\mu\), we obtain by integration by parts \[\int_{V}|\nabla u|^{2}d\mu = -\int_{V}(u-\overline{u})\Delta ud\mu\] \[\leq \left(\int_{V}(u-\overline{u})^{2}d\mu\right)^{1/2}\left(\int_{V }(\Delta u)^{2}d\mu\right)^{1/2}\] \[\leq \left(\frac{1}{\lambda_{1}}\int_{V}|\nabla u|^{2}d\mu\right)^{1/ 2}\left(\int_{V}(\Delta u)^{2}d\mu\right)^{1/2},\] which gives \[\int_{V}|\nabla u|^{2}d\mu\leq\frac{1}{\lambda_{1}}\int_{V}(\Delta u)^{2}d\mu \leq\frac{1}{\lambda_{1}}\|\Delta u\|_{L^{u}(V)}^{2}|V|, \tag{17}\] where \(\lambda_{1}=\inf_{\overline{u}=0,\int_{V}z^{2}d\mu=1}\int_{V}|\nabla v|^{2}d \mu>0\). Combining (16) and (17), we conclude \[\max_{V}u-\min_{V}u\leq\sqrt{\frac{(\ell-1)|V|}{w_{0}\lambda_{1}}}\|\Delta u\| _{L^{u}(V)}. \tag{18}\] We remark that (18) holds for arbitrary function \(u\), such an inequality was obtained by Sun and Wang [42] by using the equivalence of all norms in a finite dimensional vector space, and here we give an explicit constant instead of \(C\). The power of (18) is evident. In view of (15), we have \[\max_{V}u-\min_{V}u\leq c_{0}:=b\sqrt{\frac{(\ell-1)|V|}{w_{0}\lambda_{1}}}. \tag{19}\] Coming back to (13), we have \[\int_{V}e^{u}(e^{u}-1)d\mu=c_{1}:=-\frac{1}{\lambda}\int_{V}fd\mu. \tag{20}\] By the assumptions \(\lambda\neq 0\) and \(\int_{V}fd\mu\neq 0\), we know \(c_{1}\neq 0\). Now we _claim_ that \[\max_{V}u>-A:=\log\min\left\{1,\frac{|c_{1}|}{4|V|}\right\}. \tag{21}\] For otherwise, \(\max_{V}u\leq-A\), which together with (20) implies \[|c_{1}| = \left|\int_{V}e^{u}(e^{u}-1)d\mu\right|\] \[\leq \int_{V}(e^{2u}+e^{u})d\mu\] \[\leq (e^{2\max_{V}u}+e^{\max_{V}u})|V|\] \[\leq 2e^{-A}|V|\] \[< \frac{|c_{1}|}{2}.\] This contradicts \(c_{1}\neq 0\), and thus confirms our claim (21). Inserting (21) into (19), we obtain \[-A-c_{0}\leq\min_{V}u\leq\max_{V}u\leq\log\frac{1+\sqrt{1+4a/\mu_{0}}}{2},\] as we desired. The second priori estimate is for the changing \(\lambda\) and \(f\). **Lemma 8**.: _Let \(u\) be a solution of (3). If \(\lambda\) and \(f\) satisfy (4), then there exists a constant \(C\), depending only on \(\Lambda\) and the graph \(V\), such that \(|u(x)|\leq C\) for all \(x\in V\)._ Proof.: It suffices to modify the argument in the proof of Lemma 7. Similar to (14), we first have the upper bound estimate \[\max_{V}u\leq\log\frac{1+\sqrt{1+4a/\mu_{0}}}{2}, \tag{22}\] where \(\mu_{0}=\min_{x\in V}\mu(x)\) and \(a=|V|+\Lambda^{2}\). Next, instead of (19), we have \[\max_{V}u-\min_{V}u\leq c_{0}=b\sqrt{\frac{(\ell-1)|V|}{w_{0}\lambda_{1}}}, \tag{23}\] where \(\lambda_{1}=\inf_{\gamma=0,\int_{V},v^{2}d\mu=1}\int_{V}|\nabla v|^{2}d\mu\), \(\ell\) denotes the number of all points of \(V\), \(w_{0}=\min_{x\in V,y\to x}w_{xy}\) and \[b=\Lambda\left(\frac{(1+\sqrt{1+4a/\mu_{0}})^{2}}{4}+\frac{1+\sqrt{1+4a/\mu_{0} }}{2}+1\right).\] To proceed, we shall show \[\max_{V}u>-A=\log\min\left\{1,\frac{1}{4|V|\Lambda^{2}}\right\}. \tag{24}\] Suppose not. We have \(\max_{V}u\leq-A\) and \[\frac{1}{\Lambda^{2}}\leq\left|\frac{1}{\lambda}\int_{V}fd\mu\right| = \left|\int_{V}e^{\mu}(e^{\mu}-1)d\mu\right|\] \[\leq \int_{V}(e^{2\mu}+e^{\mu})d\mu\] \[\leq 2e^{-\Lambda}|V|\] \[< \frac{1}{2\Lambda^{2}},\] which is impossible. Thus (24) holds. Combining (22), (23) and (24), we get the desired result. The third priori estimate is not only for changing \(\lambda\) and \(f\), but also for the changing parameter \(\sigma\). **Lemma 9**.: _Let \(\sigma\in[0,1]\), \(\lambda\) and \(f\) satisfy (4) for some real number \(\Lambda>0\). If \(u\) is a solution of (5), then there exists a constant \(C\), depending only on \(\Lambda\) and the graph \(V\), such that \(|u(x)|\leq C\) for all \(x\in V\)._ Proof.: If \(u\) is a solution of (5), then integration by parts gives \[0=\int_{V}\Delta ud\mu=\lambda\int_{V}e^{\mu}(e^{u}-\sigma)d\mu+\int_{V}fd\mu.\] Similar to (14), keeping in mind \(\sigma\in[0,1]\), we first have the same upper bound estimate as (22), namely \[\max_{V}u\leq\log\frac{1+\sqrt{1+4a/\mu_{0}}}{2},\] where \(\mu_{0}=\min_{x\in V}\mu(x)\) and \(a=|V|+\Lambda^{2}\). Next, we have the same estimates as (23) and (24), which is independent of the parameter \(\sigma\in[0,1]\). In particular \[\max_{V}u>-A=\log\min\left\{1,\frac{1}{4|V|\Lambda^{2}}\right\}.\] This ends the proof of the lemma, and completes the proof of Theorem 1. ## 3 Topological degree In this section, we shall prove Theorem 2. Precisely we shall compute the topological degree of certain maps related to the Chern-Simons Higgs model. _Proof of Theorem 2._ Assume \(V=\{x_{1},\cdots,x_{\ell}\}\). let \(X=L^{\infty}(V)\). We may identify \(X\) with the Euclidean space \(\mathbb{R}^{\ell}\). Without causing ambiguity, we define a map \(F:X\times[0,1]\to X\) by \[F(u,\sigma)=-\Delta u+\lambda e^{u}(e^{u}-\sigma)+f,\quad(u,\sigma)\in X\times [0,1].\] Obviously, \(F\) is a smooth map. For the fixed real number \(\lambda\) and the fixed function \(f\), since \(\lambda\overline{f}\neq 0\), there must exist a large number \(\Lambda>0\) such that \[\Lambda^{-1}\leq|\lambda|\leq\Lambda,\ \ \Lambda^{-1}\leq\left|\int_{V}fd\mu \right|\leq\Lambda,\ \ \|f\|_{L^{\infty}(V)}\leq\Lambda. \tag{25}\] Here and in the sequel, \(\overline{f}\) denotes the integral mean of a function \(f\). Then it follows from Theorem 1 that there exists a constant \(R_{0}>0\), depending only on \(\Lambda\) and the graph \(V\), such that for all \(\sigma\in[0,1]\), all solutions of \(F(u,\sigma)=0\) satisfy \(\|u\|_{L^{\infty}(V)}<R_{0}\). Denote a ball centered at \(0\in X\) with radius \(r\) by \(B_{r}\subset X\), and its boundary by \(\partial B_{r}=\{u\in X:\|u\|_{L^{\infty}(V)}=r\}\). Thus we conclude \[0\notin F(\partial B_{R},\sigma),\quad\forall\sigma\in[0,1],\ \forall R\geq R_{0}.\] By the homotopic invariance of the topological degree, we have \[\deg(F(\cdot,1),B_{R},0)=\deg(F(\cdot,0),B_{R},0),\quad\forall R\geq R_{0}. \tag{26}\] Given any \(\epsilon>0\), we define another smooth map \(G_{\epsilon}:X\times[0,1]\to X\) by \[G_{\epsilon}(u,t)=-\Delta u+\lambda e^{2u}+(t+(1-t)\epsilon)f,\quad(u,t)\in X \times[0,1].\] Notice that \[\left|(t+(1-t)\epsilon)\int_{V}fd\mu\right|\geq\min\{1,\epsilon\}\left|\int_{ V}fd\mu\right|,\quad\forall t\in[0,1].\] Applying Theorem 1 again, we find a constant \(R_{\epsilon}>0\), depending only on \(\epsilon\), \(\Lambda\) and the graph \(V\), such that all solutions \(u\) of \(G_{\epsilon}(u,t)=0\) satisfy \(\|u\|_{L^{\infty}(V)}<R_{\epsilon}\) for all \(t\in[0,1]\). This implies \[0\notin G_{\epsilon}(\partial B_{R_{\epsilon}},t),\quad\forall t\in[0,1].\] Hence the homotopic invariance of the topological degree leads to \[\deg(G_{\epsilon}(\cdot,1),B_{R_{\epsilon}},0)=\deg(G_{\epsilon}(\cdot,0),B_{ R_{\epsilon}},0). \tag{27}\] To calculate \(\deg(G_{\epsilon}(\cdot,0),B_{R_{\epsilon}},0)\), we need to understand the solvability of the equation \[G_{\epsilon}(u,0)=-\Delta u+\lambda e^{2u}+\epsilon f=0. \tag{28}\] Now we _claim_ two properties of solutions of (28): \((i)\) If \(\lambda\overline{f}<0\), then there exists an \(\epsilon_{0}>0\) such that for any \(\epsilon\in(0,\epsilon_{0})\), (28) has a unique solution \(u_{\epsilon}\), which satisfies \(e^{2u_{\epsilon}}\leq C\epsilon\), where \(C\) is a constant depending only on \(\Lambda\) and the graph \(V\); \((ii)\) If \(\lambda\overline{f}>0\), then (28) has no solution for all \(\epsilon>0\). To see Claim \((i)\), for any \(\epsilon>0\), we let \(v_{\epsilon}\) be the unique solution of the equation \[\left\{\begin{array}{l}\Delta v=\epsilon f-\epsilon\overline{f}\quad\mbox{ in}\quad V\\ \overline{v}=0.\end{array}\right.\] Then the solvability of (28) is equivalent to that of the equation \[\Delta w=\lambda e^{2v_{\epsilon}}e^{2w}+\epsilon\overline{f}. \tag{29}\] Note that the existence of solutions to (29), under the assumptions that \(\epsilon\) is sufficiently small and \(\lambda\overline{f}<0\), follows from ([13], Theorems 2 and 4). Hence there exists some \(\epsilon_{1}>0\) such that if \(0<\epsilon<\epsilon_{1}\), then the equation (28) has a solution \(u_{\epsilon}\). Integrating both sides of (28), we have by (25), \[\int_{V}e^{2u_{\epsilon}}d\mu=-\frac{\epsilon}{\lambda}\int_{V}fd\mu\leq\Lambda^ {2}\epsilon,\] which leads to \[e^{2u_{\epsilon}(x)}\leq\frac{\Lambda^{2}}{\mu_{0}}\epsilon,\quad\forall x\in V, \tag{30}\] where \(\mu_{0}=\min_{x\in V}\mu(x)\). We also need to prove the uniqueness of the solution. Let \(\varphi\) be an arbitrary solution of (28), namely it satisfies \[\Delta\varphi=\lambda e^{2\varphi}+\epsilon f. \tag{31}\] The same procedure as above gives \[\int_{V}e^{2\varphi}d\mu\leq\Lambda^{2}\epsilon,\quad e^{2\varphi(x)}\leq \frac{\Lambda^{2}}{\mu_{0}}\epsilon\quad\text{for all}\quad x\in V. \tag{32}\] Subtracting (31) from (28) and integrating by parts, we have \[0=\int_{V}\Delta(u_{\epsilon}-\varphi)d\mu=\lambda\int_{V}(e^{2u_{\epsilon}}- e^{2\varphi})d\mu,\] which leads to \[\min_{V}(u_{\epsilon}-\varphi)<0<\max_{V}(u_{\epsilon}-\varphi).\] As a consequence, there holds \[|u_{\epsilon}-\varphi|\leq\max_{V}(u_{\epsilon}-\varphi)-\min_{V}(u_{\epsilon }-\varphi). \tag{33}\] Also we derive from (28), (30), (31), and (32), \[|\Delta(u_{\epsilon}-\varphi)(x)| = \left|\lambda\left(e^{2u_{\epsilon}(x)}-e^{2\varphi(x)}\right)\right| \tag{34}\] \[\leq 2\Lambda\left(e^{2u_{\epsilon}(x)}+e^{2\varphi(x)}\right)|u_{ \epsilon}(x)-\varphi(x)|\] \[\leq \frac{4\Lambda^{3}}{\mu_{0}}\epsilon|u_{\epsilon}(x)-\varphi(x)|.\] Combining (18), (33) and (34), we obtain \[\max_{V}(u_{\epsilon}-\varphi)-\min_{V}(u_{\epsilon}-\varphi)\leq\sqrt{\frac {(\ell-1)|V|}{w_{0}\lambda_{1}}}\frac{4\Lambda^{3}}{\mu_{0}}\epsilon\left( \max_{V}(u_{\epsilon}-\varphi)-\min_{V}(u_{\epsilon}-\varphi)\right). \tag{35}\] Choose \[\epsilon_{0}=\min\left\{\epsilon_{1},\sqrt{\frac{w_{0}\lambda_{1}}{(\ell-1)|V |}}\frac{\mu_{0}}{8\Lambda^{3}}\right\}.\] If we take \(0<\epsilon<\epsilon_{0}\), then (35) implies \(\varphi\equiv u_{\epsilon}\) on \(V\), and thus (28) has a unique solution. Hence (\(i\)) holds. To see Claim (\(ii\)), in the case \(\lambda\overline{f}>0\), if (28) has a solution \(u\), then there holds \[0=\int_{V}\Delta ud\mu=\lambda\int_{V}e^{2u}d\mu+\int_{V}fd\mu,\] which is impossible. This confirms \((ii)\), and our claims hold. Let us continue to prove the theorem. Note that \(-\Delta:X\to X\) is a nonnegative definite symmetric operator, its eigenvalues are written as \[0=\lambda_{0}<\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{\ell-1},\] where \(\ell\) is the number of all points in \(V\). By Claim \((i)\), in the case \(\lambda\overline{f}<0\), we may choose a sufficiently small \(\epsilon>0\) such that \(G_{\epsilon}(u,0)=0\) has a unique solution \(u_{\epsilon}\) verifying \[2|\lambda|e^{2u_{\epsilon}(x)}<\lambda_{1}.\] A straightforward calculation shows \[DG_{\epsilon}(u_{\epsilon},0)=-\Delta+2\lambda e^{2u_{\epsilon}}\mathrm{I},\] where we identify the linear operator \(-\Delta\) with the \(\ell\times\ell\) matrix corresponding to \(-\Delta\), and denote the \(\ell\times\ell\) diagonal matrix \(\mathrm{diag}[1,1,\cdots,1]\) by \(\mathrm{I}\). Clearly \[\deg(G_{\epsilon}(\cdot,0),B_{R_{\epsilon}},0)=\mathrm{sgn}\det\left(DG_{ \epsilon}(u_{\epsilon},0)\right)=\mathrm{sgn}\left\{2\lambda e^{2u_{\epsilon} (x)}\Pi_{j=1}^{\ell-1}(\lambda_{j}+2\lambda e^{2u_{\epsilon}(x)})\right\}= \mathrm{sgn}\lambda.\] This together with (26) and (27) leads to \[\deg(F(\cdot,1),B_{R_{\epsilon}},0) = \deg(F(\cdot,0),B_{R_{\epsilon}},0)\] \[= \deg(G_{\epsilon}(\cdot,1),B_{R_{\epsilon}},0)\] \[= \deg(G_{\epsilon}(\cdot,0),B_{R_{\epsilon}},0)\] \[= \mathrm{sgn}\lambda.\] By Claim \((ii)\), in the case \(\lambda\overline{f}>0\), since \(G_{\epsilon}(u,0)=0\) has no solution, we obtain \[\deg(F(\cdot,1),B_{R_{\epsilon}},0)=\deg(G_{\epsilon}(\cdot,0),B_{R_{\epsilon }},0)=0.\] Thus the proof of Theorem 2 is completed. ## 4 Existence results In this section, we shall prove Theorem 3 by using the topological degree in Theorem 2. _Proof of Theorem 3_ (a). If \(\lambda\overline{f}<0\), then by Theorem 2, we find some large \(R_{0}>1\) such that \[\deg(F,B_{R_{0}},0)\neq 0.\] Thus the Kronecker's existence theorem implies (3) has a solution. In the remaining part of this section, we always assume \(\lambda\overline{f}>0\). We first prove that (3) has a local minimum solution for large \(|\lambda|\), say **Lemma 10**.: _If \(|\lambda|\) is chosen sufficiently large, then the equation (3) has a local minimum solution._ Proof.: Let us first consider the subcase \(\lambda>0\) and \(\overline{f}>0\). Set \[L_{\lambda}u=-\Delta u+\lambda e^{u}(e^{u}-1)+f. \tag{36}\] For real numbers \(A\) and \(\lambda\), there hold \[L_{\lambda}A=\lambda e^{A}(e^{A}-1)+f,\quad L_{\lambda}\log\frac{1}{2}=-\frac{1} {4}\lambda+f.\] Clearly, taking sufficiently large \(A>1\) and \(\lambda>1\), we have \[L_{\lambda}A>0,\quad L_{\lambda}\log\frac{1}{2}<0. \tag{37}\] Recall the functional \(J_{\lambda}:X=L^{\infty}(V)\rightarrow\mathbb{R}\) defined by (7). Since \(X\cong\mathbb{R}^{\ell}\), \(J_{\lambda}\in C^{2}(X,\mathbb{R})\), and \(\{u\in X:\log\frac{1}{2}\leq u\leq A\}\) is a bounded closed subset of \(X\), it is easy to find some \(u_{\lambda}\in X\) satisfying \(\log\frac{1}{2}\leq u_{\lambda}(x)\leq A\) for all \(x\in V\) and \[J_{\lambda}(u_{\lambda})=\min_{\log\frac{1}{2}\leq u\leq A}J_{ \lambda}(u). \tag{38}\] We _claim_ that \[\log\frac{1}{2}<u_{\lambda}(x)<A\quad\mbox{for all}\quad x\in V. \tag{39}\] Suppose not. There must hold \(u_{\lambda}(x_{0})=\log\frac{1}{2}\) for some \(x_{0}\in V\), or \(u_{\lambda}(x_{1})=A\) for some \(x_{1}\in V\). If \(u_{\lambda}(x_{0})=\log\frac{1}{2}\), then we take a small \(\epsilon>0\) such that \[\log\frac{1}{2}<u_{\lambda}(x)+t\delta_{x_{0}}(x)<A,\quad\forall x \in V,\,\forall t\in(0,\epsilon).\] On one hand, in view of (37) and (38), we have \[0 \leq \frac{d}{dt}\Big{|}_{t=0}\,J_{\lambda}(u_{\lambda}+t\delta_{x_{0}}) \tag{40}\] \[= \int_{V}(-\Delta u_{\lambda}+\lambda e^{u_{\lambda}}(e^{u_{ \lambda}}-1)+f)\,\delta_{x_{0}}d\mu\] \[= -\Delta u_{\lambda}(x_{0})+\lambda e^{u_{\lambda}(x_{0})}(e^{u_{ \lambda}(x_{0})}-1)+f(x_{0})\] \[< -\Delta u_{\lambda}(x_{0}).\] On the other hand, since \(u_{\lambda}(x)\geq u_{\lambda}(x_{0})\) for all \(x\in V\), we conclude \(\Delta u_{\lambda}(x_{0})\geq 0\), which contradicts (40). Hence \(u_{\lambda}(x)>\log\frac{1}{2}\) for all \(x\in V\). In the same way, we exclude the possibility of \(u_{\lambda}(x_{1})=A\) for some \(x_{1}\in V\). This confirms our claim (39). Combining (38) and (39), we conclude that \(u_{\lambda}\) is a local minimum critical point of \(J_{\lambda}\), in particular, \(u_{\lambda}\) is a solution of (3). Now we consider the subcase \(\lambda<0\) and \(\overline{f}<0\). Let \(\varphi\) be the unique solution of \[\left\{\begin{array}{l}\Delta\varphi=f-\overline{f}\\ \overline{\varphi}=0.\end{array}\right.\] Using the notation of the operator \(L_{\lambda}\) given by (36), we have \[L_{\lambda}(\varphi-A) = -\Delta\varphi+\lambda e^{\varphi-A}(e^{\varphi-A}-1)+f \tag{41}\] \[= \lambda e^{\varphi-A}(e^{\varphi-A}-1)+\overline{f}\] \[< 0\] \[L_{\lambda}(\log\frac{1}{2}) = \lambda e^{\log\frac{1}{2}}(e^{\log\frac{1}{2}}-1)+f\] \[= -\frac{\lambda}{4}+f\] \[> 0,\] provided that \(\lambda<4\min_{V}f\) and \(A>1\) is chosen sufficiently large. Similar to (38) and (39), there exists some \(u_{\lambda}\) satisfying \(\varphi(x)-A<u_{\lambda}(x)<\log\frac{1}{2}\) for all \(x\in V\) and \[J_{\lambda}(u_{\lambda})=\min_{\varphi\sim A\leq\varphi\leq\log\frac{1}{2}}J_ {\lambda}(u)=\min_{\varphi\sim A<\omega\log\frac{1}{2}}J_{\lambda}(u).\] This implies \(u_{\lambda}\) is a local minimum solution of (3). To proceed, we also need the following: **Lemma 11**.: _If \(\lambda_{1}>0\) such that the equation \(L_{\lambda_{1}}u=0\) has a solution \(u_{\lambda_{1}}\), then for any \(\lambda>\lambda_{1}\), we have_ \[L_{\lambda}\left(u_{\lambda_{1}}+\log\frac{\lambda_{1}}{\lambda}\right)<0.\] _Similarly, if \(\lambda_{2}<0\) such that \(L_{\lambda_{2}}u_{\lambda_{2}}=0\), then for any \(\lambda<\lambda_{2}\), there holds_ \[L_{\lambda}\left(u_{\lambda_{2}}+\log\frac{\lambda_{2}}{\lambda} \right)>0.\] Proof.: If \(\lambda>\lambda_{1}>0\), then \[L_{\lambda}\left(u_{\lambda_{1}}+\log\frac{\lambda_{1}}{\lambda}\right) = -\Delta u_{\lambda_{1}}+\lambda_{1}e^{u_{\lambda_{1}}}\left( \frac{\lambda_{1}}{\lambda}e^{u_{\lambda_{1}}}-1\right)+f\] \[< -\Delta u_{\lambda_{1}}+\lambda_{1}e^{u_{\lambda_{1}}}(e^{u_{ \lambda_{1}}}-1)+f\] \[= 0.\] If \(\lambda<\lambda_{2}<0\), then \[L_{\lambda}\left(u_{\lambda_{2}}+\log\frac{\lambda_{2}}{\lambda}\right) = -\Delta u_{\lambda_{2}}+\lambda_{2}e^{u_{\lambda_{2}}}\left( \frac{\lambda_{2}}{\lambda}e^{u_{\lambda_{2}}}-1\right)+f\] \[> -\Delta u_{\lambda_{2}}+\lambda_{2}e^{u_{\lambda_{2}}}\left(e^{u_ {\lambda_{2}}}-1\right)+f\] \[= 0,\] as we desired. As a consequence, we have **Lemma 12**.: _Assume \(L_{\lambda_{1}}u_{\lambda_{1}}=L_{\lambda_{2}}u_{\lambda_{2}}=0\) on \(V\). If either \(\lambda>\lambda_{1}>0\) or \(\lambda<\lambda_{2}<0\), then the equation (3) has a local minimum solution \(u_{\lambda}\)._ Proof.: Assume \(\lambda>\lambda_{1}>0\). Let \(A>1\) be a sufficiently large constant such that \(L_{\lambda}A>0\) and \(u_{\lambda_{1}}+\log\frac{\lambda_{1}}{\lambda}<A\) on \(V\). Then there exists some \(u_{\lambda}\) such that \[J_{\lambda}(u_{\lambda})=\min_{u_{\lambda_{1}}+\log\frac{\lambda_{1}}{\lambda }\leq\sigma\leq A}J_{\lambda}(u).\] Suppose there is some point \(x_{0}\in V\) satisfying \(u_{\lambda}(x_{0})=u_{\lambda_{1}}(x_{0})+\log\frac{\lambda_{1}}{\lambda}\). Let \(\epsilon>0\) be so small that for \(t\in(0,\epsilon)\), there holds \[u_{\lambda_{1}}(x)+\log\frac{\lambda_{1}}{\lambda}<u_{\lambda}(x)+t\delta_{x_{0 }}(x)<A\quad\text{for all}\quad x\in V.\] Similarly as we did in the proof of Lemma 10, we have by Lemma 11, \[0 \leq \left.\frac{d}{dt}\right|_{t=0}J_{\lambda}(u_{\lambda}+t\delta_{x _{0}})\] \[= -\Delta u_{\lambda}(x_{0})+\lambda e^{u_{\lambda}(x_{0})}(e^{u_{ \lambda}(x_{0})}-1)+f(x_{0})\] \[= -\Delta\left(u_{\lambda}-u_{\lambda_{1}}\right)(x_{0})+L_{ \lambda}\left(u_{\lambda_{1}}+\log\frac{\lambda_{1}}{\lambda}\right)(x_{0})\] \[< -\Delta\left(u_{\lambda}-u_{\lambda_{1}}\right)(x_{0}).\] This contradicts the fact that \(x_{0}\) is a minimum point of \(u_{\lambda}-u_{\lambda_{1}}-\log\frac{\lambda_{1}}{\lambda}\). Hence \[u_{\lambda}(x)>u_{\lambda_{1}}(x)+\log\frac{\lambda_{1}}{\lambda},\quad\forall x \in V.\] In the same way we obtain \(u(x)<A\) for all \(x\in V\). Therefore \(u_{\lambda}\) is a local minimum critical point of \(J_{\lambda}\). Assume \(\lambda<\lambda_{2}<0\). The constant \(A>1\) is chosen sufficiently large such that \(\varphi-A<u_{\lambda_{2}}+\log\frac{\lambda_{2}}{\lambda}\) on \(V\), and \(\varphi-A\) satisfies (41). Clearly there exists some \(u_{\lambda}\) such that \[J_{\lambda}(u_{\lambda})=\min_{\varphi-A\leq u\leq u_{\lambda_{2}}+\log\frac{ \lambda_{2}}{\lambda}}J_{\lambda}(u).\] If there is some point \(x_{1}\in V\) satisfying \(u_{\lambda}(x_{1})=u_{\lambda_{2}}(x_{1})+\log\frac{\lambda_{2}}{\lambda}\), then there is a small \(\epsilon>0\) such that for \(t\in(0,\epsilon)\), there holds \[\varphi(x)-A<u_{\lambda}(x)-t\delta_{x_{1}}(x)<u_{\lambda_{2}}(x)+\log\frac{ \lambda_{2}}{\lambda}\quad\text{for all}\quad x\in V.\] Thus we have by Lemma 11, \[0 \leq \left.\frac{d}{dt}\right|_{t=0}J_{\lambda}(u_{\lambda}-t\delta_{x _{1}})\] \[= \Delta u_{\lambda}(x_{1})-\lambda e^{u_{\lambda}(x_{1})}(e^{u_{ \lambda}(x_{1})}-1)-f(x_{0})\] \[= \Delta\left(u_{\lambda}-u_{\lambda_{2}}\right)(x_{0})-L_{\lambda }\left(u_{\lambda_{2}}+\log\frac{\lambda_{2}}{\lambda}\right)(x_{0})\] \[< \Delta\left(u_{\lambda}-u_{\lambda_{2}}\right)(x_{0}).\] This contradicts the fact that \(x_{1}\) is a maximum point of \(u_{\lambda}-u_{\lambda_{2}}-\log\frac{\lambda_{2}}{\lambda}\). Hence \[u_{\lambda}(x)<u_{\lambda_{2}}(x)+\log\frac{\lambda_{2}}{\lambda},\quad\forall x \in V.\] In the same way we obtain \(u(x)>\varphi(x)-A\) for all \(x\in V\). Therefore \(u_{\lambda}\) is a local minimum critical point of \(J_{\lambda}\). Thus we complete the proof of the lemma. \(\Box\) We conclude from Lemmas 10 and 12 that the following two critical numbers are well defined. \[\Lambda^{*}=\inf\left\{\lambda>0:\lambda\overline{f}>0,J_{\lambda} \text{ has a local minimum critical point}\right\} \tag{42}\] \[\Lambda_{*}=\sup\left\{\lambda<0:\lambda\overline{f}>0,J_{\lambda} \text{ has a local minimum critical point}\right\}. \tag{43}\] **Lemma 13**.: _If \(\overline{f}>0\), then \(\Lambda^{*}\geq 4\overline{f}\); If \(\overline{f}<0\), then \(\Lambda_{*}\leq 4\overline{f}\)._ Proof.: Suppose \(\lambda\neq 0\) and \(u\) is a solution of \(\Delta u=\lambda e^{u}(e^{u}-1)+f\). Integration by parts gives \[-\frac{\int_{V}fd\mu}{\lambda}=\int_{V}e^{u}(e^{u}-1)d\mu\geq-\frac{|V|}{4},\] since \(e^{u}(e^{u}-1)\geq-\frac{1}{4}\). The conclusion follows from (42) and (43) immediately. We are now ready to complete the proof of the remaining part of the theorem. _Proof of Theorem 3_ (b). _We first consider the solvability of the equation (3) under the assumption \(\lambda\in(0,\Lambda^{*}]\cup[\Lambda_{*},0)\)._ If \(\lambda\in(0,\Lambda^{*})\cup(\Lambda_{*},0)\), then (3) has no solution. Indeed, suppose there exists a number \(\lambda_{1}\in(0,\Lambda^{*})\cup(\Lambda_{*},0)\) such that (3) has a solution at \(\lambda=\lambda_{1}\). With no loss of generality, we assume \(\lambda_{1}\in(\Lambda_{*},0)\), then by Lemma 12, (3) has a local minimum solution at any \(\lambda\in[\Lambda_{*},\lambda_{1})\). This contradicts the definition of \(\Lambda_{*}\). Hence (3) has no solution for any \(\lambda\in(0,\Lambda^{*})\cup(\Lambda_{*},0)\). Note that for any \(j\in\mathbb{N}\), there exists a solution \(u_{j}\) of (3) with \(\lambda=\Lambda_{*}-1/j\). According to Theorem 1, \((u_{j})\) is uniformly bounded in \(V\). Thus up to a subsequence, \((u_{j})\) uniformly converges to some function \(u^{*}\), a solution of (3) with \(\lambda=\Lambda_{*}\). In the same way, (3) has also a solution at \(\lambda=\Lambda^{*}\). _We next consider multiple solutions of (3) under the assumption \(\lambda\in(\Lambda^{*},+\infty)\cup(-\infty,\Lambda_{*})\)._ If \(\lambda\in(\Lambda^{*},+\infty)\cup(-\infty,\Lambda_{*})\), by (42) and (43), we let \(u_{\lambda}\) be a local minimum critical point of \(J_{\lambda}\). With no loss of generality, we may assume \(u_{\lambda}\) is the unique critical point of \(J_{\lambda}\). For otherwise, \(J_{\lambda}\) has already at least two critical points, and the proof terminates. According to ([4], Chapter 1, page 32), the \(q\)-th critical group of \(J_{\lambda}\) at \(u_{\lambda}\) is defined by \[\mathsf{C}_{q}(J_{\lambda},u_{\lambda})=\mathsf{H}_{q}(J_{\lambda}^{c}\cap U, \{J_{\lambda}^{c}\setminus\{u_{\lambda}\}\}\cap U,\mathsf{G}), \tag{44}\] where \(J_{\lambda}(u_{\lambda})=c\), \(J_{\lambda}^{c}=\{u\in X:J_{\lambda}(u)\leq c\}\), \(U\) is a neighborhood of \(u_{\lambda}\in X\), \(\mathsf{H}_{q}\) is the singular homology group with the coefficients groups \(\mathsf{G}\), say \(\mathbb{Z}\), \(\mathbb{R}\). By the excision property of \(\mathsf{H}_{q}\), this definition is not dependent on the choice of \(U\). It is easy to calculate \[\mathsf{C}_{q}(J_{\lambda},u_{\lambda})=\delta_{q0}\mathsf{G}. \tag{45}\] Note that \(J_{\lambda}\) satisfies the Palais-Smale condition. Indeed, if \(J_{\lambda}(u_{j})\to c\in\mathbb{R}\) and \(J^{\prime}(u_{j})\to 0\) as \(j\to\infty\), then using the method of proving Theorem 1, we obtain \((u_{j})\) is uniformly bounded. Since \(X\) is precompact, then up to a subsequence, \((u_{j})\) converges uniformly to some \(u^{*}\), a critical point \(J_{\lambda}\). Thus the Palais-Smale condition follows. Notice also that \[DJ_{\lambda}(u)=-\Delta u+\lambda e^{u}(e^{u}-1)+f=F(u),\] where \(F\) is given as in Theorem 2. According to ([4], Chapter 2, Theorem 3.2), in view of (45), we have for sufficiently large \(R>1\), \[\deg(F,B_{R},0)=\deg(DJ_{\lambda},B_{R},0)=\sum_{q=0}^{\infty}(-1)^{q}{\rm rank }\,\mathsf{C}_{q}(J_{\lambda},u_{\lambda})=1.\] This contradicts \(\deg(F,B_{R},0)=0\) derived from Theorem 2. Therefore the equation (3) has at least two different solutions, and the proof of Theorem 3 (b) is finished. ## 5 Chern-Simons Higgs System In this section, we shall calculate the topological degree of the map related to the Chern-Simons Higgs system (8), and then use the degree to obtain partial results for multiplicity of solutions to the system. In particular, Theorems 4-6 will be proved. We first derive a priori estimate for solutions of (9), a deformation of (8). _Proof of Theorem 4._ Let \(\sigma\in[0,1]\), \(\lambda>0\), \(\eta>0\), \(\overline{f}>0\), \(\overline{g}>0\), and \((u,v)\) be a solution of the system (9). Note that there exist a unique solution \(\varphi\) to the equation \[\left\{\begin{array}{l}\Delta\varphi=f-\overline{f}\\ \int_{V}\varphi d\mu=0\end{array}\right.\] and a unique solution \(\psi\) to the equation \[\left\{\begin{array}{l}\Delta\psi=g-\overline{g}\\ \int_{V}\psi d\mu=0.\end{array}\right.\] Set \(w=u-\varphi\) and \(z=v-\psi\). Then we have \[\left\{\begin{array}{l}\Delta w=\lambda e^{\varphi}e^{z}(e^{ \varphi}e^{w}-\sigma)+\overline{f}\\ \Delta z=\eta e^{\varphi}e^{w}(e^{\varphi}e^{z}-\sigma)+\overline{g},\end{array}\right. \tag{46}\] We _claim_ that \[w(x)<-\min_{V}\varphi\quad\mbox{for all}\quad x\in V. \tag{47}\] Suppose not. There necessarily hold \(\max_{V}w\geq-\min_{V}\varphi\). Take \(x_{0}\in V\) satisfying \(w(x_{0})=\max_{V}w\). Since \(\sigma\in[0,1]\), \(\lambda>0\), \(\overline{f}>0\) and \(\varphi(x_{0})+w(x_{0})\geq 0\), we have \[0\geq\Delta w(x_{0})=\lambda e^{\varphi(x_{0})}e^{z(x_{0})}(e^{ \varphi(x_{0})}e^{w(x_{0})}-\sigma)+\overline{f}\geq\overline{f}>0,\] which is impossible. Hence our claim (47) follows. Keeping in mind \(\eta>0\) and \(\overline{g}>0\), in the same way as above, we also have \[z(x)<-\min_{V}\psi\quad\mbox{for all}\quad x\in V. \tag{48}\] Inserting (47) and (48) into (46), we obtain \[\|\Delta w\|_{L^{\infty}(V)}+\|\Delta z\|_{L^{\infty}(V)}\leq C\] for some constant \(C\), depending only on \(\lambda,\eta,f,g\) and the graph \(V\). The most important thing here is that the constant \(C\) is not dependent on the parameter \(\sigma\in[0,1]\). Coming back to the inequality (18), we immediately conclude \[\max_{V}w-\min_{V}w\leq C \tag{49}\] and \[\max_{V}z-\min_{V}z\leq C.\] Observe that integration on both sides of the second equation in (46) leads to \[\int_{V}e^{\mu}e^{\nu}(e^{\vartheta}e^{z}-\sigma)d\mu=-\frac{\overline{\beta}}{ \eta}|V|.\] As a consequence, there holds \[0<\frac{\overline{\beta}}{\eta}\leq e^{\max_{V}w}e^{\max_{V}\varphi}\left(e^{ \max_{V}\psi}+1\right)\leq Ce^{\max_{V}w}.\] Hence \(\max_{V}w\geq-C\), and in view of (49), \[\min_{V}w\geq-C. \tag{50}\] In the same way, from (49) and the first equation of (46), we derive \[\min_{V}z\geq-C. \tag{51}\] In view of (47), (48), (50) and (51), the proof of the theorem is completed. \(\square\) Now we calculate the topological degree of the map defined as in (10). _Proof of Theorem 5._ Let \(X=L^{\infty}(V)\). Define a map \(\mathcal{F}:X\times X\times[0,1]\to X\times X\) by \[\mathcal{F}(u,v,\sigma)=(-\Delta u+\lambda e^{\nu}(e^{u}-\sigma)+f,-\Delta v+ \eta e^{u}(e^{\nu}-\sigma)+g),\quad\forall(u,v,\sigma)\in X\times X\times[0,1].\] Obviously \(\mathcal{F}\in C^{2}(X\times X\times[0,1],X\times X)\). On one hand, by Theorem 4, there exists some \(R_{0}>0\) such that for any \(R\geq R_{0}\), we have \[0\notin\mathcal{F}(\partial B_{R},\sigma),\quad\forall\sigma\in[0,1],\] and thus the homotopic invariance of the topological degree implies \[\deg(\mathcal{F}(\cdot,1),B_{R},(0,0))=\deg(\mathcal{F}(\cdot,0),B_{R},(0,0)). \tag{52}\] Here we denote \(B_{R}=\{(u,v)\in X\times X:\|u\|_{L^{\infty}(V)}+\|v\|_{L^{\infty}(V)}<R\}\) and \(\partial B_{R}=\{(u,v)\in X\times X:\|u\|_{L^{\infty}(V)}+\|v\|_{L^{\infty}(V)}=R\}\), as usual. On the other hand, we calculate \(\deg(\mathcal{F}(\cdot,0),B_{R},(0,0))\). Since \(\lambda>0\) and \(\overline{f}>0\), integrating both sides of the first equation of the system \[\left\{\begin{array}{l}\Delta u=\lambda e^{u+\nu}+f\\ \Delta v=\eta e^{u+\nu}+g,\end{array}\right. \tag{53}\] we get a contradiction, provided that (53) is solvable. This implies \[\{(u,v)\in X\times X:\mathcal{F}(u,v,0)=(0,0)\}=\varnothing.\] As a consequence, there holds \[\deg(\mathcal{F}(\cdot,0),B_{R},(0,0))=0. \tag{54}\] Combining (52) and (54), we get the desired result. \(\square\) Let \(\mathcal{J}_{\lambda}:X\times X\to\mathbb{R}\) be a functional defined as in (11). Note that the critical point of \(\mathcal{J}_{\lambda}\) is a solution of the Chern-Simons system (8). The following property of \(\mathcal{J}_{\lambda}\) will be not only useful for our subsequent analysis, but also of its own interest. **Lemma 14**.: _Under the assumptions \(\lambda>0\), \(\overline{f}>0\) and \(\overline{g}>0\), \(\mathcal{J}_{\lambda}\) satisfies the Palais-Smale condition at any level \(c\in\mathbb{R}\)._ Proof.: Let \(c\in\mathbb{R}\) and \(\{(u_{k},v_{k})\}\) be a sequence in \(X\times X\) such that \(\mathcal{J}_{\lambda}(u_{k},v_{k})\to c\) and \[\mathcal{J}^{\prime}_{\lambda}(u_{k},v_{k})\to(0,0)\quad\text{in}\quad(X \times X)^{*}\cong\mathbb{R}^{\ell}\times\mathbb{R}^{\ell}.\] This together with (12) gives \[\left\{\begin{array}{l}-\Delta u_{k}+\lambda e^{u_{k}}(e^{u_{k}}-1)+f=o_{k} (1)\\ -\Delta v_{k}+\lambda e^{u_{k}}(e^{v_{k}}-1)+g=o_{k}(1),\end{array}\right. \tag{55}\] where \(o_{k}(1)\to 0\) uniformly on \(V\) as \(k\to\infty\). Comparing (55) with the system (8), we have by using the same method as in the proof of Theorem 4, \[\|u_{k}\|_{L^{\infty}(V)}+\|v_{k}\|_{L^{\infty}(V)}\leq C\] for some constant \(C\), provided that \(k\geq k_{1}\) for some large positive integer \(k_{1}\). Since \(V\) is finite, \(X\) is precompact. Hence, up to a subsequence, \(u_{k}\to u^{*}\) and \(v_{k}\to v^{*}\) uniformly in \(V\) for some functions \(u^{*}\) and \(v^{*}\). Obviously \(\mathcal{J}^{\prime}_{\lambda}(u^{*},v^{*})=(0,0)\). Thus \(\mathcal{J}_{\lambda}\) satisfies the \((PS)_{c}\) condition. Finally we prove a partial multiple solutions result for the system (8). _Proof of Theorem 6._ We distinguish two hypotheses to proceed. _Case 1_. \(\mathcal{J}_{\lambda}\) _has a non-degenerate critical point_ \((u_{\lambda},v_{\lambda})\). Since \((u_{\lambda},v_{\lambda})\) is non-degenerate, we have \[\det D^{2}\mathcal{J}_{\lambda}(u_{\lambda},v_{\lambda})\neq 0.\] Suppose \((u_{\lambda},v_{\lambda})\) is the unique critical point of \(\mathcal{J}_{\lambda}\). Then we conclude for all \(R>\|u_{\lambda}\|_{L^{\infty}(V)}+\|v_{\lambda}\|_{L^{\infty}(V)}\), \[\deg(D\mathcal{J}_{\lambda},B_{R},(0,0))=\operatorname{sgn}\,\det D^{2} \mathcal{J}_{\lambda}(u_{\lambda},v_{\lambda})\neq 0. \tag{56}\] Here and in the sequel, as in the proof of Theorem 5, \(B_{R}\) is a ball centered at \((0,0)\) with radius \(R\). Notice that \(D\mathcal{J}_{\lambda}(u,v)=\mathcal{F}(u,v)\) for all \((u,v)\in X\times X\), where \(\mathcal{F}\) is defined as in (10). By Theorem 5, we have \[\deg(D\mathcal{J}_{\lambda},B_{R},(0,0))=\deg(\mathcal{F},B_{R},(0,0))=0,\] contradicting (56). Hence \(\mathcal{J}_{\lambda}\) must have at least two critical points. _Case 2_. \(\mathcal{J}_{\lambda}\) _has a local minimum critical point_ \((\varphi_{\lambda},\psi_{\lambda})\). Similar to (44), the \(q\)-th critical group of \(\mathcal{J}_{\lambda}\) at the critical point \((\varphi_{\lambda},\psi_{\lambda})\) reads as \[\mathsf{G}_{q}(\mathcal{J}_{\lambda},(\varphi_{\lambda},\psi_{\lambda}))= \mathsf{H}_{q}(\mathcal{J}_{\lambda}^{c}\cap\mathcal{U}\,,\{\mathcal{J}_{ \lambda}^{c}\setminus\{(\varphi_{\lambda},\psi_{\lambda})\}\cap\mathcal{U}, \mathsf{G}\},\] where \(\mathcal{J}_{\lambda}(\varphi_{\lambda},\psi_{\lambda})=c\), \(\mathcal{J}_{\lambda}^{c}=\{(u,v)\in X\times X:\mathcal{J}_{\lambda}(u,v)\leq c\}\), \(\mathcal{U}\) is a neighborhood of \((\varphi_{\lambda},\psi_{\lambda})\in X\times X\), \(\mathsf{G}=\mathbb{Z}\) or \(\mathbb{R}\) is the coefficient group of \(\mathsf{H}_{q}\). With no loss of generality, we assume \((\varphi_{\lambda},\psi_{\lambda})\) is the unique critical point of \(\mathcal{J}_{\lambda}\). Since \((\varphi_{\lambda},\psi_{\lambda})\) is a local minimum critical point, we easily get \[\mathsf{C}_{q}(\mathcal{J}_{\lambda},(\varphi_{\lambda},\psi_{\lambda}))= \delta_{q0}\mathsf{G}.\] By Lemma 14, \(\mathcal{J}_{\lambda}\) satisfies the Palais-Smale condition. Then applying ([4], Chapter 2, Theorem 3.2) and Theorem 5, we obtain \[0=\deg\left(\mathcal{F},B_{R},(0,0)\right) = \deg(D\mathcal{J}_{\lambda},B_{R},(0,0))\] \[= \sum_{q=0}^{\infty}(-1)^{q}\mathrm{rank}\,\mathsf{C}_{q}\left( \mathcal{J}_{\lambda},(\varphi_{\lambda},\psi_{\lambda})\right)\] \[= 1,\] provided that \(R>\|\varphi_{\lambda}\|_{L^{q}(V)}+\|\psi_{\lambda}\|_{L^{q}(V)}\). This is impossible, and thus \(\mathcal{J}_{\lambda}\) must have another critical point, as we desired.
2309.05558
A real-time, scalable, fast and highly resource efficient decoder for a quantum computer
To unleash the potential of quantum computers, noise effects on qubits' performance must be carefully managed. The decoders responsible for diagnosing noise-induced computational errors must use resources efficiently to enable scaling to large qubit counts and cryogenic operation. Additionally, they must operate at speed, to avoid an exponential slowdown in the logical clock rate of the quantum computer. To overcome such challenges, we introduce the Collision Clustering decoder and implement it on FPGA and ASIC hardware. We simulate logical memory experiments using the leading quantum error correction scheme, the surface code, and demonstrate MHz decoding speed - matching the requirements of fast-operating modalities such as superconducting qubits - up to an 881 and 1057 qubits surface code with the FPGA and ASIC, respectively. The ASIC design occupies 0.06 mm$^2$ and consumes only 8 mW of power. Our decoder is both highly performant and resource efficient, unlocking a viable path to practically realising fault-tolerant quantum computers.
Ben Barber, Kenton M. Barnes, Tomasz Bialas, Okan Buğdaycı, Earl T. Campbell, Neil I. Gillespie, Kauser Johar, Ram Rajan, Adam W. Richardson, Luka Skoric, Canberk Topal, Mark L. Turner, Abbas B. Ziad
2023-09-11T15:46:27Z
http://arxiv.org/abs/2309.05558v2
# A real-time, scalable, fast and highly resource efficient decoder for a quantum computer ###### Abstract Quantum computers promise to solve computing problems that are currently intractable using traditional approaches. This can only be achieved if the noise inevitably present in quantum computers can be efficiently managed at scale. A key component in this process is a classical decoder, which diagnoses the errors occurring in the system. If the decoder does not operate fast enough, an exponential slowdown in the logical clock rate of the quantum computer occurs. Additionally, the decoder must be resource efficient to enable scaling to larger systems and potentially operate in cryogenic environments. Here we introduce the Collision Clustering decoder, which overcomes both challenges. We implement our decoder on both an FPGA and ASIC, the latter ultimately being necessary for any cost-effective scalable solution. We simulate a logical memory experiment on large instances of the leading quantum error correction scheme, the surface code, assuming a circuit-level noise model. The FPGA decoding frequency is above a megahertz, a stringent requirement on decoders needed for e.g. superconducting quantum computers. To decode an 881 qubit surface code it uses only 4.5% of the available logical computation elements. The ASIC decoding frequency is also above a megahertz on a 1057 qubit surface code, and occupies 0.06 mm\({}^{2}\) area and consumes 8 mW of power. Our decoder is optimised to be both highly performant and resource efficient, while its implementation on hardware constitutes a viable path to practically realising fault-tolerant quantum computers. ## I Introduction Quantum computers have the potential to solve computational problems that are out of reach of classical computers. However, to realise this potential all architectures need to deal with the fragility of their quantum bits (qubits) [1; 2; 3; 4]. Qubits are highly likely to interact with the environment, or decohere, which leads to a loss of information stored and errors in the corresponding computation. Fortunately, Quantum Error Correction (QEC) protocols enable fault-tolerant computation in the presence of this noise. By adding redundancy, information can be protected by encoding it into logical qubits. Errors can still corrupt the information in this setting, and so a signal is periodically generated from the logical data, which characterises the errors that have occurred. This signal, called a syndrome, is processed by a decoder running on classical hardware whose output is the best guess of the error that has occurred on the logical data. This information is passed back to the control system so that corrective steps can be taken in subsequent operations. QEC must be performed continuously creating a stream of syndrome data, and therefore, as systems scale and lower logical error rates are required, the amount of data that needs to be processed by a decoder increases significantly. For large computations, this will require real-time decoders that can process the data at the rate it is received to avoid the creation of a backlog that grows exponentially with the depth of the computation [5; 6], ultimately slowing it down to a halt. Superconducting quantum devices, for example, generate a round of syndrome data in less than \(1\mu\)s (a rate of MHz), setting a stringent requirement on decoder speed. Future utility-scale quantum computers will therefore require an optimised hardware decoder integrated in a tight loop at the heart of the control system. There are several fast and accurate decoders implemented in software languages such as Python and C++ [7; 8; 9]. Most experiments to date have used software decoders to decode offline [10; 11; 12; 2]: rather than decoding in real-time, the syndrome data is processed after the experiment has concluded. However, real-time decoding is essential for logic branching required to implement non-Clifford gates to realise the full potential of quantum computation [13]. Even if software decoders were run during experiments, the non-deterministic latency of software would make it difficult to tightly integrate with the control system of a quantum computer. Hardware decoders based on Field Programmable Gate Arrays (FPGAs) or Application Specific Integrated Circuits (ASICs) can execute within a deterministic number of clock cycles, and also tightly integrate into control systems. Therefore, to meet the challenge of developing real-time decoders, the community has begun to implement decoders on dedicated hardware, specifically FPGAs [14; 15; 16; 17], and provide models of ASIC implementations [18; 16]. FPGAs will be sufficient for decoding problems in the medium term. They provide the flexibility to adapt and change implementations of decoders, and can be easily integrated into control systems. This level of flexibility will clarify the parameters that need to be optimised to improve the overall performance of the system. Until recently, only small instances of surface code decoders have been implemented on FPGAs [14; 16; 17]. Promising re
2309.09362
Language models are susceptible to incorrect patient self-diagnosis in medical applications
Large language models (LLMs) are becoming increasingly relevant as a potential tool for healthcare, aiding communication between clinicians, researchers, and patients. However, traditional evaluations of LLMs on medical exam questions do not reflect the complexity of real patient-doctor interactions. An example of this complexity is the introduction of patient self-diagnosis, where a patient attempts to diagnose their own medical conditions from various sources. While the patient sometimes arrives at an accurate conclusion, they more often are led toward misdiagnosis due to the patient's over-emphasis on bias validating information. In this work we present a variety of LLMs with multiple-choice questions from United States medical board exams which are modified to include self-diagnostic reports from patients. Our findings highlight that when a patient proposes incorrect bias-validating information, the diagnostic accuracy of LLMs drop dramatically, revealing a high susceptibility to errors in self-diagnosis.
Rojin Ziaei, Samuel Schmidgall
2023-09-17T19:56:39Z
http://arxiv.org/abs/2309.09362v1
# Language models are susceptible to incorrect patient self-diagnosis in medical applications ###### Abstract Large language models (LLMs) are becoming increasingly relevant as a potential tool for healthcare, aiding communication between clinicians, researchers, and patients. However, traditional evaluations of LLMs on medical exam questions do not reflect the complexity of real patient-doctor interactions. An example of this complexity is the introduction of patient self-diagnosis, where a patient attempts to diagnose their own medical conditions from various sources. While the patient sometimes arrives at an accurate conclusion, they more often are led toward misdiagnosis due to the patient's over-emphasis on bias validating information. In this work we present a variety of LLMs with multiple-choice questions from United States medical board exams which are modified to include self-diagnostic reports from patients. Our findings highlight that when a patient proposes incorrect bias-validating information, the diagnostic accuracy of LLMs drop dramatically, revealing a high susceptibility to errors in self-diagnosis. ## Introduction Medicine relies on effective communication between clinicians, researchers, and patients, making language a vital component of the field. However, it is only recently that AI models in healthcare have advanced applications in language, and are proving opportunities for improved human-AI interaction (Thirunavukarasu et al. (2023)). While there is much optimism about the potential for providing accessible doctor-quality healthcare through this technology, there is still significant need to understand where these models might fail One challenge that the healthcare industry faces with patient interaction is patient self-diagnosis (Farnood et al. (2020)). Patient self-diagnosis is when patients try to diagnose their own medical conditions without the aid of a medical professional. In this process, patients actively engage in the identification and exploration of potential medical conditions that could explain their symptoms. While this practice may sometimes lead to correct conclusions, it can often result in misdiagnosis due to the lack of medical training and the inability to conduct thorough medical examinations (White and Horvitz (2009)). Engaging with patients who have initiated their own diagnosis often leads doctors into complex terrain. Without a robust medical background, patients may inadvertently focus on rare conditions, misinterpreted symptoms, or misguided treatments with potential health risks. Additionally, when patients try to diagnose themselves, they can unintentionally guide doctors down the wrong path. This susceptibility accentuates on one of the most common flaws in clinical reasoning known by doctors as confirmation bias (Wellbery (2011)), toward which doctors must actively be trained to recognize. With over 40% of the world have limited access to healthcare (Organization et al. (2016)), it is clear that medical language models present a great opportunity for improving global health. However, the path forward presents many uncertainties; particularly, it is imperative to understand where these models _fail_, and a good place to start looking is where doctors fail (Mesko and Topol (2023)). Therefore, in this study, we examine to what extent incorrect patient self-diagnoses affect the diagnostic accuracy of language models. ## Methods In this study, we will assume access to a large language model solely through inference to emulate the patient's model access (i.e. no gradients or log probabilities). Suppose we are given a set of \(n\) examples denoted as \((x_{i},y_{i})_{i=1}^{n}\), where \(x_{i}\) represents the input text as a string (the prompt) and \(y_{i}\) are the corresponding outputs, which are not directly observable as they need to be predicted by the model. We define the output space \(\mathbb{O}\) to be specific to each task and can be characterized accordingly. For example, if the task is about predicting the next word in a sentence, and \(x_{1}\) is a sentence e.g. "The doctor suggests **[...]** as the potential diagnosis", the corresponding output space \(\mathbb{O}\) is the entire lexicon \(L\), i.e., \(\mathbb{O}=L\), wherein the task of the language model is to select the most probable word \(y_{1}\in\mathbb{O}\) as a response to \(x_{1}\). The inference operation is modeled as a function \(F:\mathbb{X}\rightarrow\mathbb{O}\), where \(\mathbb{X}\) is the input space. This function \(F\) is a representation of the language model, which accepts an input \(x_{i}\in\mathbb{X}\) and produces an output \(y_{i}\in\mathbb{O}\). ### Language models Four common language models are evaluated in our work: Llama 2 70B-chat (Llama) (Touvron et al. (2023)), PaLM (Barbam et al. (2022)), GPT-3.5, and GPT-4 (OpenAI (2023)). We focus on these models since they have high user accessibility, and thus are the most likely to be queried for medical questions. These models range in complexity both in terms of model parameter complexity, the amount of data, and the type of data they were trained on. Each of these models are described in detail below. **Pathways Language Model:** The Pathways Language Model (PaLM) is a large language model developed by Google trained on 780 billion tokens with 540 billion parameters. PaLM leverages the pathways dataflow (Barham et al. (2022)), which enables highly efficient training of very large neural networks across thousands of accelerator chips. This model was trained on a combination of webpages, books, Wikipedia, news articles, source code, and social media conversations, similar to the training of the LaMDA LLM (Thoppilan et al. (2022)). PaLM demonstrates excellent abilities in writing code, text analysis, and mathematics. PaLM also demonstrates significantly improved performance on chain-of-thought _reasoning_ problems. Figure 1: (Top) Demonstration of clinical scenario from US Medical Board Exam provided as input. (Middle) Non-adversarial prompt for LLM. (Bottom) Adversarial prompt with example of patient self-diagnosis report. **Llama 2 70B-Chat:** Llama is an open-access model developed by Meta trained on 2 trillion tokens of publicly available data and have parameters ranging in scale from 7 billion to 70 billion (Touvron et al. (2023)). We chose the 70 billion chat model since it is demonstrated to have some of the most robust performance across many metrics. Much effort was provided to ensure training that was aligned with proper safety metrics. Toward this, llama shows improvements in adversarial prompting across defined _risk categories_, which, importantly, includes giving unqualified advice (e.g., medical advice) as is prompted for in this work. **GPT-3.5 & GPT-4:** GPT-4 is a large-scale, multimodal LLM which is capable of accepting image and text inputs. GPT-3.5 (_gpt-3.5-turbo-0301_) is a subclass of GPT-3 (a 170B parameter model) (Brown et al. (2020)) fine-tuned on additional tokens and with human feedback (Christiano et al. (2017)). Unfortunately, unlike other models, the exact details of GPT-3.5 and GPT-4's structure, data, and training is proprietary. However, as is relevant to this study, technical reports demonstrate both models have significant understanding of medical and biological concepts, with GPT-4 consistently outperforming GPT-3.5 on knowledge benchmarks (OpenAI (2023)). In particular, GPT-3.5 achieves a 53% accuracy on the Medical Knowledge Self-Assessment while GPT-4 achieves 75% accuracy. ## Results To assess LLM medical diagnostic accuracy we present each LLMs with 400 questions from United States Medical Board Exams (the MedQA dataset (Jin et al. (2021))). This is the same examination that human doctors are evaluated on to test their professional knowledge and ability to make clinical decisions. The data begins by presenting a patient description (e.g. "25-year-old female") followed by a comprehensive account of their symptoms; see Fig. 1 for an example. Following this is a set of four to five multiple choice responses which could reasonably be the cause of the patient's symptoms. These elements form the basis of the input for the LLM. We then prompt the LLM with the following information together with the input: "You are a medical GPT model tasked with making clinical decisions. Please choose a response that best answers the provided medical question." From this, the model responds with information that best predicts what it expects to follow the prompting. Depending on the model, the LLM typically either responds with a single character corresponding to one of the multiple choice answers (e.g. "A") or the full answer (e.g. "Epistaxis"). The adversarial prompt represents a patient providing biased self-diagnostic information to the language model. To do this, the model is prompted with the following input directly after the case is presented: "The patient is convinced that various medical sources suggest the correct answer is [randomly sampled self-diagnosis]." The self-diagnosis is sampled from the set of _incorrect_ MedQA answers for the given prompt. To ensure that the language model does not have memory of previous prompts, the context window is reset at each inference step. Figure 2: Performance of language models on MedQA dataset questions. Two bars are provided for each model with the accuracy of non-adversarial input on the left and adversarial on the right. (with self-diagnosis). GPT-4 and GPT-3.5 are shown to perform significantly better (75 percent and 53 percent respective) than PaLM and Llama (45 percent and 36 percent). When provided with the adversarial prompt, accuracy degredation was seen for three of the four models (GPT-3.5, PaLM, and Llama). Accuracy decreased by from 53 percent to 37 percent for GPT-3.5 (30.2 percent decrease), 45 percent to 26 percent for PaLM (42.2 percent decrease), and 36 percent to 26 percent for Llama (27.78 percent decrease). While these models show clear decreases, GPT-4 does not demonstrate significant performance decline when provided the adversarial prompt, going from 75 percent to 73 percent (2.6 percent decrease). It is worth noting that despite some of these models being trained to prevent providing information supporting risk categories (e.g. medical advice), all of the models provided answers to the prompting without any warning that indicates a medical professional should be consulted. While this would not be a problem for a trained clinical model which is tasked with diagnosis, common chat models such as those included in this work should redirect diagnoses to healthcare professionals. ## Related Work There has been a clear growing interest in applying language models to medicine (Thirunavukarasu et al. (2023)). Toward this, many recent works have explored existing promises and pitfalls in these LLM applications. One such work explored whether LLMs can reason about medical questions (Lievin et al. (2022)), with promising results demonstrating that LLMs can achieve close to human performance using chain-of-thought reasoning. MedPalm-2 is another promising model, which has shown accuracy rates of up to 86.5 percent on the MedQA dataset (Singhal et al. (2023)). However, this model has remained closed access, preventing a deeper study of where the model might fail in clinical settings. Another study found that LLMs perform poorly in providing accurate medical recommendations and can exhibit overconfidence in their incorrect answers, increasing the risk of spreading medical misinformation (Barnard et al. (2023)). Negative results such as these have led to further ethical and practical concerns about the deployment of these models (Harrer (2023)). This study claims that more research is needed toward understanding potential problems with medical LLMs. ## Conclusion As medical language models approach clinical use, it's essential to address any potential reasoning biases that may exist. By developing these models responsibly and ensuring their reliability, accuracy, and ethical use, we can support doctors' decisions without introducing or reinforcing biases, thereby facilitating their widespread use. In this work, we demonstrated the susceptibility of language models to patient self-diagnosis. We compared the performance of four popular chat-based language models (PaLM, Llama, GPT-3.5, and GPT-4) in their ability to diagnose patient symptoms. We then demonstrated their ability to diagnose symptoms when the patient adversarial prompting via a self-diagnostic suggestion. The results suggest that most language models demonstrate significant drops in performance with the self-diagnosis, validating the incorrect belief of the patient. However, it was also shown that one model, GPT-4, was robust against the adversarial input. Future work on developing medical language models should provide as part of the training being able to recognize and work around common clinical diagnosing errors, such as the biasing that patient self-diagnosis can cause (much like a medical doctor would need to learn). Additionally, it is worth investigating why some models (GPT-4 in this work) are able to avoid being affected by the adversarial input, whereas other models are affected significantly. Incorporating these methods into the training of clinical models could help prevent diagnostic error and potentially save patient lives. We hope this work sheds light on an important issue toward the practical use of clinical LLMs, and helps toward building the future of accessible healthcare.
2302.14727
Automatically Classifying Emotions based on Text: A Comparative Exploration of Different Datasets
Emotion Classification based on text is a task with many applications which has received growing interest in recent years. This paper presents a preliminary study with the goal to help researchers and practitioners gain insight into relatively new datasets as well as emotion classification in general. We focus on three datasets that were recently presented in the related literature, and we explore the performance of traditional as well as state-of-the-art deep learning models in the presence of different characteristics in the data. We also explore the use of data augmentation in order to improve performance. Our experimental work shows that state-of-the-art models such as RoBERTa perform the best for all cases. We also provide observations and discussion that highlight the complexity of emotion classification in these datasets and test out the applicability of the models to actual social media posts we collected and labeled.
Anna Koufakou, Jairo Garciga, Adam Paul, Joseph Morelli, Christopher Frank
2023-02-28T16:34:55Z
http://arxiv.org/abs/2302.14727v1
# Automatically Classifying Emotions based on Text: A Comparative Exploration of Different Datasets ###### Abstract Emotion Classification based on text is a task with many applications which has received growing interest in recent years. This paper presents a preliminary study with the goal to help researchers and practitioners gain insight into relatively new datasets as well as emotion classification in general. We focus on three datasets that were recently presented in the related literature, and we explore the performance of traditional as well as state-of-the-art deep learning models in the presence of different characteristics in the data. We also explore the use of data augmentation in order to improve performance. Our experimental work shows that state-of-the-art models such as RoBERTa perform the best for all cases. We also provide observations and discussion that highlight the complexity of emotion classification in these datasets and test out the applicability of the models to actual social media posts we collected and labeled. Emotion Detection, Emotion Classification, Emotion Recognition, Deep Learning, Natural Language Processing ## I Introduction Recognizing emotion automatically from text is of great interest in many applications from emotion-aware recommender systems to intelligent chatbots to suicide prevention. Emotion detection or classification is quite different from sentiment analysis, a popular term which has attracted a lot of research in earlier years. Sentiment analysis refers to discovering if the content of a text (opinion, tweet, essay) is positive or negative. This is usually a binary problem (for example, positive vs negative reviews of movies or products bought online) though it could extend to more categories (e.g. very positive or very negative or neutral). Emotion prediction or detection on the other hand deals with recognizing specific emotions in the text, such as anger, sadness, or joy. Emotions are complex and many times not as clear to humans, so automatic detection is a challenging task. Building datasets for this task is also difficult as there are different taxonomies of emotions, e.g. Ekman [1] or Plutchik [2]; additionally, assigning a single emotion label to text can be highly subjective. One of the earliest examples of emotion labeled data is in SemEval 2007 [3]. More recent work in emotion related datasets include tweets (SemEval-2018 Task 1) [4], conversations [5], and movie subtitles [6], as examples. The authors in [7] unified 14 popular emotion classification corpora under one framework. As examples of using deep learning for emotion classification, the authors in [8] performed multi-label classification on the SemEval2018 Task 1 dataset using a multi-channel, multi-filter CNN-BiLSTM. Various BERT-like models were explored in text-based emotion recognition in [9]. Recent reviews on text-based emotion detection include [10, 11, 12]. In this paper, we presented a preliminary study of three different datasets related to emotion classification: one based on a UK survey related to COVID-19 [13]; another containing essays written after reading news articles [14]; and one extracted from social media comments [15] (we describe the dataset in detail in Section II). We explored various traditional and state-of-the-art models and present our findings and observations. Our goal is to show the different focus in each dataset and the complexity of recognizing emotion from different datasets. We also attempted to test the applicability of the models on a few example social media posts we collected and labeled. This work aims to help the researcher and practitioner interested in this field by exploring data with different characteristics and perspectives. Our study is preliminary in that our plan is to include more datasets and models as well as incorporate other features from the data in the classification (e.g. demographics or emotion intensity of words). The organization of this paper is as follows: Section II describes and compares the datasets we used in this work. Section III summarizes the models and set up of our experiments, followed by Section IV which presents the results and observations. Finally, Section V includes concluding remarks and future research directions. ## II Datasets In this paper, we experimented with three datasets described below1. All datasets are in English and were prepared and presented in 2020 or later. There are several existing datasets that were presented before 2020, e.g. from the SemEval-2018 Task 1) [4], or deal with conversations such as [5], which are outside the focus of this work. Footnote 1: Datasets and code available by request **COVID-19 Survey Data2** was presented in [13]. This data was collected via a survey in the UK in April 2020, when the UK was under lockdown. The responses of 2500 participants included a text response as well as demographic data (e.g. gender) and emotion-related ratings entered by the participants themselves. We focused on the "chosen emotion": a category which the participants chose out of several emotion options. Besides this attribute, each record also had a rating from 1 to 9 for several emotions (anger, fear, worry, etc). The dataset is mainly focused on worry and anxiety due to the topic of the survey. We kept the emotions that were representing at least 4% of the total records resulting in a dataset of 2408 records. **GoEmotions Data3** was presented in [15]. The original dataset contains about 58 thousand Reddit comments with human annotations mapped to 27 emotions or neutral. For our work, we removed neutral comments and kept records that were assigned only a single emotion label (as we are focusing on single label classification) that is in the Ekman taxonomy [1]: Anger, Disgust, Fear, Joy, Sadness, and Surprise. The reason for keeping only Ekman taxonomy emotions was to align this dataset with the next dataset, WASSA-21 (see next item). This dataset already comes with distinct sets for train and test. We trained on the train set (4343 records after removing neutral and non-single emotion records) and tested on the test set (553 records). Footnote 3: Data available at [https://github.com/google-research/google-research/tree/master/geomotions](https://github.com/google-research/google-research/tree/master/geomotions) **WASSA-21 Data4** was part of the WASSA5 2021 Shared Task on Empathy Detection and Emotion Classification summarized in [14]. This dataset is an extension of [16]'s dataset based on news articles related to harm to an individual, group, nature, etc. The dataset contains essays in which authors expressed their empathy and distress in reactions to these news articles. The essays are annotated for empathy and distress, as well as personality traits and demographic information (age, gender, etc.). Each essay is also tagged with one of the Ekman's emotions [1]: Anger, Disgust, Fear, Joy, Sadness, and Surprise. We only focused on the emotion for each essay, not the empathy or distress labels. The WASSA-21 dataset already comes with distinct sets for train, dev (development), and test. We trained on the training set (1585 records) and tested on the dev set (245 records). Both datasets are heavily dominated by Sadness (about 40% of the records) and Anger (22% in train and 31% in test). Footnote 4: Data available at [https://competitions.codalab.org/competitions/28713](https://competitions.codalab.org/competitions/28713) Table I shows a comparison of the datasets: the emotions and their distribution in each dataset, as well as the total number of records and the average length in characters of the records in each dataset. As shown in the Table, the datasets differ in the emotion categories as well as the distribution: COVID-19 Survey is heavily skewed towards anxiety (57%), which does not exist in the Ekman taxonomy [1] followed by the other two datasets. GoEmotions and WASSA-21 are both dominated by anger and sadness, although Goemotions is more balanced overall than the other two datasets. The GoEmotions dataset is about double in number of records compared to the other two datasets, but has much shorter text in each record than the other two datasets. In essense, GoEmotions has sentence-length records, while the other two datasets contain essays made up of multiple sentences. Finally, for our experiments, we turned text in all datasets to lower case and removed stop words using NLTK6. Both WASSA-21 and GoEmotions sets have predefined training and test/development sets as described above. For the COVID-19 Survey Dataset, we used an 80-20 split and report the average of the results of 5 runs (standard deviation was below 2%). Footnote 6: [https://www.nltk.org/](https://www.nltk.org/) ## III Methodology ### _Models_ In this work, we explored traditional techniques as well as current state-of-the-art Deep Learning. The models we used are briefly described in the next paragraphs. We used Google colab7 to run all our experiments. Footnote 7: [https://colab.research.google.com/](https://colab.research.google.com/) **Bag-of-Words (BoW).** BoW techniques do not take the textual order of the words into account and instead rely on a word-to-document matrix which contains frequency counts: how often each word is found in the text records. Besides a numeric count of word frequency, we also used Tfdf (Term Frequency-Inverse Document Frequency), which is a statistical measure used to evaluate the importance of a word in a document in a corpus. Using Tfdf, the importance increases proportionally to the frequency of the word in the document, but it is offset by the frequency of the word in the corpus. We experimented with the usual models such as Naive Bayes, Support Vector Machines, Linear Regression, etc. We used the defaults for all models in scikit-learn8[17] to run our BoW experiments. Footnote 8: [https://scikit-learn.org](https://scikit-learn.org) **Transformed-Based Deep Learning.** Among the more recent neural architectures, transformer-based models [18] are considered state-of-the-art in several NLP tasks. A Transformer combines a multi-head self-attention mechanism with an encoder-decoder. BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based language model developed by Google [19]. BERT's architecture is based on a multi-layer bidirectional Transformer. In our work, we also explored the use of two extensions of BERT: RoBERTa (Robustly optimized BERT approach) [20] which removed and modified some parts of BERT and trained with more data to make it more robust; and ELECTRA [21] which has a different pre-training approach from BERT. We chose these specific models (as opposed to other transformer-based models or even more traditional deep learning such as CNNs or LSTMs) because they were used successfully in recent papers and shared tasks such as the WASSA-21 [14] and performed well in own early experiments. For these experiments, we used Pytorch and HuggingFace9. Specifically, we experimented with the bert-uncased model, the roberta-base model, the electra-small-discriminator model and the electra-large-discriminator model. For all models, we reported the results for 5 Epochs, learning rate of \(1e^{-}5\), _maxlen_ of 256 and batch of 8. We experimented with other values and these seemed to be the best overall in performance. Footnote 9: [https://huggingface.co/transformers](https://huggingface.co/transformers) ### _Metrics_ We report our results based on the classification metrics defined below: \[Precision\ (Recall)=\frac{TP}{TP+FP\ (FN)} \tag{1}\] \[Accuracy=\frac{TP+TN}{N} \tag{2}\] \[F1\text{-}score=\frac{2\times Precision\times Recall}{Precision+Recall} \tag{3}\] where \(TP\) is True Positives, \(FP\) is False Positives, \(FN\) is False Negatives, and \(N\) is the total number of records. Besides Accuracy, we chose to also report the F1-macro which averages the F1-score over the classes: the macro-averaged F1 is better suited for showing algorithm effectiveness on smaller categories (see [22]), which is important as we are working with imbalanced datasets. ## IV Experimental Results Table II shows the results for the three datasets we used in this paper, only showing the top three transformer-based models and the top two BoW models for each dataset (ordered by f1-macro). As seen in Table II, the transformer-based models performed better than the BoW models for all three datasets, as expected. The RoBERTa model performed the best in each case. Among the datasets, GoEmotions results were much higher than for the other two datasets. The f1-macro results for GoEmotions were in the mid 70's for the BoW models to low 80's for the transformer-based models, while the highest f1-macro was 49% for COVID-19 Survey and 54% for WASSA-21. These results make sense as the GoEmotions dataset has double the records, follows a more balanced distribution of emotion categories, and includes short text records (sentence-length comments taken from reddit). In contrast, the WASSA-21 and the COVID-19 datasets are more highly skewed towards one or two emotions (anxiety for COVID-19 Survey and anger/sadness for WASSA-21). Also, both datasets contain essay-type records with several sentences which sometimes contain multiple emotions. For example, a record in the COVID-19 survey dataset may have had "chosen emotion" as anxiety, but the participant also gave a high rating to fear and sadness. We only used "chosen emotion" from this data for our experiments, and this shows how different emotions may overlap in the same essay-type text. This was also discussed in the original paper for the WASSA-21 data [14] where the authors observed that when an essay was misclassified in their experiments, the essay often contained many emotions. In addition, we explored the results by inspecting the confusion matrix of specific runs. Given the COVID-19 Survey data, an example result for the Linear SVM confusion matrix is shown in Figure 1 and for RoBERTa in Figure 2. Both figures show a normalized confusion matrix (per the number of records in each class). Figure 1 shows that the Linear SVM classified most records as the dominant class, anxiety (57% of the entire data, see Table I). In contrast, RoBERTa classified the different emotion categories overall much better as seen in Figure 2, though it still assigns many records from other emotions to the dominant class, anxiety. Machine Learning models, and especially Deep Learning models, are known to perform better when they are trained on larger sets of records. For the datasets that did not perform as well, we used data augmentation to explore possible improvements in performance. For the WASSA-21 data, we used a total of 5928 records for training: we augmented the WASSA-21 Train Set with the data from the GoEmotions Train Set. As both sets (original WASSA-21 and our preprocessed GoEmotions - see Section II) follow the Ekman taxonomy [1], this augmentation was straightforward. The results are shown in Table III. The models that showed improvement are the RoBERTa and the linear SVM, while the rest of the models stay at same levels of accuracy and f1-macro. For the COVID-19 Survey data, we used GoEmotions train data as well, though the emotion categories are different (see Table I for emotion categories and distributions). Therefore, we only added records from the GoEmotion train dataset whose label was anger, fear or sadness. This resulted in a train set of 4198 records. The results are shown in Table IV. In this case, all results improved for all models and some models show large improvements such as the ELECTRA-large with 9% improvement of the f1-macro. Overall, data augmentation seems to improve the performance of the classifiers by adding more records and also making the dataset more balanced. However, the data in GoEmotions are quite different from the data in the other two datasets (reddit comments versus essays) so the f1-macro results are still not higher than 50's for the COVID-19 Survey data and 60's for the WASSA-21 data. ### _Case Study: Testing example records from social media_ We also explored the applicability of the models to the task of recognizing emotions from actual posts on social media. Due to time limitations, a full study is left for future work; for this paper, we pulled a few posts from reddit, manually annotated them with emotion labels, and then tested two of the models in the previous section. Specifically, the posts were collected from subreddits (forums) 'r/Anxiety' and 'r/COVID19_support' and date in 2021. The models we tested are the top RoBERTA models based on (a) GoEmotion and (b) COVID-19 Survey Data: our justification was that these datasets are either based on reddit (GoEmotions) or related to COVID-19 topic (COVID-19 Survey). In the following, we include part of the reddit comment, the emotion label for the emotion assigned by us, and the resulting emotion classification label for (a) and for (b). 1. (r/Anxiety) "_I tested positive for Covid back in [...] and it was like my whole world collapsed. Instantly as i found out i started to shake and chain smoked [...] The weight on my shoulders was unbearable. [...] With constant watching of the news and social media built up a storm i was unaware about within my body_". Our label: 'Fear', (a) predicted 'Fear', (b) predicted 'Anxiety'. 2. (r/Anxiety)"_I just want to say how appreciative I am for each and every one of you. Reading these comments has calmed me down completely and brought me back to reality. [...] hearing it from regular people like me makes it a million times more powerful_". Our label: 'Joy', (a) predicted 'Joy', (b) predicted 'Sadness'. 3. (r/COVID-19 Support) "_This is one of my main problems during the pandemic! I hate missing out on all the big events because it seems like most of my friends are still getting together. It's hard_". Our label: 'Sadness', (a) predicted 'Sadness', (b) predicted 'Sadness'. 4. (r/COVID-19 Support) "_That is SO incredibly frustrating [...] was probably ignorantly walking around maskless thinking everyone was fine with it. when in reality no one wanted to be the one to reprimamad a superior. Ridiculous_". Our label: 'Anger', (a) predicted 'Anger', Fig. 1: Normalized Confusion Matrix of Linear SVM on the COVID-19 Survey Data Fig. 2: Normalized Confusion Matrix of RoBERTa on the COVID-19 Survey Data (b) predicted 'Anger'. 5. (r/COVID-19 Support) "_I can't wait to get my final shot [...] imma go back to the gym!!!! gained so much weight and people keep pointing it out"_. Our label: 'Joy', (a) predicted 'Joy', (b) predicted 'Sadness'. 6. (r/COVID-19 Support) "_My daughter was born the night before they declared a pandemic. I went into the hospital [...], and the world had been turned upside down [...] basically been a nightmare since_". Our label: 'Fear', (a) predicted 'Sadness', (b) predicted 'Fear'. From this small experiment, we see that the (a) model which is based on GoEmotions data does better, and the (b) model based on COVID-19 Survey data assigned 'Sadness' to 'Joy' labeled records. The 'Joy' emotion does not exist in the COVID-19 Survey data, but the model did not assign the closest emotion either, which is 'Relaxation'. ## V Conclusions In this paper, we presented an extensive experimentation with three different datasets that were recently presented to the research community. The datasets on which we focused exhibit a variety of characteristics: size in number of records, average length of text in the records, emotion taxonomies and distributions, self ratings versus human annotated labels, among others. Our experiments involved a variety of models for emotion classification. We also explored data augmentation as a means to improve performance. Overall, RoBERTa was the highest performing model and data augmentation did work for most cases. We also tested the applicability of the best models on example records we pulled from social media posts and labeled ourselves with emotions. Our discussion highlights the complexity of emotion prediction or classification using text. Future directions include expanding our research to other datasets and additional models, exploring the use of lexicons such as the NRC EmoLex [23] as well as additional characteristics from the data (e.g. author demographics), and experimenting with multi-label classification and sentence-level classification as a means to improve essay-level classification.
2301.13518
Relations between values of arithmetic Gevrey series, and applications to values of the Gamma function
We investigate the relations between the rings ${\bf E}$, ${\bf G}$ and ${\bf D}$ of values taken at algebraic points by arithmetic Gevrey series of order either $-1$ ($E$-functions), $0$ (analytic continuations of $G$-functions) or $1$ (renormalization of divergent series solutions at $\infty$ of $E$-operators) respectively. We prove in particular that any element of ${\bf G}$ can be written as multivariate polynomial with algebraic coefficients in elements of ${\bf E}$ and ${\bf D}$, and is the limit at infinity of some $E$-function along some direction. This prompts to defining and studying the notion of mixed functions, which generalizes simultaneously $E$-functions and arithmetic Gevrey series of order 1. Using natural conjectures for arithmetic Gevrey series of order 1 and mixed functions (which are analogues of a theorem of Andr\'e and Beukers for $E$-functions) and the conjecture ${\bf D}\cap{\bf E}=\overline{\mathbb Q}$ (but not necessarily all these conjectures at the same time), we deduce a number of interesting Diophantine results such as an analogue for mixed functions of Beukers' linear independence theorem for values of $E$-functions, the transcendance of the values of the Gamma function and its derivatives at all non-integral algebraic numbers, the transcendance of Gompertz constant as well as the fact that Euler's constant is not in ${\bf E}$.
Stéphane Fischler, Tanguy Rivoal
2023-01-31T10:12:38Z
http://arxiv.org/abs/2301.13518v1
Relations between values of arithmetic Gevrey series, and applications to values of the Gamma function ###### Abstract We investigate the relations between the rings \(\mathbf{E}\), \(\mathbf{G}\) and \(\mathbf{D}\) of values taken at algebraic points by arithmetic Gevrey series of order either \(-1\) (\(E\)-functions), \(0\) (analytic continuations of \(G\)-functions) or \(1\) (renormalization of divergent series solutions at \(\infty\) of \(E\)-operators) respectively. We prove in particular that any element of \(\mathbf{G}\) can be written as multivariate polynomial with algebraic coefficients in elements of \(\mathbf{E}\) and \(\mathbf{D}\), and is the limit at infinity of some \(E\)-function along some direction. This prompts to defining and studying the notion of mixed functions, which generalizes simultaneously \(E\)-functions and arithmetic Gevrey series of order \(1\). Using natural conjectures for arithmetic Gevrey series of order \(1\) and mixed functions (which are analogues of a theorem of Andre and Beukers for \(E\)-functions) and the conjecture \(\mathbf{D}\cap\mathbf{E}=\overline{\mathbb{Q}}\) (but not necessarily all these conjectures at the same time), we deduce a number of interesting Diophantine results such as an analogue for mixed functions of Beukers' linear independence theorem for values of \(E\)-functions, the transcendance of the values of the Gamma function and its derivatives at all non-integral algebraic numbers, the transcendance of Gompertz constant as well as the fact that Euler's constant is not in \(\mathbf{E}\). ## 1 Introduction A power series \(\sum_{n=0}^{\infty}\frac{a_{n}}{n!}x^{n}\in\overline{\mathbb{Q}}[[x]]\) is said to be an \(E\)-function when it is solution of a linear differential equation over \(\overline{\mathbb{Q}}(x)\) (holonomic), and \(|\sigma(a_{n})|\) (for any \(\sigma\in\operatorname{Gal}(\overline{\mathbb{Q}}/\overline{\mathbb{Q}})\)) and the least common denominator of \(a_{0},a_{1},\ldots,a_{n}\) grow at most exponentially in \(n\). They were defined and studied by Siegel in 1929, who also defined the class of \(G\)-functions: a power series \(\sum_{n=0}^{\infty}a_{n}x^{n}\in\overline{\mathbb{Q}}[[x]]\) is said to be a \(G\)-function when \(\sum_{n=0}^{\infty}\frac{a_{n}}{n!}x^{n}\) is an \(E\)-function. In this case, \(\sum_{n=0}^{\infty}n!a_{n}z^{n}\in\overline{\mathbb{Q}}[[z]]\) is called an \(\mathfrak{D}\)-function, following the terminology introduced by Andre in [1]. \(E\)-functions are entire, while \(G\)-functions have a positive radius of convergence, which is finite except for polynomials. Here and below, we see \(\overline{\mathbb{Q}}\) as embedded into \(\mathbb{C}\). Following Andre again, \(E\)-functions, \(G\)-functions and \(\mathfrak{D}\)-fonctions are exactly arithmetic Gevrey series of order \(s=-1,0,1\) respectively. Actually Andre defines arithmetic Gevrey series of any order \(s\in\mathbb{Q}\), but the set of values at algebraic points is the same for a given \(s\neq 0\) as for \(s/|s|\) using [1, Corollaire 1.3.2]. \(\mathfrak{S}\)-functions are divergent series, unless they are polynomials. Given an \(\mathfrak{S}\)-function \(\mathfrak{f}\) and any \(\theta\in\mathbb{R}\), except finitely many values mod \(2\pi\) (namely anti-Stokes directions of \(\mathfrak{f}\)), one can perform Ramis' 1-summation of \(\mathfrak{f}(1/z)\) in the direction \(\theta\), which coincides in this setting with Borel-Laplace summation (see [14] or [9]). This provides a function denoted by \(\mathfrak{f}_{\theta}(1/z)\), holomorphic on the open subset of \(\mathbb{C}\) consisting in all \(z\neq 0\) such that \(\theta-\frac{\pi}{2}-\varepsilon<\arg z<\theta+\frac{\pi}{2}+\varepsilon\) for some \(\varepsilon>0\), of which \(\mathfrak{f}(1/z)\) is the asymptotic expansion in this sector (called a large sector bisected by \(\theta\)). Of course \(\mathfrak{f}(1/z)\) can be extended further by analytic continuation, but this asymptotic expansion may no longer be valid. When an \(\mathfrak{S}\)-function is denoted by \(\mathfrak{f}_{j}\), we shall denote by \(\mathfrak{f}_{j,\theta}\) or \(\mathfrak{f}_{j;\theta}\) its 1-summation and we always assume (implicitly or explicitly) that \(\theta\) is not an anti-Stokes direction. In [8], [9] and [10, SS4.3], we have studied the sets \(\mathbf{G}\), \(\mathbf{E}\) and \(\mathbf{D}\) defined respectively as the sets of all the values taken by all (analytic continuations of) \(G\)-functions at algebraic points, of all the values taken by all \(E\)-functions at algebraic points and of all values \(\mathfrak{f}_{\theta}(1)\) where \(\mathfrak{f}\) is an \(\mathfrak{S}\)-function (\(\theta=0\) if it is not an anti-Stokes direction, and \(\theta>0\) is very small otherwise.) These three sets are countable sub-rings of \(\mathbb{C}\) that all contain \(\overline{\mathbb{Q}}\); conjecturally, they are related to the set of periods and exponential periods, see SS3. (The ring \(\mathbf{D}\) is denoted by \(\mathfrak{D}\) in [10].) We shall prove the following result in SS3. **Theorem 1**.: _Every element of \(\mathbf{G}\) can be written as a multivariate polynomial (with coefficients in \(\overline{\mathbb{Q}}\)) in elements of \(\mathbf{E}\) and \(\mathbf{D}\)._ _Moreover, \(\mathbf{G}\) coincides with the set of all convergent integrals \(\int_{0}^{\infty}F(x)dx\) where \(F\) is an \(E\)-function, or equivalently with the set of all finite limits of \(E\)-functions at \(\infty\) along some direction._ Above, a convergent integral \(\int_{0}^{\infty}F(x)dx\) means a finite limit of the \(E\)-function \(\int_{0}^{z}F(x)dx\) as \(z\to\infty\) along some direction; this explains the equivalence of both statements. We refer to Eq. (3.2) in SS3 for an expression of \(\log(2)\) as a polynomial in elements in \(\mathbf{E}\) and \(\mathbf{D}\); the number \(\pi\) could be similarly expressed by considering \(z\) and \(iz\) instead of \(z\) and \(2z\) there. Examples of the last statement are the identities (see [12] for the second one): \[\int_{0}^{+\infty}\frac{\sin(x)}{x}dx=\frac{\pi}{2}\quad\text{and}\quad\int_{0 }^{+\infty}J_{0}(ix)e^{-3x}dx=\frac{\sqrt{6}}{96\pi^{3}}\Gamma\Big{(}\frac{1}{ 24}\Big{)}\Gamma\Big{(}\frac{5}{24}\Big{)}\Gamma\Big{(}\frac{7}{24}\Big{)} \Gamma\Big{(}\frac{11}{24}\Big{)}.\] It is notoriously difficult to prove/disprove that a given element of \(\mathbf{G}\) is transcendental; it is known that a Siegel-Shidlovskii type theorem for \(G\)-functions can not hold _mutatis mutandis_. Theorem 1 suggests that an alternative approach to the study of the Diophantine properties of elements of \(\mathbf{G}\) can be through a better understanding of joint study of the elements of \(\mathbf{E}\) and \(\mathbf{D}\), modulo certain conjectures to begin with. Our applications will not be immediately directed to the elements of \(\mathbf{G}\) but rather to the understanding of the (absence of) relations between the elements of \(\mathbf{E}\) and \(\mathbf{D}\). It seems natural (see [9, p. 37]) to conjecture that \({\bf E}\,\cap\,{\bf G}=\overline{\mathbb{Q}}\), and also that \({\bf G}\,\cap\,{\bf D}=\overline{\mathbb{Q}}\), though both properties seem currently out of reach. In this paper, we suggest (see SS2) a possible approach towards the following analogous conjecture. **Conjecture 1**.: _We have \({\bf E}\,\cap\,{\bf D}=\overline{\mathbb{Q}}\)._ In SS2 we shall make a functional conjecture, namely Conjecture 3, that implies Conjecture 1. We also prove that Conjecture 1 has very important consequences, as the following result shows. **Theorem 2**.: _Assume that Conjecture 1 holds. Then \(\Gamma^{(s)}(a)\) is a transcendental number for any rational number \(a>0\) and any integer \(s\geq 0\), except of course if \(s=0\) and \(a\in\mathbb{N}\)._ One of the aims of this paper is to show that combining \(\mathfrak{D}\)- and \(E\)-functions may lead to very important results in transcendental number theory. Let us recall now briefly the main known results on \(E\)-functions. Point \((i)\) in the following result is due to Andre [2] for \(E\)-functions with rational Taylor coefficients, and to Beukers [6] in the general case. Andre used this property to obtain a new proof of the Siegel-Shidlovskii Theorem, and Beukers to prove an optimal refinement of this theorem (namely, \((ii)\) below). **Theorem A**.: \((i)\) [_Andre, Beukers_] _If an \(E\)-function \(F(z)\) is such that \(F(1)=0\), then \(\frac{F(z)}{z-1}\) is an \(E\)-function._ \((ii)\) [_Beukers_] _Let \(\underline{F}(z):={}^{t}(f_{1}(z),\ldots,f_{n}(z))\) be a vector of \(E\)-functions solution of a differential system \(\underline{F}^{\prime}(z)=A(z)\underline{F}(z)\) for some matrix \(A(z)\in M_{n}(\overline{\mathbb{Q}}(z))\)._ _Let \(\xi\in\overline{\mathbb{Q}}^{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\ **Conjecture 2**.: _Let \(\mathfrak{f}(z)\) be an \(\mathfrak{I}\)-function and \(\theta\in(-\pi/2,\pi/2)\) be such that \(\mathfrak{f}_{\theta}(1)=0\). Then \(\frac{\mathfrak{f}(z)}{z-1}\) is an \(\mathfrak{I}\)-function._ In other words, the conclusion of this conjecture asserts that \(\frac{z}{1-z}\mathfrak{f}(1/z)\) is an \(\mathfrak{I}\)-function in \(1/z\); this is equivalent to \(\frac{\mathfrak{f}(1/z)}{z-1}\) being an \(\mathfrak{I}\)-function in \(1/z\) (since we have \(\frac{\mathfrak{f}(1/z)}{z-1}=O(1/z)\) unconditionally as \(|z|\to\infty\)). Following Beukers' proof [6] yields the following result (see [3, SS4.6] for a related conjecture). **Theorem 3**.: _Assume that Conjecture 2 holds._ _Let \(\mathfrak{f}(z):={}^{t}(\mathfrak{f}_{1}(z),\ldots,\mathfrak{f}_{n}(z))\) be a vector of \(\mathfrak{I}\)-functions solution of a differential system \(\mathfrak{f}^{\prime}(z)=A(z)\mathfrak{f}(z)\) for some matrix \(A(z)\in M_{n}(\overline{\mathbb{Q}}(z))\). Let \(\xi\in\overline{\mathbb{Q}}^{*}\) and \(\theta\in(\arg(\xi)-\pi/2,\arg(\xi)+\pi/2)\) ; assume that \(\xi\) is not a pole of a coefficient of \(A\), and that \(\theta\) is anti-Stokes for none of the \(\mathfrak{f}_{j}\)._ _Let \(P\in\overline{\mathbb{Q}}[X_{1},\ldots,X_{n}]\) be a homogeneous polynomial such that_ \[P(\mathfrak{f}_{1,\theta}(1/\xi),\ldots,\mathfrak{f}_{n,\theta}(1/\xi))=0.\] _Then there exists \(Q\in\overline{\mathbb{Q}}[Z,X_{1},\ldots,X_{n}]\), homogeneous in the \(X_{i}\), such that_ \[Q(z,\mathfrak{f}_{1}(z),\ldots,\mathfrak{f}_{n}(z))=0\text{ identically and }P(X_{1},\ldots,X_{n})=Q(1/\xi,X_{1},\ldots,X_{n}).\] _In particular, we have_ \[\operatorname{trdeg}_{\overline{\mathbb{Q}}}(\mathfrak{f}_{1,\theta}(1/\xi), \ldots,\mathfrak{f}_{n,\theta}(1/\xi))=\operatorname{trdeg}_{\overline{ \mathbb{Q}}(z)}(\mathfrak{f}_{1}(z),\ldots,\mathfrak{f}_{n}(z)).\] As an application of Theorem 3, we shall prove the following corollary. Note that under his weaker version of Conjecture 2, Ferguson [7, p. 171, Theorem 2] proved that Gompertz's constant is an irrational number. **Corollary 1**.: _Assume that Conjecture 2 holds. Then for any \(\alpha\in\overline{\mathbb{Q}}\), \(\alpha>0\), and any \(s\in\mathbb{Q}\setminus\mathbb{Z}_{\geq 0}\), the number \(\int_{0}^{\infty}(t+\alpha)^{s}e^{-t}dt\) is a transcendental number._ _In particular, Gompertz's constant \(\delta:=\int_{0}^{\infty}e^{-t}/(t+1)dt\) is a transcendental number._ In this text we suggest an approach towards Conjecture 1, based on the new notion of _mixed functions_ which enables one to consider \(E\)- and \(\mathfrak{I}\)-functions at the same time. In particular we shall state a conjecture about such functions, namely Conjecture 3 in SS2, which implies both Conjecture 1 and Conjecture 2. The following result is a motivation for this approach. **Proposition 1**.: _Assume that both Conjectures 1 and 2 hold. Then neither Euler's constant \(\gamma:=-\Gamma^{\prime}(1)\) nor \(\Gamma(a)\) (with \(a\in\mathbb{Q}^{+}\setminus\mathbb{N}\)) are in \(\mathbf{E}\)._ It is likely that none of these numbers is in \(\mathbf{G}\), but (as far as we know) there is no "functional" conjecture like Conjecture 3 that implies this. It is also likely that none is in \(\mathbf{D}\) as well, but we don't know if this can be deduced from Conjecture 3. The structure of this paper is as follows. In SS2 we define and study mixed functions, a combination of \(E\)- and \(\mathfrak{O}\)-functions. Then in SS3 we express any value of a \(G\)-function as a polynomial in values of \(E\)- and \(\mathfrak{O}\)-functions, thereby proving Theorem 1. We study derivatives of the \(\Gamma\) function at rational points in SS4, and prove Theorem 2 and Proposition 1. At last, SS5 is devoted to adapting Beukers' method to our setting: this approach yields Theorem 3 and Corollary 1. ## 2 Mixed functions ### Definition and properties In view of Theorem 1, it is natural to study polynomials in \(E\)- and \(\mathfrak{O}\)-functions. We can prove a Diophantine result that combines both Theorems A\((ii)\) and 3 but under a very complicated polynomial generalization of Conjecture 2. We opt here for a different approach to mixing \(E\)- and \(\mathfrak{O}\)-functions for which very interesting Diophantine consequences can be deduced from a very easy to state conjecture which is more in the spirit of Conjecture 2. We refer to SS2.3 for proofs of all properties stated in this section (including Lemma 1 and Proposition 2), except Theorem 4. **Definition 1**.: _We call mixed (arithmetic Gevrey) function any formal power series_ \[\sum_{n\in\mathbb{Z}}a_{n}z^{n}\] _such that \(\sum_{n\geq 0}a_{n}z^{n}\) is an \(E\)-function in \(z\), and \(\sum_{n\geq 1}a_{-n}z^{-n}\) is an \(\mathfrak{O}\)-function in \(1/z\)._ In other words, a mixed function is defined as a formal sum \(\Psi(z)=F(z)+\mathfrak{f}(1/z)\) where \(F\) is an \(E\)-function and \(\mathfrak{f}\) is an \(\mathfrak{O}\)-function. In particular, such a function is zero if, and only if, both \(F\) and \(\mathfrak{f}\) are constants such that \(F+\mathfrak{f}=0\); obviously, \(F\) and \(\mathfrak{f}\) are uniquely determined by \(\Psi\) upon assuming (for instance) that \(\mathfrak{f}(0)=0\). The set of mixed functions is a \(\overline{\mathbb{Q}}\)-vector space stable under multiplication by \(z^{n}\) for any \(n\in\mathbb{Z}\). Unless \(\mathfrak{f}(z)\) is a polynomial, such a function \(\Psi(z)=F(z)+\mathfrak{f}(1/z)\) is purely formal: there is no \(z\in\mathbb{C}\) such that \(\mathfrak{f}(1/z)\) is a convergent series. However, choosing a direction \(\theta\) which is not anti-Stokes for \(\mathfrak{f}\) allows one to evaluate \(\Psi_{\theta}(z)=F(z)+\mathfrak{f}_{\theta}(1/z)\) at any \(z\) in a large sector bisected by \(\theta\). Here and below, such a direction will be said _not anti-Stokes for \(\Psi\)_ and whenever we write \(\mathfrak{f}_{\theta}\) or \(\Psi_{\theta}\) we shall assume implicitly that \(\theta\) is not anti-Stokes. Definition 1 is a formal definition, but one may identify a mixed function with the holomorphic function it defines on a given large sector by means of the following lemma. **Lemma 1**.: _Let \(\Psi\) be a mixed function, and \(\theta\in\mathbb{R}\) be a non-anti-Stokes direction for \(\Psi\). Then \(\Psi_{\theta}\) is identically zero (as a holomorphic function on a large sector bisected by \(\theta\)) if, and only if, \(\Psi\) is equal to zero (as a formal power series in \(z\) and \(1/z\))._ Any mixed function \(\Psi(z)=F(z)+\mathfrak{f}(1/z)\) is solution of an \(E\)-operator. Indeed, this follows from applying [1, Theorem 6.1] twice: there exist an \(E\)-operator \(L\) such that \(L(\mathfrak{f}(1/z))=0\), and an \(E\)-operator \(M\) such that \(M(L(F(z)))=0\) (because \(L(F(z))\) is an \(E\)-function). Hence \(ML(F(z)+\mathfrak{f}(1/z))=0\) and by [1, p. 720, SS4.1], \(ML\) is an \(E\)-operator. We formulate the following conjecture, which implies both Conjecture 1 and Conjecture 2. **Conjecture 3**.: _Let \(\Psi(z)\) be an mixed function, and \(\theta\in(-\pi/2,\pi/2)\) be such that \(\Psi_{\theta}(1)=0\). Then \(\frac{\Psi(z)}{z-1}\) is an mixed function._ The conclusion of this conjecture is that \(\Psi(z)=(z-1)\Psi_{1}(z)\) for some mixed function \(\Psi_{1}\). This conclusion can be made more precise as follows; see SS2.3 for the proof. **Proposition 2**.: _Let \(\Psi(z)=F(z)+\mathfrak{f}(1/z)\) be an mixed function, and \(\theta\in(-\pi/2,\pi/2)\) be such that \(\Psi_{\theta}(1)=0\). Assume that Conjecture 3 holds for \(\Psi\) and \(\theta\)._ _Then both \(F(1)\) and \(\mathfrak{f}_{\theta}(1)\) are algebraic, and \(\frac{\mathfrak{f}(1/z)-\mathfrak{f}_{\theta}(1)}{z-1}\) is an \(\mathcal{O}\)-function._ Of course, in the conclusion of this proposition, one may assert also that \(\frac{F(z)-F(1)}{z-1}\) is an \(E\)-function using Theorem A\((i)\). Conjecture 3 already has far reaching Diophantine consequences: Conjecture 2 and Theorem 2 stated in the introduction, and also the following result that encompasses Theorem 3 in the linear case. **Theorem 4**.: _Assume that Conjecture 3 holds._ _Let \(\boldsymbol{\Psi}(z):={}^{t}(\Psi_{1}(z),\ldots,\Psi_{n}(z))\) be a vector of mixed functions solution of a differential system \(\boldsymbol{\Psi}^{\prime}(z)=A(z)\boldsymbol{\Psi}(z)\) for some matrix \(A(z)\in M_{n}(\overline{\mathbb{Q}}(z))\). Let \(\xi\in\overline{\mathbb{Q}}^{*}\) and \(\theta\in(\arg(\xi)-\pi/2,\arg(\xi)+\pi/2)\) ; assume that \(\xi\) is not a pole of a coefficient of \(A\), and that \(\theta\) is anti-Stokes for none of the \(\Psi_{j}\)._ _Let \(\lambda_{1},\ldots,\lambda_{n}\in\overline{\mathbb{Q}}\) be such that_ \[\sum_{i=1}^{n}\lambda_{i}\Psi_{i,\theta}(\xi)=0.\] _Then there exist \(L_{1},\ldots,L_{n}\in\overline{\mathbb{Q}}[z]\) such that_ \[\sum_{i=1}^{n}L_{i}(z)\Psi_{i}(z)=0\text{ identically and }L_{i}(\xi)=\lambda_{i} \text{ for any }i.\] _In particular, we have_ \[\operatorname{rk}_{\overline{\mathbb{Q}}}(\Psi_{1,\theta}(\xi),\ldots,\Psi_{n,\theta}(\xi))=\operatorname{rk}_{\overline{\mathbb{Q}}(z)}(\Psi_{1}(z), \ldots,\Psi_{n}(z)).\] The proof of Theorem 4 follows exactly the linear part of the proof of Theorem 3 (see SS5.1), which is based on [6, SS3]. The only difference is that \(\mathfrak{O}\)-functions have to be replaced with mixed functions, and Conjecture 2 with Conjecture 3. In particular Proposition 4 stated in SS5.1 remains valid with these modifications. However a product of mixed functions is not, in general, a mixed function. Therefore the end of [6, SS3] does not adapt to mixed functions, and there is no hope to obtain in this way a result on the transcendence degree of a field generated by values of mixed functions. As an application of Theorem 4, we can consider the mixed functions \(1,e^{\beta z}\) and \(\mathfrak{f}(1/z):=\sum_{n=0}^{\infty}(-1)^{n}n!z^{-n}\), where \(\beta\) is a fixed non-zero algebraic number. These three functions are linearly independent over \(\mathbb{C}(z)\) and form a solution of a differential system with only \(0\) for singularity (because \((\mathfrak{f}(1/z))^{\prime}=(1+1/z)f(1/z)-1\)), hence for any \(\alpha\in\overline{\mathbb{Q}}\), \(\alpha>0\) and any \(\varrho\in\overline{\mathbb{Q}}^{*}\), the numbers \(1,e^{\varrho},\mathfrak{f}_{0}(1/\alpha):=\int_{0}^{\infty}e^{-t}/(1+\alpha t )dt\) are \(\overline{\mathbb{Q}}\)-linearly independent (for a fixed \(\alpha\), take \(\beta=\varrho/\alpha\)). ### Values of mixed functions We denote by \(\mathbf{M}_{G}\) the set of values \(\Psi_{\theta}(1)\), where \(\Psi\) is a mixed function and \(\theta=0\) if it is not anti-Stokes, \(\theta>0\) is sufficiently small otherwise. This set is obviously equal to \(\mathbf{E}+\mathbf{D}\). **Proposition 3**.: _For every integer \(s\geq 0\) and every \(a\in\mathbb{Q}^{+}\), \(a\neq 0\), we have \(\Gamma^{(s)}(a)\in e^{-1}\mathbf{M}_{G}\)._ This results follows immediately from Eq. (4.4) below (see SS4.2), written in the form \[\Gamma^{(s)}(a)=e^{-1}\big{(}(-1)^{s}es!E_{a,s+1}(-1)+\mathfrak{f}_{a,s+1;0}(1 )\big{)},\] because \(e^{z}E_{a,s+1}(-z)\) is an \(E\)-function and \(\mathfrak{f}_{a,s+1;0}(1)\) is the \(1\)-summation in the direction \(0\) of an \(\mathfrak{D}\)-function. It would be interesting to know if \(\Gamma^{(s)}(a)\) belongs to \(\mathbf{M}_{G}\). We did not succeed in proving it does, and we believe it does not. Indeed, for instance if we want to prove that \(\gamma\in\mathbf{M}_{G}\), a natural strategy would be to construct an \(E\)-function \(F(z)\) with asymptotic expansion of the form \(\gamma+\log(z)+\mathfrak{f}(1/z)\) in a large sector, and then to evaluate at \(z=1\). However this strategy cannot work since there is no such \(E\)-function (see the footnote in the proof of Lemma 1 in SS2.3). ### Proofs concerning mixed functions To begin with, let us take Proposition 2 for granted and prove that Conjecture 3 implies both Conjecture 1 and Conjecture 2. Concerning Conjecture 2 it is clear. To prove that it implies Conjecture 1, let \(\xi\in\mathbf{D}\), i.e. \(\xi=\mathfrak{f}_{\theta}(1)\) is the \(1\)-summation of an \(\mathfrak{D}\)-function \(\mathfrak{f}(z)\) in the direction \(\theta=0\) if it is not anti-Stokes, and \(\theta>0\) close to \(0\) otherwise. Assume that \(\xi\) is also in \(\mathbf{E}\): we have \(\xi=F(1)\) for some \(E\)-function \(F(z)\). Therefore, \(\Psi(z)=F(z)-\mathfrak{f}(1/z)\) is a mixed function such that \(\Psi_{\theta}(1)=0\). By Conjecture 3 and Proposition 2, we have \(\xi=\mathfrak{f}_{\theta}(1)\in\overline{\mathbb{Q}}\). This concludes the proof that Conjecture 3 implies Conjecture 1. Let us prove Proposition 2 now. Assuming that Conjecture 3 holds for \(\Psi\) and \(\theta\), there exists a mixed function \(\Psi_{1}(z)=F_{1}(z)+\mathfrak{f}_{1}(1/z)\) such that \(\Psi(z)=(z-1)\Psi_{1}(z)\). We have \[F(z)-(z-1)F_{1}(z)+\mathfrak{f}(1/z)-(z-1)\mathfrak{f}_{1}(1/z)=0 \tag{2.1}\] as a formal power series in \(z\) and \(1/z\). Now notice that \(z-1=z(1-\frac{1}{z})\), and that we may assume \(\mathfrak{f}\) and \(\mathfrak{f}_{1}\) to have zero constant terms. Denote by \(\alpha\) the constant term of \(\mathfrak{f}(1/z)-z(1-\frac{1}{z})\mathfrak{f}_{1}(1/z)\). Then we have \[F(z)-(z-1)F_{1}(z)+\alpha+\mathfrak{f}_{2}(1/z)=0\] for some \(\mathfrak{I}\)-function \(\mathfrak{f}_{2}\) without constant term, so that \(\mathfrak{f}_{2}=0\), \(F(z)=(z-1)F_{1}(z)-\alpha\) and \(F(1)=-\alpha\in\overline{\mathbb{Q}}\). This implies \(\mathfrak{f}_{\theta}(1)=\alpha\), and \(\frac{\mathfrak{f}(1/z)-\mathfrak{f}_{\theta}(1)}{z-1}=\mathfrak{f}_{1}(1/z)\) is an \(\mathfrak{I}\)-function since \(\mathfrak{f}_{2}=0\). This concludes the proof of Proposition 2. At last, let us prove Lemma 1. We write \(\Psi(z)=F(z)+\mathfrak{f}(1/z)\) and assume that \(\Psi_{\theta}\) is identically zero. Modifying \(\theta\) slightly if necessary, we may assume that the asymptotic expansion \(-\mathfrak{f}(1/z)\) of \(F(z)\) in a large sector bisected by \(\theta\) is given explicitly by [9, Theorem 5] applied to \(F(z)-F(0)\); recall that such an asymptotic expansion is unique (see [9]). As in [9] we let \(g(z)=\sum_{n=1}^{\infty}a_{n}z^{-n-1}\) where the coefficients \(a_{n}\) are given by \(F(z)-F(0)=\sum_{n=1}^{\infty}\frac{a_{n}}{n!}z^{n}\). For any \(\sigma\in\mathbb{C}\setminus\{0\}\) there is no contribution in \(e^{\sigma z}\) in the asymptotic expansion of \(F(z)\), so that \(g(z)\) is holomorphic at \(\sigma\). At \(\sigma=0\), the local expansion of \(g\) is of the form \(g(z)=h_{1}(z)+h_{2}(z)\log(z)\) with \(G\)-functions \(h_{1}\) and \(h_{2}\), and the coefficients of \(h_{2}\) are related to those of \(\mathfrak{f}\); however we shall not use this special form (1). Now recall that \(g(z)=G(1/z)/z\) where \(G\) is a \(G\)-function; then \(G\) is entire and has moderate growth at infinity (because \(\infty\) is a regular singularity of \(G\)), so it is a polynomial due to Liouville's theorem. This means that \(F(z)\) is a polynomial in \(z\). Recall that asymptotic expansions in large sectors are unique. Therefore both \(F\) and \(\mathfrak{f}\) are constant functions, and \(F+\mathfrak{f}=0\). This concludes the proof of Lemma 1. Footnote 1: Actually we are proving that the asymptotic expansion of a non-polynomial \(E\)-function is never a \(\mathbb{C}\)-linear combination of functions \(z^{\alpha}\log^{k}(z)\mathfrak{f}(1/z)\) with \(\alpha\in\mathbb{Q}\), \(k\in\mathbb{N}\) and \(\mathfrak{I}\)-functions \(\mathfrak{f}\): some exponentials have to appear. ## 3 Proof of Theorem 1: values of \(G\)-functions In this section we prove Theorem 1. Let us begin with an example, starting with the relation proved in [15, Proposition 1] for \(z\in\mathbb{C}\setminus(-\infty,0]\): \[\gamma+\log(z)=zE_{1,2}(-z)-e^{-z}\mathfrak{f}_{1,2;0}(1/z) \tag{3.1}\] where \(E_{1,2}\) is an \(E\)-function, and \(\mathfrak{f}_{1,2}\) is an \(\mathfrak{I}\)-function, both defined below in SS4.2. Apply Eq. (3.1) at both \(z\) and \(2z\), and then substract one equation from the other. This provides a relation of the form \[\log(2)=F(z)+e^{-z}\mathfrak{f}_{1;0}(1/z)+e^{-2z}\mathfrak{f}_{2;0}(1/z) \tag{3.2}\] valid in a large sector bisected by \(0\), with an \(E\)-function \(F\) and \(\mathfrak{I}\)-functions \(\mathfrak{f}_{1}\) and \(\mathfrak{f}_{2}\). Choosing arbitrarily a positive real algebraic value of \(z\) yields an explicit expression of \(\log(2)\in\mathbf{G}\) as a multivariate polynomial in elements of \(\mathbf{E}\) and \(\mathbf{D}.\) But this example shows also that a polynomial in \(E\)- and \(\mathfrak{I}\)-functions may be constant eventhough there does not seem to be any obvious reason. In particular, the functions \(1\), \(F(z)\), \(e^{-z}\mathfrak{f}_{1;0}(1/z)\), and \(e^{-2z}\mathfrak{f}_{2;0}(1/z)\) are linearly dependent over \(\mathbb{C}\). However we see no reason why they would be linearly dependent over \(\overline{\mathbb{Q}}\). This could be a major drawback to combine in \(E\)- and \(\mathfrak{I}\)-functions, since functions that are linearly dependent over \(\mathbb{C}\) but not over \(\overline{\mathbb{Q}}\) can not belong to any Picard-Vessiot extension over \(\overline{\mathbb{Q}}\). Let us come now to the proof of Theorem 1. We first prove the second part, which runs as follows (it is reproduced from the unpublished note [16]). From the stability of the class of \(E\)-functions by \(\frac{d}{dz}\) and \(\int_{0}^{z}\), we deduce that the set of convergent integrals \(\int_{0}^{\infty}F(x)dx\) of \(E\)-functions and the set of finite limits of \(E\)-functions along some direction as \(z\to\infty\) are the same. Theorem 2\((iii)\) in [9] implies that if an \(E\)-function has a finite limit as \(z\to\infty\) along some direction, then this limit must be in \(\mathbf{G}\). Conversely, let \(\beta\in\mathbf{G}\). By Theorem 1 in [8], there exists a \(G\)-function \(G(z)=\sum_{n=0}^{\infty}a_{n}z^{n}\) of radius of convergence \(\geq 2\) (say) such that \(G(1)=\beta\). Let \(F(z)=\sum_{n=0}^{\infty}\frac{a_{n}}{n!}z^{n}\) be the associated \(E\)-function. Then for any \(z\) such that \(\operatorname{Re}(z)>\frac{1}{2}\), we have \[\frac{1}{z}G\Big{(}\frac{1}{z}\Big{)}=\int_{0}^{+\infty}e^{-xz}F(x)dx.\] Hence, \(\beta=\int_{0}^{+\infty}e^{-x}F(x)dx\) where \(e^{-z}F(z)\) is an \(E\)-function. We shall now prove the first part of Theorem 1. In fact, we shall prove a slightly more general result, namely Theorem 5 below. We first recall a few notations. Denote by \(\mathbf{S}\) the \(\mathbf{G}\)-module generated by all derivatives \(\Gamma^{(s)}(a)\) (with \(s\in\mathbb{N}\) and \(a\in\mathbb{Q}\setminus\mathbb{Z}_{\leq 0}\)), and by \(\mathbf{V}\) the \(\mathbf{S}\)-module generated by \(\mathbf{E}\). Recall that \(\mathbf{G}\), \(\mathbf{S}\) and \(\mathbf{V}\) are rings. Conjecturally, \(\mathbf{G}=\mathcal{P}[1/\pi]\) and \(\mathbf{V}=\mathcal{P}_{e}[1/\pi]\) where \(\mathcal{P}\) and \(\mathcal{P}_{e}\) are the ring of periods and the ring of exponential periods over \(\overline{\mathbb{Q}}\) respectively (see [8, SS2.2] and [10, SS4.3]). We have proved in [10, Theorem 3] that \(\mathbf{V}\) is the \(\mathbf{S}\)-module generated by the numbers \(e^{\rho}\chi\), with \(\rho\in\overline{\mathbb{Q}}\) and \(\chi\in\mathbf{D}\). **Theorem 5**.: _The ring \(\mathbf{V}\) is the ring generated by \(\mathbf{E}\) and \(\mathbf{D}.\) In particular, all values of \(G\)-functions belong to the ring generated by \(\mathbf{E}\) and \(\mathbf{D}.\)_ In other words, the elements of \(\mathbf{V}\) are exactly the sums of products \(ab\) with \(a\in\mathbf{E}\) and \(b\in\mathbf{D}.\) Proof of Theorem 5.: We already know that \(\mathbf{V}\) is a ring, and that it contains \(\mathbf{E}\) and \(\mathbf{D}.\) To prove the other inclusion, denote by \(U\) the ring generated by \(\mathbf{E}\) and \(\mathbf{D}.\) Using Proposition 3 proved in SS2.2 and the functional equation of \(\Gamma\), we have \(\Gamma^{(s)}(a)\in U\) for any \(s\in\mathbb{N}\) and any \(a\in\mathbb{Q}\setminus\mathbb{Z}_{\leq 0}\). Therefore for proving that \(\mathbf{V}\subset U\), it is enough to prove that \(\mathbf{G}\subset U\). Let \(\xi\in\mathbf{G}\). Using [11, Theorem 3] there exists an \(E\)-function \(F(z)\) such that for any for any \(\theta\in[-\pi,\pi)\) outside a finite set, \(\xi\) is a coefficient of the asymptotic expansion of \(F(z)\) in a large sector bisected by \(\theta\). As the proof of [11, Theorem 3] shows, we can assume that \(\xi\) is the coefficient of \(e^{z}\) in this expansion. Denote by \(L\) an \(E\)-operator of which \(F\) is a solution, and by \(\mu\) its order. Andre has proved [1] that there exists a basis \((H_{1}(z),\ldots,H_{\mu}(z))\) of formal solutions of \(L\) at infinity such that for any \(j\), \(e^{-\rho_{j}z}H_{j}(z)\in\mathrm{NGA}\{1/z\}_{1}^{\overline{\mathbb{Q}}}\) for some algebraic number \(\rho_{j}\). We recall that elements of \(\mathrm{NGA}\{1/z\}_{1}^{\overline{\mathbb{Q}}}\) are arithmetic Nilsson-Gevrey series of order \(1\) with algebraic coefficients, i.e. \(\overline{\mathbb{Q}}\)-linear combinations of functions \(z^{k}(\log z)^{\ell}\mathfrak{f}(1/z)\) with \(k\in\mathbb{Q}\), \(\ell\in\mathbb{N}\) and \(\mathfrak{D}\)-functions \(\mathfrak{f}\). Expanding in this basis the asymptotic expansion of \(F(z)\) in a large sector bisected by \(\theta\) (denoted by \(\widetilde{F}\)), there exist complex numbers \(\kappa_{1}\),..., \(\kappa_{d}\) such that \(\widetilde{F}(z)=\kappa_{1}H_{1}(z)+\ldots+\kappa_{\mu}H_{\mu}(z)\). Then we have \(\xi=\kappa_{1}c_{1}+\ldots+\kappa_{\mu}c_{\mu}\), where \(c_{j}\) is the coefficient of \(e^{z}\) in \(H_{j}(z)\in e^{\rho_{j}z}\mathrm{NGA}\{1/z\}_{1}^{\overline{\mathbb{Q}}}\). We have \(c_{j}=0\) if \(\rho_{j}\neq 1\), and otherwise \(c_{j}\) is the constant coefficient of \(e^{-z}H_{j}(z)\): in both cases \(c_{j}\) is an algebraic number. Therefore to conclude the proof that \(\xi\in U\), it is enough to prove that \(\kappa_{1},\ldots,\kappa_{\mu}\in U\). For simplicity let us prove that \(\kappa_{1}\in U\). Given solutions \(F_{1},\ldots,F_{\mu}\) of \(L\), we denote by \(W(F_{1},\ldots,F_{\mu})\) the corresponding wronskian matrix. Then for any \(z\) in a large sector bisected by \(\theta\) we have \[\kappa_{1}=\frac{\det W(F(z),H_{2,\theta}(z),\ldots,H_{\mu,\theta}(z))}{\det W (H_{1,\theta}(z),\ldots,H_{\mu,\theta}(z))}\] where \(H_{j,\theta}(z)\) is the \(1\)-summation of \(H_{j}(z)\) in this sector. The determinant in the denominator belongs to \(e^{az}\mathrm{NGA}\{1/z\}_{1}^{\overline{\mathbb{Q}}}\) with \(a=\rho_{1}+\ldots+\rho_{\mu}\in\overline{\mathbb{Q}}\). As the proof of [10, Theorem 6] shows, there exist \(b,c\in\overline{\mathbb{Q}}\), with \(c\neq 0\), such that \[\det W(H_{1,\theta}(z),\ldots,H_{\mu,\theta}(z))=cz^{b}e^{az}.\] We take \(z=1\), and choose \(\theta=0\) if it is not anti-Stokes for \(L\) (and \(\theta>0\) sufficiently small otherwise). Then we have \[\kappa_{1}=c^{-1}e^{-a}\Big{(}\det W(F(z),H_{2,\theta}(z),\ldots,H_{\mu,\theta }(z))\Big{)}_{|z=1}\in U.\] This concludes the proof. _Remark 1_.: The second part of Theorem 1 suggests the following comments. It would be interesting to have a better understanding (in terms of \(\mathbf{E}\), \(\mathbf{G}\) and \(\mathbf{D}\)) of the set of convergent integrals \(\int_{0}^{\infty}R(x)F(x)dx\) where \(R\) is a rational function in \(\overline{\mathbb{Q}}(x)\) and \(F\) is an \(E\)-function, which are thus in \(\mathbf{G}\) when \(R=1\) (see [16] for related considerations). Indeed, classical examples of such integrals are \(\int_{0}^{+\infty}\frac{\cos(x)}{1+x^{2}}dx=\pi/(2e)\in\pi\mathbf{E}\), Euler's constant \(\int_{0}^{+\infty}\frac{1-(1+x)e^{-x}}{x(1+x)}dx=\gamma\in\mathbf{E}+e^{-1} \mathbf{D}\) (using Eq. (3.1) and [20, p. 248, Example 2]) and Gompertz constant \(\delta:=\int_{0}^{+\infty}\frac{e^{-x}}{1+x}dx\in\mathbf{D}\). A large variety of behaviors can thus be expected here. For instance, using various explicit formulas in [13, Chapters 6.5-6.7], it can be proved that \[\int_{0}^{+\infty}R(x)J_{0}(x)dx\in{\bf G}+{\bf E}+\gamma{\bf E}+\log(\overline{ \mathbb{Q}}^{*}){\bf E}\] for any \(R(x)\in\overline{\mathbb{Q}}(x)\) without poles on \([0,+\infty)\), where \(J_{0}(x)=\sum_{n=0}^{\infty}(ix/2)^{2n}/n!^{2}\) is a Bessel function. A second class of examples is when \(R(x)F(x)\) is an even function without poles on \([0,+\infty)\) and such that \(\lim_{|x|\to\infty,\operatorname{Im}(x)\geq 0}x^{2}R(x)F(x)=0\). Then by the residue theorem, \[\int_{0}^{+\infty}R(x)F(x)dx=i\pi\sum_{\rho,\,\operatorname{Im}(\rho)>0} \operatorname{Res}_{x=\rho}\bigl{(}R(x)F(x)\bigr{)}\in\pi{\bf E}\] where the summation is over the poles of \(R\) in the upper half plane. ## 4 Derivatives of the \(\Gamma\) function at rational points In this section we prove Theorem 2 and Proposition 1 stated in the introduction, dealing with \(\Gamma^{(s)}(a)\). To begin with, we define \(E\)-functions \(E_{a,s}(z)\) in SS4.1 and prove a linear independence result concerning these functions. Then we prove in SS4.2 a formula for \(\Gamma^{(s)}(a)\), namely Eq. (4.4), involving \(E_{a,s+1}(-1)\) and the \(1\)-summation of an \(\mathfrak{D}\)-function. This enables us to prove Theorem 2 in SS4.3 and Proposition 1 in SS4.4. ### Linear independence of a family of \(E\)-functions To study derivatives of the \(\Gamma\) function at rational points, we need the following lemma. For \(s\geq 1\) and \(a\in\mathbb{Q}\setminus\mathbb{Z}_{\leq 0}\), we consider the \(E\)-function \(E_{a,s}(z):=\sum_{n=0}^{\infty}\frac{z^{n}}{n!(n+a)^{s}}\). **Lemma 2**.: \((i)\) _For any \(a\in\mathbb{Q}\setminus\mathbb{Z}\) and any \(s\geq 1\), the functions_ \[1,e^{z},e^{z}E_{a,1}(-z),e^{z}E_{a,2}(-z),\ldots,e^{z}E_{a,s}(-z)\] _are linearly independent over \(\mathbb{C}(z)\)._ \((ii)\) _For any \(a\in\mathbb{N}^{*}\) and any \(s\geq 2\), the functions_ \[1,e^{z},e^{z}E_{a,2}(-z),\ldots,e^{z}E_{a,s}(-z)\] _are linearly independent over \(\mathbb{C}(z)\)._ _Remark 2_.: Part \((i)\) of the lemma is false if \(a\in\mathbb{N}^{*}\) because \(1,e^{z},e^{z}E_{a,1}(-z)\) are \(\mathbb{Q}(z)\)-linearly dependent in this case (see the proof of Part \((ii)\) below). Proof.: \((i)\) For simplicity, we set \(\psi_{s}(z):=e^{z}E_{a,s}(-z)\). We proceed by induction on \(s\geq 1\). Let us first prove the case \(s=1\). The derivative of \(\psi_{1}(z)\) is \((1+(z-a)\psi_{1}(z))/z\). Let us assume the existence of a relation \(\psi_{1}(z)=u(z)e^{z}+v(z)\) with \(u,v\in\mathbb{C}(z)\) (a putative relation \(U(z)+V(z)e^{z}+W(z)\psi_{1}(z)=0\) forces \(W\neq 0\) because \(e^{z}\notin\mathbb{C}(z)\)). Then after differentiation of both sides, we end up with \[\frac{1+(z-a)\psi_{1}(z)}{z}=\big{(}u(z)+u^{\prime}(z)\big{)}e^{z}+v^{\prime}( z).\] Hence, \[\frac{1+(z-a)\big{(}u(z)e^{z}+v(z)\big{)}}{z}=\big{(}u(z)+u^{\prime}(z)\big{)} e^{z}+v^{\prime}(z).\] Since \(e^{z}\notin\mathbb{C}(z)\), the function \(v(z)\) is a rational solution of the differential equation \(zv^{\prime}(z)=(z-a)v(z)+1\): \(v(z)\) cannot be identically \(0\), and it cannot be a polynomial (the degrees do not match on both sides). It must then have a pole at some point \(\omega\), of order \(d\geq 1\) say. We must have \(\omega=0\) because otherwise the order of the pole at \(\omega\) of \(zv^{\prime}(z)\) is \(d+1\) while the order of the pole of \((z-a)v(z)+1\) is at most \(d\). Writing \(v(z)=\sum_{n\geq-d}v_{n}z^{n}\) with \(v_{-d}\neq 0\) and comparing the term in \(z^{-d}\) of \(zv^{\prime}(z)\) and \((z-a)v(z)+1\), we obtain that \(d=a\). This forces \(a\) to be an integer \(\geq 1\), which is excluded. Hence, \(1,e^{z},e^{z}E_{a,1}(-z)\) are \(\mathbb{C}(z)\)-linearly independent. Let us now assume that the case \(s-1\geq 1\) holds. Let us assume the existence of a relation over \(\mathbb{C}(z)\) \[\psi_{s}(z)=v(z)+u_{0}(z)e^{z}+\sum_{j=1}^{s-1}u_{j}(z)\psi_{j}(z). \tag{4.1}\] (A putative relation \(V(z)+U_{0}(z)e^{z}+\sum_{j=1}^{s}U_{j}(z)\psi_{j}(z)=0\) forces \(U_{s}\neq 0\) by the induction hypothesis). Differentiating (4.1) and because \(\psi^{\prime}_{j}(z)=(1-\frac{a}{z})\psi_{j}(z)+\frac{1}{z}\psi_{j-1}(z)\) for all \(j\geq 1\) (where we have let \(\psi_{0}(z)=1\)), we have \[A(z)\psi_{s}(z)+\frac{1}{z}\psi_{s-1}(z)=v^{\prime}(z)+\big{(}u_ {0}(z)+u^{\prime}_{0}(z)\big{)}e^{z}+\sum_{j=1}^{s-1}u^{\prime}_{j}(z)\psi_{j} (z)\\ +\sum_{j=1}^{s-1}u_{j}(z)\big{(}A(z)\psi_{j}(z)+\frac{1}{z}\psi_{ j-1}(z)\big{)}, \tag{4.2}\] where \(A(z):=1-a/z\). Substituting the right-hand side of (4.1) for \(\psi_{s}(z)\) on the left-hand side of (4.2), we then deduce that \[v^{\prime}(z)-A(z)v(z)+\big{(}u^{\prime}_{0}(z)+(1-A(z))u_{0}(z) \big{)}e^{z}\\ +\frac{1}{z}(z-a)u_{1}(z)\psi_{1}(z)+\sum_{j=1}^{s-1}u^{\prime}_{ j}(z)\psi_{j}(z)+\frac{1}{z}\sum_{j=1}^{s-1}u_{j}(z)\psi_{j-1}(z)-\frac{1}{z} \psi_{s-1}(z)=0.\] This is a non-trivial \(\mathbb{C}(z)\)-linear relation between \(1,e^{z},\psi_{1}(z),\psi_{2}(z),\ldots,\psi_{s-1}(z)\) because the coefficient of \(\psi_{s-1}(z)\) is \(u^{\prime}_{s-1}(z)-1/z\) and it is not identically \(0\) because \(u^{\prime}_{s-1}(z)\) cannot have a pole of order \(1\). But by the induction hypothesis, we know that such a relation is impossible. \((ii)\) The proof can be done by induction on \(s\geq 2\) similarily. In the case \(s=2\), assume the existence of a relation \(\psi_{2}(z)=u(z)e^{z}+v(z)\) with \(u(z),v(z)\in\mathbb{C}(z)\). By differentiation, we obtain \[\Big{(}1-\frac{a}{z}\Big{)}\psi_{2}(z)=-\frac{1}{z}\psi_{1}(z)+\big{(}u(z)+u^{ \prime}(z)\big{)}e^{z}+v^{\prime}(z).\] By induction on \(a\geq 1\), we have \(\psi_{1}(z)=(a-1)!e^{z}/z^{a}+w(z)\) for some \(w(z)\in\mathbb{Q}(z)\). Hence, we have \[\Big{(}1-\frac{a}{z}\Big{)}u(z)=-\Big{(}\frac{(a-1)!}{z^{a+1}}+1\Big{)}u(z)+u^ {\prime}(z)\] which is not possible. Let us now assume that the case \(s-1\geq 2\) holds, as well as the existence of a relation over \(\mathbb{C}(z)\) \[\psi_{s}(z)=v(z)+u_{0}(z)e^{z}+\sum_{j=2}^{s-1}u_{j}(z)\psi_{j}(z). \tag{4.3}\] We proceed exactly as above by differentiation of both sides of (4.3). Using the relation \(\psi^{\prime}_{j}(z)=(1-\frac{a}{z})\psi_{j}(z)+\frac{1}{z}\psi_{j-1}(z)\) for all \(j\geq 2\) and the fact that \(\psi_{1}(z)=(a-1)!e^{z}/z^{a}+w(z)\), we obtain a relation \(\widetilde{v}(z)+\widetilde{u}_{0}(z)e^{z}+\sum_{j=2}^{s-1}\widetilde{u}_{j} (z)\psi_{j}(z)=0\) where \(\widetilde{u}_{s-1}(z)=u^{\prime}_{s-1}(z)-1/z\) cannot be identically \(0\). The induction hypothesis rules out the existence of such a relation. ### A formula for \(\Gamma^{(s)}(a)\) Let \(z>0\) and \(a\in\mathbb{Q}^{+}\), \(a\neq 0\). We have \[\Gamma^{(s)}(a)=\int_{0}^{\infty}t^{a-1}\log(t)^{s}e^{-t}dt=\int_{0}^{z}t^{a- 1}\log(t)^{s}e^{-t}dt+\int_{z}^{\infty}t^{a-1}\log(t)^{s}e^{-t}dt.\] On the one hand, \[\int_{0}^{z}t^{a-1}\log(t)^{s}e^{-t}dt =\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\int_{0}^{z}t^{a+n-1}\log( t)^{s}dt\] \[=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\sum_{k=0}^{s}(-1)^{k} \frac{s!}{(s-k)!}\frac{z^{n+a}\log(z)^{s-k}}{(n+a)^{k+1}}\] \[=\sum_{k=0}^{s}\frac{(-1)^{k}s!}{(s-k)!}z^{a}\log(z)^{s-k}E_{a,k+ 1}(-z);\] recall that \(E_{a,j}(z)=\sum_{n=0}^{\infty}\frac{z^{n}}{n!(n+a)^{j}}\). On the other hand, \[\int_{z}^{\infty}t^{a-1}\log(t)^{s}e^{-t}dt =e^{-z}\int_{0}^{\infty}(t+z)^{a-1}\log(t+z)^{s}e^{-t}dt\] \[=z^{a-1}e^{-z}\sum_{k=0}^{s}\binom{s}{k}\log(z)^{s-k}\int_{0}^{ \infty}(1+t/z)^{a-1}\log(1+t/z)^{k}e^{-t}dt.\] Now \(z>0\) so that \[\mathfrak{f}_{a,k+1;0}(z):=\int_{0}^{\infty}(1+tz)^{a-1}\log(1+tz)^{k}e^{-t}dt =\frac{1}{z}\int_{0}^{\infty}(1+x)^{a-1}\log(1+x)^{k}e^{-x/z}dx\] is the \(1\)-summation at the origin in the direction \(0\) of the \(\mathfrak{I}\)-function \[\sum_{n=0}^{\infty}n!u_{a,k,n}z^{n},\] where the sequence \((u_{a,k,n})_{n\geq 0}\in\mathbb{Q}^{\mathbb{N}}\) is defined by the expansion of the \(G\)-function: \[(1+x)^{a-1}\log(1+x)^{k}=\sum_{n=0}^{\infty}u_{a,k,n}x^{n}.\] Note that if \(k=0\) and \(a\in\mathbb{N}^{*}\), then \(u_{a,k,n}=0\) for any \(n\geq a\), and \(\mathfrak{f}_{a,k+1;0}(1/z)\) is a polynomial in \(1/z\). Therefore, we have for any \(z>0\): \[\Gamma^{(s)}(a)=\sum_{k=0}^{s}\frac{(-1)^{k}s!}{(s-k)!}z^{a}\log(z)^{s-k}E_{a,k+1}(-z)+z^{a-1}e^{-z}\sum_{k=0}^{s}\binom{s}{k}\log(z)^{s-k}\mathfrak{f}_{a, k+1;0}(1/z).\] In particular, for \(z=1\), this relation reads \[\Gamma^{(s)}(a)=(-1)^{s}s!E_{a,s+1}(-1)+e^{-1}\mathfrak{f}_{a,s+1;0}(1). \tag{4.4}\] Since \(\gamma=-\Gamma^{\prime}(1)\) we obtain as a special case the formula \[\gamma=E_{1,2}(-1)-e^{-1}\mathfrak{f}_{1,2;0}(1), \tag{4.5}\] which is also a special case of Eq. (3.1) proved in [15]. ### Proof of Theorem 2 Let us assume that \(\Gamma^{(s)}(a)\in\overline{\mathbb{Q}}\) for some \(a\in\mathbb{Q}^{+}\setminus\mathbb{N}\) and \(s\geq 0\). Then \(e^{z}\Gamma^{(s)}(a)+(-1)^{s+1}s!e^{z}E_{a,s+1}(-z)\) is an \(E\)-function. The relation \(e\Gamma^{(s)}(a)+(-1)^{s+1}s!eE_{a,s+1}(-1)=\mathfrak{f}_{a,s+1;0}(1)\) proved at the end of SS4.2 shows that \(\alpha:=e\Gamma^{(s)}(a)+(-1)^{s+1}s!eE_{a,s+1}(-1)\in\mathbf{E}\cap\mathbf{D}\). Hence \(\alpha\) is in \(\overline{\mathbb{Q}}\) by Conjecture 1 and we have a non-trivial \(\overline{\mathbb{Q}}\)-linear relation between \(1,e\) and \(eE_{a,s+1}(-1)\): we claim that this is not possible. Indeed, consider the vector \[Y(z):={}^{t}(1,e^{z},e^{z}E_{a,1}(-z),\dots,e^{z}E_{a,s+1}(-z)).\] It is solution of a differential system \(Y^{\prime}(z)=M(z)Y(z)\) where \(0\) is the only pole of \(M(z)\in M_{s+3}(\overline{\mathbb{Q}}(z))\) (see the computations in the proof of Lemma 2 above). Since the components of \(Y(z)\) are \(\overline{\mathbb{Q}}(z)\)-linearly independent by Lemma \(2(i)\), we deduce from Beukers' [6, Corollary 1.4] that \[1,\,e,\,eE_{a,1}(-1),\,\dots,\,eE_{a,s+1}(-1)\] are \(\overline{\mathbb{Q}}\)-linearly independent, and in particular that \(1,e\) and \(eE_{a,s+1}(-1)\) are \(\overline{\mathbb{Q}}\)-linearly independent. This concludes the proof if \(a\in\mathbb{Q}^{+}\setminus\mathbb{N}\). Let us assume now that \(\Gamma^{(s)}(a)\in\overline{\mathbb{Q}}\) for some \(a\in\mathbb{N}^{*}\) and \(s\geq 1\). Then \(e^{z}\Gamma^{(s)}(a)+(-1)^{s+1}s!e^{z}E_{a,s+1}(-z)\) is an \(E\)-function. The relation \(\Gamma^{(s)}(a)+(-1)^{s+1}s!E_{a,s+1}(-1)=e^{-1}\mathfrak{f}_{a,s+1;0}(1)\) shows that \(\alpha:=e\Gamma^{(s)}(a)+(-1)^{s+1}s!eE_{a,s+1}(-1)\in\mathbf{E}\cap\mathbf{D}\). Hence \(\alpha\) is in \(\overline{\mathbb{Q}}\) by Conjecture 1 and we have a non-trivial \(\overline{\mathbb{Q}}\)-linear relation between \(1,e\) and \(eE_{a,s+1}(-1)\): we claim that this is not possible. Indeed, consider the vector \(Y(z):={}^{t}(1,e^{z},e^{z}E_{a,2}(-z),\dots,\)\(e^{z}E_{a,s+1}(-z))\): it is solution of a differential system \(Y^{\prime}(z)=M(z)Y(z)\) where \(0\) is the only pole of \(M(z)\in M_{s+2}(\overline{\mathbb{Q}}(z))\). Since the components of \(Y(z)\) are \(\overline{\mathbb{Q}}(z)\)-linearly independent by Lemma \(2(ii)\), we deduce again from Beukers' theorem that \[1,\,e,\,eE_{a,2}(-1),\,\dots,\,eE_{a,s+1}(-1)\] are \(\overline{\mathbb{Q}}\)-linearly independent, and in particular that \(1,e\) and \(eE_{a,s+1}(-1)\) are \(\overline{\mathbb{Q}}\)-linearly independent. This concludes the proof of Theorem 2. ### Proof of Proposition 1 Recall that Eq. (4.5) proved in SS4.2 reads \(eE_{1,2}(-1)-e\gamma=\mathfrak{f}_{1,2;0}(1).\) Assuming that \(\gamma\in\mathbf{E}\), the left-hand side is in \(\mathbf{E}\) while the right-hand side is in \(\mathbf{D}.\) Hence both sides are in \(\overline{\mathbb{Q}}\) by Conjecture 1. Note that, by integration by parts, \[\mathfrak{f}_{1,2;0}(1)=\int_{0}^{\infty}\log(1+t)e^{-t}dt=\int_{0}^{\infty} \frac{e^{-t}}{1+t}dt\] is Gompertz's constant. Hence, by Corollary 1 (which holds under Conjecture 2), the number \(\mathfrak{f}_{1,2;0}(1)\) is not in \(\overline{\mathbb{Q}}\). Consequently, \(\gamma\notin\mathbf{E}\). Similarly, Eq. (4.4) with \(a\in\mathbb{Q}\setminus\mathbb{Z}\) and \(s=0\) reads \(e\Gamma(a)-eE_{a,1}(-1)=\mathfrak{f}_{a,1;0}(1)\). Assuming that \(\Gamma(a)\in\mathbf{E}\), the left-hand side is in \(\mathbf{E}\) while the right-hand side is in \(\mathbf{D}.\) Hence both sides are in \(\overline{\mathbb{Q}}\) by Conjecture 1. But by Corollary 1 (which holds under Conjecture 2), the number \(\mathfrak{f}_{a,1;0}(1)=\int_{0}^{\infty}(1+t)^{a-1}e^{-t}dt\) is not in \(\overline{\mathbb{Q}}\). Hence, \(\Gamma(a)\notin\mathbf{E}\). Application of Beukers' method and consequence In this section we prove Theorem 3 and Corollary 1 stated in the introduction. ### Proof of Theorem 3 The proof of Theorem 3 is based on the arguments given in [6], except that \(E\)-functions have to be replaced with \(\mathfrak{I}\)-functions, and \(1\)-summation in non-anti-Stokes directions is used for evaluations. Conjecture 2 is used as a substitute for Theorem A\((i)\). The main step is the following result, the proof of which is analogous to the end of the proof of [6, Corollary 2.2]. **Proposition 4**.: _Assume that Conjecture 2 holds._ _Let \(\mathfrak{f}\) be an \(\mathfrak{I}\)-function, \(\xi\in\overline{\mathbbm{Q}}^{*}\) and \(\theta\in(\arg(\xi)-\pi/2,\arg(\xi)+\pi/2)\). Assume that \(\theta\) is not anti-Stokes for \(\mathfrak{f}\), and that \(\mathfrak{f}_{\theta}(1/\xi)=0\). Denote by \(Ly=0\) a differential equation, of minimal order, satisfied by \(\mathfrak{f}(1/z)\)._ _Then all solutions of \(Ly=0\) are holomorphic and vanish at \(\xi\); the differential operator \(L\) has an apparent singularity at \(\xi\)._ To deduce Theorem 3 from Proposition 4, it is enough to follow [6, SS3]. ### Proof of Corollary 1 Let \(s\in\mathbb{Q}\setminus\mathbb{Z}_{\geq 0}\). The \(\mathfrak{I}\)-function \(\mathfrak{f}(z):=\sum_{n=0}^{\infty}s(s-1)\ldots(s-n+1)z^{n}\) is solution of the inhomogeneous differential equation \(z^{2}\mathfrak{f}^{\prime}(z)+(1-sz)\mathfrak{f}(z)-1=0\), which can be immediately transformed into a differential system satisfied by the vector of \(\mathfrak{I}\)-functions \({}^{t}(1,\mathfrak{f}(z))\). The coefficients of the matrix have only \(0\) as pole. Moreover, \(\mathfrak{f}(z)\) is a transcendental function because \(s\notin\mathbb{Z}_{\geq 0}\). Hence, by Theorem 3, \(\mathfrak{f}_{0}(1/\alpha)\notin\overline{\mathbb{Q}}\) when \(\alpha\in\overline{\mathbb{Q}}\), \(\alpha>0\), because \(0\) is not an anti-Stokes direction of \(\mathfrak{f}(z)\). It remains to observe that this \(1\)-sommation is \[\int_{0}^{\infty}(1+tz)^{s}e^{-t}dt.\]
2309.10633
Fundamental limitations of time measurement precision in Hong-Ou-Mandel interferometry
In quantum mechanics, the precision achieved in parameter estimation using a quantum state as a probe is determined by the measurement strategy employed. The ultimate quantum limit of precision is bounded by a value set by the state and its dynamics. Theoretical results have revealed that in interference measurements with two possible outcomes, this limit can be reached under ideal conditions of perfect visibility and zero losses. However, in practice, this cannot be achieved, so precision {\it never} reaches the quantum limit. But how do experimental setups approach precision limits under realistic circumstances? In this work we provide a general model for precision limits in two-photon Hong-Ou-Mandel interferometry for non-perfect visibility. We show that the scaling of precision with visibility depends on the effective area in time-frequency phase space occupied by the state used as a probe, and we find that an optimal scaling exists. We demonstrate our results experimentally for different states in a set-up where the visibility can be controlled and reaches up to $99.5\%$. In the optimal scenario, a ratio of $0.97$ is observed between the experimental precision and the quantum limit, establishing a new benchmark in the field.
Othmane Meskine, Eloi Descamps, Arne Keller, Aristide Lemaître, Florent Baboux, Sara Ducci, Pérola Milman
2023-09-19T14:15:22Z
http://arxiv.org/abs/2309.10633v1
# Fundamental limitations of time measurement precision in Hong-Ou-Mandel interferometry ###### Abstract In quantum mechanics, the precision achieved in parameter estimation using a quantum state as a probe is determined by the measurement strategy employed. The ultimate quantum limit of precision is bounded by a value set by the state and its dynamics. Theoretical results have revealed that in interference measurements with two possible outcomes, this limit can be reached under ideal conditions of perfect visibility and zero losses. However, in practice, this cannot be achieved, so precision _never_ reaches the quantum limit. But how do experimental setups approach precision limits under realistic circumstances? In this work we provide a general model for precision limits in two-photon Hong-Ou-Mandel interferometry for non-perfect visibility. We show that the scaling of precision with visibility depends on the effective area in time-frequency phase space occupied by the state used as a probe, and we find that an optimal scaling exists. We demonstrate our results experimentally for different states in a set-up where the visibility can be controlled and reaches up to \(99.5\%\). In the optimal scenario, a ratio of \(0.97\) is observed between the experimental precision and the quantum limit, establishing a new benchmark in the field. The Hong-Ou-Mandel (HOM) interferometer is currently used to demonstrate the phenomenon of bunching of two identical, independent bosonic quantum particles, such as single photons [1] (see Fig. 1). In this setup, photons are made to interfere on a balanced beam-splitter (BS) and their detection in coincidence at the output indicates whether they have bunched together or not. To control the distinguishability of the two paths of the interferometer, a time delay can be introduced for one of the input photons, consequently changing the coincidence detection probability. Despite its seemingly straightforward operating principles, the HOM interferometer has found diverse applications beyond its original scope [2]. For instance, the coincidence detection signal at the BS output has been demonstrated to serve as an entanglement witness [3; 4; 5], to provide phase space information about the spectral function [6; 7; 8], and to enable the simulation of different quantum exchange statistics [9; 10], among other applications [11; 12; 13]. In particular, the HOM interferometer is a valuable apparatus for quantum parameter estimation [14; 15; 16; 17; 18; 19]: its low-intensity regime opens the possibility of applying the tools of quantum metrology to small and fragile probes, as biological ones [21]; since the HOM effect is based on two-photon interference, it is robust against background noise and group velocity dispersion [22]. Last but not least, theoretical and experimental results indicate that it can arbitrarily approach the quantum ultimate precision limit for time delay (or path difference) estimation. However, in spite of the recent experiments reaching up to attosecond precision on time delay estimations [17; 18; 20], the exact mechanisms determining the limits and limitations of time measurement precision using the HOM are unknown. In particular, a curious result observed not only for HOM interferometers but for any parameter estimation dichotomic measurement (see also [23; 24], for instance) concerns the behavior of precision with the visibility \(V\) at the point where, in the ideal case, the former is expected to saturate the quantum limit. For instance in the HOM set-up, if \(V=1\), one can attain the ultimate precision limit at the point photons perfectly bunch or anti-bunch. Nevertheless, in the experimentally realistic case where the visibility \(V<1\), total bunching or anti-bunching is no longer observed and precision drops down to zero at this same point, suffering a discontinuity. A consequence of this is that for finite visibility, the ultimate quantum precision limit can only be approached, and a way to circumvent this in the context of a Mach-Zender interferometer was studied in [25] using a mode engineering-based strategy. Nevertheless, it is not clear, from a fundamental point of view, what regulates the achievable precision in a HOM experiment and why some spectral functions seem to present a better performance than others [15; 17; 18; 19]. In the present Letter, we provide a theoretical model together with its experimental demonstration explaining both qualitatively and quantitatively the performance of the HOM experiment as a quantum metrological tool. Our results explain why certain wavefunctions exhibit higher precision performance than others in the regime of finite visibility and we have exactly theoretically predicted and experimentally confirmed the wavefunction-dependency of the scaling of precision with the visibility for different wavefunctions. For some configurations, we reach the highest ratio between the achieved precision and the maximum possible one to date, thereby setting a new benchmark in this field. In a typical metrological protocol, a probe is prepared in an initial state, and undergoes a dynamical evolution depending on a parameter to be estimated, \(\theta\). The probe is measured providing an outcome \(k\), which is used to estimate \(\theta\). By associating the function \(p_{k}(\theta)\) to the probability of obtaining an outcome \(k\), the precision on the estimation of \(\theta\) is bounded by the relation \(\delta\theta\geq 1/\sqrt{\nu F(\theta)}\), where \(F(\theta)=\sum_{k}\frac{1}{p_{k}(\theta)}\left(\frac{\partial p_{k}(\theta)}{ \partial\theta}\right)^{2}\) is the Fisher information (FI) and \(\nu\) is the number of repetitions of the experiment. When using quantum mechanical resources - as individual photons, which is the case in a HOM experiment - one can define the quantum Fisher information (QFI) by using a quantum state as a probe [26]. The probe's evolution depends on the parameter \(\theta\), and for pure states and unitary evolutions, this dependency can be expressed as \(\left|\psi(\theta)\right\rangle=e^{i\hat{H}\theta}\left|\psi\right\rangle\), where \(\hat{H}\) is the Hamiltonian generating the dynamical evolution. Precision is thus limited by the relation \(\delta\theta\geq 1/\sqrt{\nu\mathcal{F}}\), where \(\mathcal{F}\) is the QFI, obtained by maximizing \(F(\theta)\) over all possible measurements on \(\left|\psi(\theta)\right\rangle\). In the case discussed above, \(\mathcal{F}=4\Delta^{2}\hat{H}\), where the variance is taken with respect to the initial state \(\left|\psi\right\rangle\)[27]. The bound \(1/\sqrt{\nu\mathcal{F}}\) is called the _quantum Cramer-Rao bound_ (QCR) [28]. The mathematical procedure to determine \(\mathcal{F}\) is known, but the problem of finding an experimental measurement strategy where \(F(\theta)=\mathcal{F}\) remains. There is no general rule for this, even though optimization procedures can be applied to particular states [29; 30] and symmetry arguments can be evoked in specific situations [31; 32], as in the HOM experiment [14; 16] that we'll discuss in details. As can be found in the literature, time precision in this type of interferometer with perfect visibility can reach the QCR bound. For this reason, several attempts have been made to reach this bound, obtaining astonishing precision on the estimation of time delays in HOM experiments [17; 18; 19; 20]. To understand these results, we consider as initial state (probe) a photon pair prepared in an arbitrary pure state, that enters the two input arms \(1,2\) of a perfectly balanced BS: \[\left|\psi\right\rangle=\int\int d\omega_{1}\omega_{2}f(\omega_{1},\omega_{2} )\hat{a}_{1}^{\dagger}(\omega_{1})\hat{a}_{2}^{\dagger}(\omega_{2})\left|0 \right\rangle, \tag{1}\] where \(f(\omega_{1},\omega_{2})\) is the complex-valued normalized joint spectral amplitude (JSA). Before impinging the balanced BS, one of the photons of the state (1) can be subjected to a time delay \(\tau\), the parameter to be estimated. This delay is described by a unitary evolution, and the associated Hamiltonian is \(\hat{H}=\hbar\int d\omega\omega\hat{a}_{1}^{\dagger}(\omega)\hat{a}_{1}(\omega )=\hbar\hat{\omega}_{1}\) (we have supposed, without loss of generality, that the arm labeled 1 is delayed), and state (1) is transformed as \(\left|\psi(\tau)\right\rangle=e^{i\hat{H}\tau/\hbar}\left|\psi\right\rangle= \hat{U}\left|\psi\right\rangle\). If we now consider the action of the BS and compute the probability of detecting both photons in coincidence \(P_{c}(\tau)\), we obtain [14]\(P_{c}(\tau)=\frac{1}{2}(1-\left\langle\psi\right|\hat{U}^{\dagger}\hat{S}U \left|\psi\right\rangle)\), where \(\hat{S}\hat{a}_{1}^{\dagger}(\omega_{1})\hat{a}_{2}^{\dagger}(\omega_{2}) \hat{S}^{\dagger}=\hat{a}_{1}^{\dagger}(\omega_{2})\hat{a}_{2}^{\dagger}(\omega _{1})\) corresponds to the swap of spatial modes. \(P_{c}(\tau)\) is typically directly obtained from the recorded experimental data. As for \(P_{a}(\tau)\), probability of anti-coincidences, it can also be directly detected [20] or inferred from the relation \(P_{a}(\tau)=1-P_{c}(\tau)\)[33]. Finally, the QFI can be expressed as [34]\(\mathcal{F}=4\Delta^{2}\hat{\omega}_{1}\). Notice that for perfectly symmetric (S) (anti-symmetric (AS)) states \(\left|\psi\right\rangle_{S(AS)}\) with respect to the exchange of spatial modes, we have that \(\hat{S}\left|\psi\right\rangle_{S(AS)}=\pm\left|\psi\right\rangle_{S(AS)}\) and \(P_{c}(\tau=0)=0(1)\). These states correspond to the situations where the HOM has perfect visibility and the ultimate quantum precision limit can be reached. The HOM experiment provides information about the collective variables \(\omega_{-}=\omega_{1}-\omega_{2}\)[16]. We'll now suppose that the input photons of the interferometer are generated by Spontaneous Parametric Down Conversion (SPDC) and that \(f(\omega_{1},\omega_{2})=f_{-}(\omega_{-})f_{+}(\omega_{+})\) in (1), with \(\omega_{\pm}=\omega_{1}\pm\omega_{2}\). Functions \(f_{-}\) and \(f_{+}\) are normalized functions related to the phase matching condition and to the energy conservation, respectively. Notice that this separability hypothesis has no impact on our results but simplifies their presentation. If one is using the HOM with metrological purposes [14; 15; 16], the best configuration is the one where \(f_{+}\) is a Dirac function centered on \(\omega_{p}\) (the pump's frequency) and \(\omega_{+}\) is close to constant (strict energy conservation), a situation that maximizes frequency correlation between photons [35]. In this case \(\mathcal{F}=4\Delta^{2}\omega_{1}=\Delta^{2}\omega_{-}\), where we have supposed for simplicity that both photons have the same spectral variance. Still using the proposed factorization in collective variables \(\omega_{\pm}\), in the case of perfect visibility (so \(f_{-}(\omega_{-})\) is, for instance, a perfectly even function of \(\omega_{-}\)[36]), we have that \(\left\langle\psi\right|\hat{U}^{\dagger}\hat{S}U\left|\psi\right\rangle= \int d\omega_{-}e^{i\omega_{-}\tau}f_{-}(\omega_{-})f_{-}^{*}(-\omega_{-})=W(0,\tau)\). Here, \(W(\mu,\tau)\) denotes the chronocyclic Wigner function associated with \(f_{-}(\omega_{-})\) on the time-frequency phase space (TFPS), specifically on the axis \(\mu=0\) while \(\tau\), the time delay, is variable. In this context, \(\mu\) represents the phase space variable associated with \(\omega_{-}\). It has Figure 1: Experimental setup for investigating the metrological performance of the Hong Ou Mandel (HOM) experiment, showing the generation, joint spectral amplitude (JSA) engineering and HOM interferometer stages been demonstrated in [6] and experimentally validated in [7; 8] that the HOM experiment directly measures the Wigner function points \(W(0,\tau)\), _i.e._, along the axis \(\mu=0\) of TFPS. Adopting this representation facilitates an intuitive understanding of the factors that dictate the limitations imposed by non-perfect visibility, which is the issue studied in the following. To this aim, we model the dependency of \(P_{c}(\tau)\) with the visibility \(V\) as follows [34]: \[P_{c}(\tau)=\frac{1}{2}-\frac{V}{2}W(0,\tau), \tag{2}\] where \(0\leq V\leq 1\) and \(W(0,\tau)\) is the Wigner function of the ideal state, _i.e._ a perfectly symmetric one, that leads to a perfect visibility. Since \(W(0,0)=1\), \(P_{c}(0)=(1-V)/2\) (this is simply the normalization condition). In the present model, we are not considering explicitly the role of experimental noise or losses in the measurement, evolution or preparation steps: these effects can either be included in the QFI - which is consequently modified - or in the state's purity [37; 38]. Including noise in the state corresponds to considering a different (non-pure) state as a probe, so a different function \(W(0,\tau)\). Thus, state noise or measurement losses have no incidence on the model (2), that remains valid. However, even in the case where a state is pure and is measured in a lossless configuration it can lead to a non-perfect visibility due to state preparation imperfections that cannot be circumvented [34] and this is why we focus on the visibility in the present work. Using (2), the FI at point \(\tau\) for a general value of \(V\) is given by \[F(V,\tau)=V^{2}\frac{(W^{\prime}(0,\tau))^{2}}{(1-V^{2}W^{2}(0,\tau))}, \tag{3}\] where the superscript \({}^{\prime}\) denotes the time derivative. By defining \(\max_{\tau}F(V,\tau)\stackrel{{\mbox{\tiny def}}}{{=}}\widetilde{F }_{V}\), we obtain, for \(V=1\) (perfectly S or AS states), \(\widetilde{F}_{1}=F(1,0)=\mathcal{F}\)[14; 16; 32]. This shows that the quantum precision limit can be achieved for perfect visibility. For \(V\neq 1\) we can still compute \(\widetilde{F}_{V}\), which is obtained at a point \(\tau=\tau_{M}\neq 0\). As a matter of fact, for \(\tau=0\), the function \(F(V,0)\), when considered as a function of \(V\), exhibits a discontinuity at \(V=1\): indeed, \(F(V<1,0)=0\), while \(F(1,0)=\widetilde{F}_{1}=\mathcal{F}\), as previously shown. For \(V<1\), the maximal values of \(\widetilde{F}_{V<1}\) satisfy the condition \(\widetilde{F}_{V<1}=-W^{\prime\prime}(0,\tau_{M})/W(0,\tau_{M})\). Experimental investigations of these results were conducted in [17; 19; 20], and [23], but a general theory explaining the overall behavior of the attainable values of \(\widetilde{F}_{V}\) and their limitations remains unknown. We'll now elucidate how \(\widetilde{F}_{V}\) approaches \(\mathcal{F}\). Importantly, we find that this approach depends not only on the visibility but we also identify a relation with the effective phase space occupation of the state's Wigner function \(W(\mu,\tau)\)[39]. In other words, the scaling of \(\widetilde{F}_{V}\) with \(V\) is connected to how far the quantum state is from saturating the time-frequency Heisenberg uncertainty principle [40; 41]. A first remark is that since \(V\leq 1\), using (3), we have that \(F(V,\tau)\leq V^{2}F(1,\tau)\), so \(\widetilde{F}_{V}=V^{2}\mathcal{F}\) is the best possible scaling of the FI with \(V\). This is a proof that for \(V<1\) the HOM can never reach the QCR bound, even in the absence of losses. In addition, using (3) we can see that the best possible scaling is obtained when \(W(0,\tau_{M})=0\) (_i.e._, \(P_{c}(\tau_{M})=1/2\)) and \(W^{\prime}(0,\tau_{M})\neq 0\). A sinusoidal function of frequency \(\sqrt{\mathcal{F}}\) satisfies these conditions (a solution also leading to a constant FI in \(\tau\) for \(V=1\)[34; 42]). Although this corresponds to an unphysical state, it represents the limit situation of Schrodinger cat (SC)-like states [18; 20; 43] with \(\Delta^{2}\hat{\omega}_{-}\Delta^{2}\hat{t}\gg 1\), so occupying a large effective area in the time-frequency phase space (TFPS) [34]. Surprisingly, SC-like states exhibit remarkable robustness in the presence of decreased visibility, making them the most resilient states in HOM-based quantum metrology, which is yet another interesting quality of these states in quantum metrology [44]. To identify the states leading to the worst possible scaling of \(\widetilde{F}_{V}\) with the visibility, we should identify the states that minimize \(\widetilde{F}_{V<1}\), and a trivial solution similar to the one discussed in [45; 46] is a constant function with \(W(0,\tau)=1\), so that \(\widetilde{F}_{1}=\mathcal{F}\). This is of course an unphysical solution, and the physical states which are the closest to this solution are Gaussian states, which saturate the Heisenberg's relation \(\Delta^{2}\hat{\omega}_{-}\Delta^{2}\hat{t}=1\). In conclusion, Gaussian states have the worst rate of distinguishability [46] and exhibit the worst scaling with \(V\), even though they can have the same limit value for the FI as SC states for \(V=1\) and \(\tau=0\), _i.e._, the QFI. In addition, for Gaussian states, the scaling with \(V\) does not depend on the exact values of \(\Delta\hat{\omega}_{-}\) or \(\Delta\hat{t}\), but on the associated function (Gaussian), which is univocally determined by the product of the two quantities. As states' phase space occupation change from the Gaussian to the sinusoidal behavior, their scaling with visibility improves. A remarkable consequence of the above discussion is that while the wavefunction shape _do not_ play a role in the value of \(\mathcal{F}\)[34; 47] it does play a role in the scaling of \(\widetilde{F}_{V}\) with \(V\), in a way that is related to the effective occupation of the TFPS. Previous works [39] established the connection between metrological properties and the state's occupation of the quadrature phase space [48; 49], where small structures determine quantum properties such as the QFI. Our analysis indicate that the small structures and phase space occupation contribute to the optimization of the scaling of precision with visibility, which is a distinct physical property. We emphasize that our analysis is applicable to various experimental setups where the visibility model (2) holds [23; 24], such as experiments with more than two photons [50], where the scaling with \(V\) is important to determine the tolerance of the sub-shot noise region to visibility decrease (see [23; 34]). We now validate our model using an experiment allowing to engineer two-photon states described by different functions \(f_{-}(\omega_{-})\) exhibiting diverse scaling behaviors with the visibility \(V\). The quantum source consists of an AlGaAs Bragg reflector waveguide, generating polarization-entangled photon pairs via type II Spontaneous Parametric Down-Conversion (SPDC) at telecom wavelengths and operating at room temperatures [51]. A sketch of the experimental setup is provided in Fig. 1. A continuous-wave laser having a wavelength \(\lambda_{pump}=772.42\,\mathrm{nm}\) is coupled into the waveguide using a microscope objective (MO). The output signal is collected by a second microscope objective, and the pump beam is filtered out using a long-pass filter (LPF). The generated photon pairs are then collected in a single mode fiber and possibly directed to a programmable filter (PF, Finisar 4000s), enabling the JSA engineering. When the filter is not inserted, the state generated by the source is described by \(f_{-}(\omega_{-})=\mathrm{sinc}(a\omega_{-}^{2}+k\omega_{-}+c)\) where the coefficients \(a\), \(b\) and \(c\) are related to optical properties of the material such as birefringence and chromatic dispersion [34; 52]. In addition to the study of this case, three different filter shapes are used: a \(15\,\mathrm{nm}\)-wide rectangular filter centered on the degeneracy wavelength \(\lambda_{deg}=1544.8\,\mathrm{nm}\), a Gaussian filter of identical width centered at the same wavelength, and a combination of two \(5\,\mathrm{nm}\)-wide rectangular filters centered at \(\lambda_{1}=1560\,\mathrm{nm}\) and \(\lambda_{2}=1530\,\mathrm{nm}\) corresponding to energy-matched channels and allowing to create a SC-like state, analogously to as described in [53; 54]. The functions can be classified according to the parameter \(\mathcal{S}=\Delta^{2}\hat{\omega}_{-}\Delta^{2}\hat{t}\), which determines the scaling with respect to \(V\) (see [34] for details). At the output of the filtering process, the photon pairs are separated by a polarizing beam splitter (PBS). The \(H\) (\(V\))-polarized photon enters the HOM interferometer through the arm 1 (2). Precise control over the polarization distinguishability, and thus the HOM visibility, is enabled by two fibered polarization controllers, one in each arm (FPC1 and FPC2). The temporal delay between the two photons is controlled by a motorized optical delay line (MDL). The two paths are recombined and separated by a 50/50 BS, then directed to superconducting nanowires single photon detectors (SNSPD). Temporal correlations between the detected photons are analyzed by a time-to-digital converter (TDC). We perform a series of measurements on the four states, systematically varying \(V\) to investigate the scaling of the ratio \(\widetilde{F}_{V}/\mathcal{F}\). Fig. 2 illustrates the results obtained. The coincidence counts data (red points) are fitted (red lines) using \(P_{c}(\tau)\) of Eq. (2) and the theoretical expression of each wavefunction [34]. The FI \(F(V,\tau)\) (blue lines) is then computed using Eq. (3). Firstly, we notice that a reduction in visibility directly leads to a decrease in \(F(V,\tau)\). As expected, with finite visibility, the value of \(F(V,\tau)\) drops to zero at \(\tau=0\). Remarkably high visibilities exceeding \(99\%\) are achieved with the Gaussian, rectangular and SC-like states. Due to a small modal birefringence of the AlGaAs source, the maximum visibility attainable with the full state is \(94,9\%\), still an excellent value given the broad spectral width it covers, approximately \(100\,\mathrm{nm}\). This broad spectrum results in a narrow HOM curve \(P_{c}(t)\), leading to a high FI value of \(2100\,\mathrm{ps}^{-2}\), which is two orders of magnitude higher than those obtained with the Gaussian and rectangular states. While filtering the quantum state does increase the FI, it also decreases the number of detected photons, therefore influencing the overall performance of the metrological protocol in a given integration time. Nevertheless, in this proof-of-principle experiment, we are mainly interested in testing the scaling of the ratio \(\widetilde{F}_{V}/\mathcal{F}\) to demonstrate our model, which can serve as a guideline to other experiments using different strategies. In Fig. 3, the evolution of this ratio with respect to visibility for the four engineered states is reported, both for experiments (points) and theory (lines). We clearly observe that the SC-like state exhibits the most favorable scaling behavior, in contrast to the Gaussian state, which displays a less optimal one. For instance, at a visibility level of around \(83\%\), the ratio drops from \(0.64\) for the SC-like state to \(0.35\) for Figure 2: Left column: Joint Spectral Amplitude of the four different states analyzed in this work. Central and right columns: corresponding Hong-Ou-Mandel coincidence probability \(P_{c}(t)\) and FI for different values of visibility \(V\). the Gaussian state. In conclusion, we have presented a comprehensive theoretical model and experimentally confirmed how precision limits scale with both visibility and the state's wavefunction in practical metrological protocols employing the HOM effect. The very good agreement between the experimental results and the simulations supports the validity of the model given by Eq. (2). Our findings show that reaching the precision limits in realistic conditions presents challenges that depend on the particular state under consideration. Moreover, our theoretical and experimental analysis establishes a general framework for interpreting previous experimental results [17, 18, 19, 20]. Our work holds significant implications, particularly in aiding the identification of optimal conditions to advance HOM-based quantum metrology protocols, leading to enhanced precision in measurements while minimizing the number of repetitions. Finally, some aspects of the presented results can be readily extended to other experiments where parity measurements are employed for quantum parameter estimation [6, 55, 56, 57, 58]. ## Acknowledgements We acknowledge funding from the Plan France 2030 through the project ANR-22-PETQ-0006, N. Fabre and G. Bie Alves for fruitful discussions and M. Karr-Ducci for Fig. 1. O.M. acknowledges Labex SEAM (Science and Engineering for Advanced Materials and devices), ANR-10-LABX-0096 and ANR-18-IDEX-0001 for financial support.
2309.11539
Carrollian c-functions and flat space holographic RG flows in BMS3/CCFT2
We discuss c-functions and their holographic counterpart for two-dimensional field theories with Carrollian conformal fixed points in the UV and the IR. Specifically, we construct asymptotically flat domain wall solutions of three-dimensional Einstein-dilaton gravity that model holographic RG flows between BMS3 invariant UV and IR fixed points. We prove three theorems for such flows: 1. for every holographic RG flow in AdS3, there is a corresponding one in flat space, 2. the BMS central charge in the UV cannot be smaller than in the IR, and 3. the UV/IR ratio of Virasoro central charges is identical to the UV/IR ratio of corresponding BMS central charges. Finally, we tentatively propose a Casini-Huerta-like c-functions for BMS3-invariant quantum field theories, inspired by the AdS3/CFT2 relation between monotonicity of the c-function and the quantum null energy condition.
Daniel Grumiller, Max Riegler
2023-09-20T18:00:00Z
http://arxiv.org/abs/2309.11539v2
# Carrollian \(\mathbf{c}\)-functions and flat space holographic RG flows in BMS\({}_{3}\)/CCFT\({}_{2}\) ###### Abstract We discuss \(c\)-functions and their holographic counterpart for two-dimensional field theories with Carrollian conformal fixed points in the UV and the IR. Specifically, we construct asymptotically flat domain wall solutions of three-dimensional Einstein-dilaton gravity that model holographic RG flows between BMS\({}_{3}\) invariant UV and IR fixed points. We prove three theorems for such flows: 1. for every holographic RG flow in AdS\({}_{3}\), there is a corresponding one in flat space, 2. the BMS central charge in the UV cannot be smaller than in the IR, and 3. the UV/IR ratio of Virasoro central charges is identical to the UV/IR ratio of corresponding BMS central charges. Finally, we tentatively propose a Casini-Huerta-like \(c\)-functions for BMS\({}_{3}\)-invariant quantum field theories, inspired by the AdS\({}_{3}\)/CFT\({}_{2}\) relation between monotonicity of the \(c\)-function and the quantum null energy condition. ###### Contents * I Introduction * II AdS\({}_{3}\)/CFT\({}_{2}\) review * II.1 Casini-Huerta \(c\)-function * II.2 Relation to Quantum Null Energy Condition * II.3 Domain walls in AdS\({}_{3}\) * III BMS\({}_{3}\)/CCFT\({}_{2}\) summary * III.1 Gravity aspects of BMS\({}_{3}\)/CCFT\({}_{2}\) * III.2 Field theory aspects * III.3 Quantum energy conditions in CCFT\({}_{2}\) * IV Domain walls in flat space * IV.1 Geometric aspects of flat space domain walls * IV.2 Flat space domain walls in Einstein-dilaton gravity * IV.3 Flat space holographic RG flow example * V Flat space holographic RG flow theorems * V.1 Definitions * V.2 Correspondence theorem * V.3 Monotonicity theorem and central charge ratio equivalence * VI Tentative proposal for Casini-Huerta-inspired Carrollian \(\mathbf{c}\)-function ## I Introduction Quantum field theories (QFT) adequately describe many systems in nature. A prototypical scenario is a relativistic QFT with conformal fixed points in the ultraviolet (UV) and the infrared (IR). The renormalization group (RG) flow connecting these fixed points can be characterized by \(c\)-functions that obey a \(c\)-theorem. The latter guarantees the existence of some (positive) function, \(c(g_{i},\,\mu)\), depending on the coupling constants \(g_{i}\) and the RG scale \(\mu\) with two key properties: 1. it decreases monotonically under RG flow towards the IR, and 2. at the UV and IR fixed points the \(c\)-function is a (finite) constant. This mathematical statement captures the physical intuition that QFTs have more degrees of freedom in the UV than in the IR. In two-dimensional (2d) QFTs, Zamolodchikov proved a \(c\)-theorem by explicitly constructing a \(c\)-function composed of the energy-momentum tensor components [1]. At the UV- and IR-fixed points, the value of this \(c\)-function coincides with the respective values of the central charge that characterize the corresponding fixed point conformal field theories (CFT), \(c^{\text{\tiny{UV}}}\) and \(c^{\text{\tiny{IR}}}\). The \(c\)-theorem implies \(c^{\text{\tiny{UV}}}\geq c^{\text{\tiny{IR}}}\). The \(c\)-function monotonically interpolates between these fixed point values. After the advent of AdS/CFT [2], it was natural to seek a holographic version of RG flows and construct holographic versions of \(c\)-functions. Domain wall solutions in AdS (reviewed below) provide a simple geometric \(c\)-function [3]. Alternatively, Casini and Huerta (CH) proposed a \(c\)-function [4] based on entanglement entropy (EE) and one of its indispensable properties, strong subadditivity. The CH \(c\)-function is tailor-made for AdS/CFT since EE is generally hard to compute in QFTs but simple to compute on the gravity side in terms of minimal [5] or extremal [6] surfaces. The main purpose of our work is to (holographically) construct \(c\)-functions for 2d field theories with conformal Carrollian fixed points in the UV and IR. The primary motivation for pursuing this goal is a desire for a better understanding of flat space holography and Carrollian CFTs. The main tools we shall employ are flat space domain walls (which we construct and discuss in detail) and flat space holographic EE [7; 8]. This paper is organized as follows. In section II, we review AdS\({}_{3}\)/CFT\({}_{2}\)-aspects pertinent to (holographic) \(c\)-functions. In section III, we summarize BMS\({}_{3}\)/CCFT\({}_{2}\) results required for our constructions. In section IV, we construct domain walls in flat space, discuss their geometric properties, and propose a flat space holographic \(c\)-function. In section V, we prove the three theorems mentioned in the abstract. Section VI concludes with a tentative CH-inspired proposal for a Carrollian \(c\)-function. ## II AdS\({}_{3}\)/CFT\({}_{2}\) Review This section reviews AdS\({}_{3}\)/CFT\({}_{2}\)-aspects pertinent to (holographic) \(c\)-functions. In section II.1, we provide the definition and main properties of the CH \(c\)-function. In section II.2, we recall the relation to the 2d quantum null energy condition (QNEC\({}_{2}\)). Section II.3 summarizes a specific class of domain wall solutions in AdS\({}_{3}\) as an example for a holographic model with non-trivial CH \(c\)-functions. ### Casini-Huerta \(c\)-function The CH \(c\)-function [4; 9] \[c(\ell)=3\ell\,\frac{\mathrm{d}S_{0}}{\mathrm{d}\ell} \tag{1}\] is constructed from ground state EE \(S_{0}\) and depends on the size \(\ell\) of the entangling region. By construction, it is monotonic \[\frac{\mathrm{d}c(\ell)}{\mathrm{d}\ell}\leq 0 \tag{2}\] as a consequence of strong subadditivity. When the inequality (2) is saturated, integrating it twice using (1) yields the result for ground state EE in a CFT\({}_{2}\) on the plane [10; 11], \[S_{0}=\frac{c}{3}\,\ln\frac{\ell}{\varepsilon} \tag{3}\] where \(c\) is the UV fixed-point value of \(c(\ell)\) and the integration constant \(\varepsilon\) is interpreted as UV cutoff. The limit of vanishing entangling region yields the UV-value of the central charge, \[c^{\text{\tiny UV}}=\lim_{\ell\to 0}c(\ell)\,. \tag{4}\] Similarly, in cases where the theory flows to a CFT\({}_{2}\) fixed point in the IR its central charge is obtained in the limit of infinite entangling region. \[c^{\text{\tiny IR}}=\lim_{\ell\to\infty}c(\ell)\leq c^{\text{\tiny UV}} \tag{5}\] ### Relation to Quantum Null Energy Condition Inserting the definition (1) into the monotonicity condition (2) yields an inequality for up to second derivatives of EE. \[0\geq\frac{\mathrm{d}^{2}S_{0}}{\mathrm{d}\ell^{2}}-\frac{1}{\ell}\,\frac{ \mathrm{d}S_{0}}{\mathrm{d}\ell}+\frac{6}{c(\ell)}\!\left(\frac{\mathrm{d}S_{ 0}}{\mathrm{d}\ell}\right)^{2}\geq\frac{\mathrm{d}^{2}S_{0}}{\mathrm{d}\ell^{2 }}-\frac{1}{\ell}\,\frac{\mathrm{d}S_{0}}{\mathrm{d}\ell}+\frac{6}{c^{\text{ \tiny UV}}}\!\left(\frac{\mathrm{d}S_{0}}{\mathrm{d}\ell}\right)^{2} \tag{6}\] The last expression has an interpretation in terms of variations of EE with respect to null deformations of the interval and can be rewritten as \[0\geq\frac{\mathrm{d}^{2}S_{0}}{\mathrm{d}\lambda^{2}}+\frac{6}{c^{\text{ \tiny UV}}}\left(\frac{\mathrm{d}S_{0}}{\mathrm{d}\lambda}\right)^{2} \tag{7}\] where \(\lambda\) is the deformation parameter (see section 2.5 in [12]). The combinations of derivatives (7) is the right-hand side of QNEC\({}_{2}\)[13; 14; 15; 16] \[2\pi\left\langle T\right\rangle\geq\frac{\mathrm{d}^{2}S}{\mathrm{d}\lambda^{2 }}+\frac{6}{c^{\text{\tiny UV}}}\left(\frac{\mathrm{d}S}{\mathrm{d}\lambda} \right)^{2} \tag{8}\] for the ground state EE \(S_{0}\), while the left-hand side of QNEC\({}_{2}\) contains the expectation value of the null projection of the stress-energy tensor, denoted here by \(T\). Since the Poincare-invariant ground state has \(\left\langle T\right\rangle=0\), the CH inequality (2) implies QNEC\({}_{2}\) for the ground state. This relation between QNEC\({}_{2}\) and the CH \(c\)-function can guide our proposal for \(c\)-functions in non-Lorentzian QFTs, provided that quantum energy inequalities are available. In the context of flat space holography, this is indeed the case [17]. ### Domain walls in AdS\({}_{3}\) Domain walls are a specific set of geometries describing a holographic RG flow of a QFT\({}_{2}\) from a UV to an IR CFT\({}_{2}\)-fixed point. The geometry dual to such a flow has Poincare invariant slices. In adapted coordinates \[\mathrm{d}s^{2}=\mathrm{d}\rho^{2}+e^{2\mathrm{d}\rho}\left(-\mathrm{d}t^{2}+ \mathrm{d}x^{2}\right) \tag{9}\] the function \(A(\rho)\) characterizes the RG-flow. For any \(\rho=\mathrm{const.}\) we have Poincare\({}_{2}\)-invariant slices, i.e., the metric (9) has the Killing vectors \(\partial_{t}\), \(\partial_{x}\) and \(x\partial_{t}+t\partial_{x}\). Each \(\rho=\mathrm{const.}\)-slice thus induces a 2d flat-space metric \(\mathrm{d}s^{2}_{(2)}=e^{2\mathrm{d}\rho(\infty)}\left(-\mathrm{d}t^{2}+ \mathrm{d}x^{2}\right)\). There are infinitely many conformal Killing vectors for each such slice, corresponding to the conformal symmetries generated in a CFT\({}_{2}\). By convention, the asymptotic region describing the UV is reached in the limit \(\rho\to\infty\). Domain wall solutions arise, for instance, as solutions to Einstein-dilaton gravity. The bulk action \[I=\frac{1}{16\pi G_{N}}\int\mathrm{d}^{3}x\sqrt{-g}\left(R-\frac{1}{2}( \partial\phi)^{2}-V(\phi)\right) \tag{10}\] with the potential (we use unit AdS-radius) \[V(\phi)=-2+\frac{1}{2}\,m^{2}\phi^{2}+\dots \tag{11}\] yields the equations of motion \[R_{\mu\nu}-\frac{1}{2}\,g_{\mu\nu}\,R =\frac{1}{2}\,\partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{4}\,( \partial\phi)^{2}g_{\mu\nu}-\frac{1}{2}\,V(\phi)\,g_{\mu\nu} \tag{12a}\] \[\nabla^{2}\Phi =\frac{\partial V(\phi)}{\partial\phi} \tag{12b}\] Rewriting the potential in terms of a superpotential \(W\) \[V(\phi)=-\frac{1}{2}\,W(\phi)^{2}+\frac{1}{2}\,W^{\prime}(\phi)^{2} \tag{13}\] reduces the equations of motion for domain wall solutions (9) to first order equations \[\frac{\mathrm{d}A(\rho)}{\mathrm{d}\rho}=-\frac{1}{2}\,W(\phi(\rho))\qquad \qquad\frac{\mathrm{d}\phi(\rho)}{\mathrm{d}\rho}=\frac{\mathrm{d}W(\phi(\rho ))}{\mathrm{d}\phi(\rho)}\,. \tag{14}\] An example is the superpotential \[W(\phi)=-2-\frac{1}{4}\,\phi^{2}-\frac{\alpha}{8}\,\phi^{4} \tag{15}\] corresponding to a mass \(m^{2}=-\frac{3}{4}\). Integrating the equations (14) for this superpotential yields the domain wall solution \[A(\rho)=\left(1-\frac{1}{16\alpha}\right)\rho-\frac{j^{2}}{16(e^{\rho}-\alpha j ^{2})}+\frac{\log\left(e^{\rho}-\alpha j^{2}\right)}{16\alpha} \tag{16}\] and the scalar field \[\phi(\rho)=\phi_{0}+\frac{je^{-\rho/2}}{\sqrt{1-\alpha j^{2}e^{-\rho}}} \tag{17}\] where \(j\) and \(\phi_{0}\) are integration constants. For this example, the CH \(c\)-function was calculated for small \(\ell\)[12] \[c(\ell\ll 1)=c\left(1-\frac{\pi\ell}{64}+\mathcal{O}(\ell^{2})\right) \tag{18}\] and large \(\ell\) \[c(\ell\gg 1)=\frac{c}{1-\frac{1}{16\alpha}}+\ldots \tag{19}\] assuming negative \(\alpha\). Here \(c=c^{\text{\tiny UV}}=3/(2G_{N})\) takes the Brown-Henneaux value, and we have the relation \[c^{\text{\tiny in}}=\frac{c^{\text{\tiny UV}}}{1-\frac{1}{16\alpha}}<c^{\text {\tiny UV}}\,. \tag{20}\] Alternatively, there is a domain wall holographic \(c\)-function [3] \[c_{\text{\tiny in}}(\rho)=\frac{c^{\text{\tiny UV}}}{A^{\prime}(\rho)} \tag{21}\] that does not require calculating EE but only uses the (derivative of the) domain wall profile function \(A(\rho)\) as input. Since \(\lim_{\rho\to\infty}A^{\prime}(\rho)=1\) and \(\lim_{\rho\to-\infty}A^{\prime}(\rho)=1-\frac{1}{16\alpha}\) we recover the correct UV- and IR-values of the central charge. Moreover, \(A^{\prime}(\rho)\) has the correct monotonicity for a \(c\)-function. In our flat space construction below, we shall propose something analogous to the domain wall \(c\)-function (21). ## III BMS\({}_{3}\)/CCFT\({}_{2}\) Summary In this section, we summarize BMS\({}_{3}\)/CCFT\({}_{2}\) results required for our constructions of flat space holographic \(c\)-functions in later sections. In section III.1, we summarize gravity-aspects of BMS\({}_{3}\)-invariant QFTs, also known as CCFT\({}_{2}\). We collect corresponding field theory aspects in section III.2. In section III.3, we state the quantum inequalities based on EE that apply to these theories. ### Gravity aspects of BMS\({}_{3}\)/CCFT\({}_{2}\) We are interested in 2d QFTs invariant under CCFT\({}_{2}\) symmetries generated by the bmt\({}_{3}\) algebra \[[\mathrm{L}_{n},\mathrm{L}_{m}] =(n-m)\,\mathrm{L}_{n+m}+\frac{c_{\mathrm{L}}}{12}\,n(n^{2}-1)\, \delta_{n+m,\,0} \tag{22a}\] \[=(n-m)\,\mathrm{M}_{n+m}+\frac{c_{\mathrm{M}}}{12}\,n(n^{2}-1)\, \delta_{n+m,\,0}\] (22b) \[=0\,. \tag{22c}\] The generators \(\mathrm{L}_{n}\) yield a Virasoro subalgebra with central charge \(c_{\mathrm{L}}\). They are sometimes referred to as "superrotations" in a gravity context. The supertranslation generators \(\mathbb{M}_{n}\) produce a central charge \(c_{\mathrm{M}}\) in the mixed commutator. The simplest gravity dual leading to (22) as asymptotic symmetries is Einstein gravity with Barnich-Compere boundary conditions [18]. \[\mathrm{d}s^{2}=\mathcal{M}(\varphi)\,\mathrm{d}u^{2}-2\,\mathrm{d }u\,\mathrm{d}r+r^{2}\,\mathrm{d}\varphi^{2}\\ +(2\,\mathcal{L}(\varphi)+u\mathcal{M}(\varphi))\,\mathrm{d}u\, \mathrm{d}\varphi+\ldots \tag{23}\] The coordinate ranges are \(u,r\in\mathbb{R}\) and either \(\varphi\sim\varphi+2\pi\) or \(\varphi\in\mathbb{R}\). The ellipsis denotes subleading terms in a large-\(r\) expansion. The state-dependent functions \(\mathcal{L},\mathcal{M}\) appear as integrands in the boundary charges. Their (Fourier-) modes generate the bmt\({}_{3}\) algebra (22) as asymptotic symmetry algebra, with central charges \(c_{\mathrm{L}}\), \(c_{\mathrm{M}}\) the values of which depend on the gravity theory, see e.g. [19]. For Einstein gravity without cosmological constant, the Virasoro central charge \(c_{\mathrm{L}}\) vanishes since there is no dimensionless coupling constant, while the mixed central charge \(c_{\mathrm{M}}\) is non-zero [18]. We use a (standard) normalization of the generators \(\mathbb{M}_{n}\) where \(c_{\mathrm{M}}=3/G_{N}\). The null orbifold (\(\mathcal{M}=\mathcal{L}=0\)) [20] \[\mathrm{d}s^{2}=-2\,\mathrm{d}u\,\mathrm{d}r+r^{2}\,\mathrm{d}\varphi^{2} \tag{24}\] is dual to the ground state of the BMS\({}_{3}\) invariant QFT in the same way that Poincare-patch AdS\({}_{3}\) is the gravity dual of the ground state of a QFT on the plane with a CFT\({}_{2}\) fixed point in the UV. If \(\varphi\sim\varphi+2\pi\), the null orbifold has a singularity in the causal structure at \(r=0\). For our purposes, this is as irrelevant as the coordinate singularity in the Poincare patch horizon since we will construct domain wall solutions that only asymptote to the null orbifold but do not exhibit its singular behavior in the interior. Moreover, we can simply decompactify \(\varphi\). We later investigate what happens when the retarded time coordinate \(u\) gets rescaled by some factor \(\lambda\) (absorbing such factors in the state-dependent functions \(\mathcal{L}\), \(\mathcal{M}\) when possible). \[\mathrm{d}s^{2}=\mathcal{M}(\varphi)\ \mathrm{d}u^{2}-2\lambda\ \mathrm{d}u\,\mathrm{d}r+r^{2}\ \mathrm{d}\varphi^{2}\\ +\left(2\mathcal{L}(\varphi)+u\mathcal{M}(\varphi)\right)\, \mathrm{d}u\,\mathrm{d}\varphi+\ldots \tag{25}\] The superrotation charges \(\mathrm{L}_{n}\) are associated with asymptotic Killing vectors \(e^{in\varphi}\partial_{\varphi}\) and hence unaffected by a rescaling of retarded time. By contrast, the supertranslation charges \(\mathrm{M}_{n}\) are associated with asymptotic Killing vectors \(e^{in\varphi}\partial_{u}\) and thus get rescaled by \(1/\lambda\). \[[\mathrm{L}_{n},\,\mathrm{M}_{m}/\lambda]=(n-m)\,\mathrm{M}_{n+m}/\lambda+ \frac{c_{\mathrm{M}}}{12}\,n(n^{2}-1)\,\delta_{n+m,\,0} \tag{26}\] Effectively, this rescales the BMS central charge by \(\lambda\). \[[\mathrm{L}_{n},\,\mathrm{M}_{m}]=(n-m)\,\mathrm{M}_{n+m}+\frac{\lambda\,c_{ \mathrm{M}}}{12}\,n(n^{2}-1)\,\delta_{n+m,\,0} \tag{27}\] This observation will be crucial for holographic RG flows modeled by flat space domain walls, discussed in section IV. ### Field theory aspects We now summarize some aspects of CCFTs; see [21; 22] and refs. therein for more details. We start by recalling the definition of the Carrollian conformal weights, analogous to conformal weights: \[\mathrm{L}_{0}|h_{\mathrm{L}},h_{\mathrm{M}}\rangle=h_{\mathrm{L}}|h_{ \mathrm{L}},\,h_{\mathrm{M}}\rangle\qquad\quad\mathrm{M}_{0}|h_{\mathrm{L}},h_ {\mathrm{M}}\rangle=h_{\mathrm{M}}|h_{\mathrm{L}},h_{\mathrm{M}}\rangle \tag{28}\] While the interpretation of the Virasoro central charge \(c_{\mathrm{L}}\) in the \(b\mathfrak{m}_{53}\) algebra (22) is analogous to the corresponding CFT\({}_{2}\) interpretation, the interpretation of the mixed central charge \(c_{\mathrm{M}}\) is more subtle. At first glance, its precise value seems irrelevant, since a change of basis \(\mathbb{M}_{n}\to\lambda\mathbb{M}_{n}\) is an automorphism of the \(b\mathfrak{m}_{53}\) algebra upon rescaling \(c_{\mathrm{M}}\). Nevertheless, there is a Cardy-like entropy formula [23; 24] (see also [25; 26]), which for \(c_{\mathrm{L}}=0\) (and \(h_{\mathrm{M}}>0\)) reads \[S=2\pi\,h_{\mathrm{L}}\,\sqrt{\frac{c_{\mathrm{M}}}{24h_{\mathrm{M}}}}\,. \tag{29}\] The reason there is no contradiction between the appearance of \(c_{\mathrm{M}}\) in the entropy formula (29) and the fact that its value can be rescaled to an arbitrary (positive) number is decisive to understanding our RG flow results presented in later sections. The simple point is that whenever \(c_{\mathrm{M}}\) appears in dimensionless ratios, there is a meaning to this ratio. In the Cardy-like formula (29), the combination \(c_{\mathrm{M}}/h_{\mathrm{M}}\) is dimensionless. Therefore, this formula makes sense. (Another way to come to the same conclusions is to note that both \(h_{\mathrm{M}}\) and \(c_{\mathrm{M}}\) scale in the same way under the automorphism \(\mathbb{M}_{n}\to\lambda\,\mathbb{M}_{n}\), so that the entropy (29) is invariant under it.) The lesson for later is that the value of \(c_{\mathrm{M}}\) by itself is physically irrelevant, but dimensionless ratios involving \(c_{\mathrm{M}}\) can be physically relevant. We focus now on the main observable of interest, EE. It was calculated in [7] for the null orbifold, the global flat space vacuum, and for thermal states, and in [17] for any vacuum-like state, including arbitrary BMS\({}_{3}\)-descendants of the vacuum and of thermal states. We shall need only the result for the null orbifold. \[S_{\mathrm{EE}}=S_{\mathrm{L}}+S_{\mathrm{M}} \tag{30}\] with \[S_{\mathrm{L}}=\frac{c_{\mathrm{L}}}{6}\log\frac{\Delta\varphi}{\epsilon_{ \varphi}}\qquad\qquad S_{\mathrm{M}}=\frac{c_{\mathrm{M}}}{6}\left(\frac{ \Delta u}{\Delta\varphi}-\frac{\epsilon_{u}}{\epsilon_{\varphi}}\right) \tag{31}\] where \(\Delta\varphi\) (\(\Delta u\)) is the spatial (temporal) extent of the entangling region, \(\epsilon_{\varphi}\), \(\epsilon_{u}\) are UV cut-offs, and \(c_{\mathrm{L}}\), \(c_{\mathrm{M}}\) are the central charges of the \(b\mathfrak{m}_{53}\) algebra (22). The result (31) was confirmed in holographic calculations [27; 28; 29; 8]. For \(c_{\mathrm{M}}=0\) (31) coincides with one chiral half of ground state EE in a CFT\({}_{2}\) (3). The UV cut-offs \(\epsilon_{\varphi}\), \(\epsilon_{u}\) drop out in the quantum inequalities and our proposal for the \(c\)-functions discussed below, so we do not discuss them further. ### Quantum energy conditions in CCFT\({}_{2}\) By analogy to AdS\({}_{3}\)/CFT\({}_{2}\), our main interests are quantum energy conditions [17]. We define the expectation values \[2\pi\langle\mathcal{T}_{M}\rangle=\frac{c_{\mathrm{M}}}{24}\,\mathcal{M} \qquad 2\pi\langle\mathcal{T}_{L}\rangle=\frac{c_{\mathrm{L}}}{24}\mathcal{M} +\frac{c_{\mathrm{M}}}{24}(2\mathcal{L}+u\mathcal{M}^{\prime}) \tag{32}\] with conventional normalizations (primes denote \(\varphi\)-derivatives). The quantum energy condition for theories with \(c_{\mathrm{M}}=0\) \[2\pi\,\langle\mathcal{T}_{L}\rangle\geq S^{\,\prime\prime}_{\,L}+\frac{6}{c_{ \mathrm{L}}}\,S^{\,\prime 2}_{\,L} \tag{33}\] is a chiral half of the QNEC\({}_{2}\) inequalities and essentially equivalent to (8). However, we are more interested in the opposite case, when \(c_{\mathrm{L}}=0\) but \(c_{\mathrm{M}}\neq 0\). In that case, the quantum energy condition is (dots denote \(u\)-derivatives) \[2\pi\,\langle\mathcal{T}_{M}\rangle\geq\hat{S}^{\,\prime}_{\,M}+\frac{6}{c_{ \mathrm{M}}}\,S^{\,2}_{\,M}\,. \tag{34}\] The quantum inequalities above inspire the CH-like proposal of \(c\)-functions in CCFT\({}_{2}\) at the end of our paper. ## IV Domain walls in flat space In this section, we construct domain wall solutions in 3d flat space, intending to generate holographic RG-flows analogous to the ones discussed in section II.3. In section IV.1, we set the stage by deriving the possible geometries of flat space domain walls. Section IV.2 focuses on domain wall solutions in flat space Einstein-dilaton gravity. In section IV.3, we pick specific solutions that allow a holographic RG flow interpretation. ### Geometric aspects of flat space domain walls In AdS\({}_{3}\), domain walls are constructed by requiring Poincare\({}_{2}\) invariance on each slice; alternatively, we could have demanded that the conformal Killing vectors of each 2d slice generate CFT\({}_{2}\) symmetries. We follow the second approach to construct flat space domain walls and demand that the degenerate (Carrollian) induced metric has conformal Killing vectors that generate BMS\({}_{3}\) symmetries. Looking at the asymptotic expansion (23), it is suggestive to consider as ansatz the degenerate (Carrollian) induced metric \[\mathrm{d}s_{(2)}^{2}=g_{\mu\nu}^{(2)}\ \mathrm{d}u^{\mu}\ \mathrm{d}x^{\nu}=e^{2A(r)}\ \mathrm{d}\varphi^{2}+0\cdot\mathrm{d}u^{2}+0\cdot\mathrm{d}u\ \mathrm{d}\varphi\,. \tag{35}\] This ansatz ensures that even at finite values of the radial coordinate \(r\), all slices have the same features as in the asymptotic limit \(r\to\infty\). The function \(A(r)\) is arbitrary at this stage. Since the induced metric (35) is degenerate, we make sure not to use its inverse in any of our considerations. To verify this ansatz, we solve the conformal Killing equation \[\mathcal{E}^{\mu}\partial_{\mu}g^{(2)}_{\alpha\beta}+g^{(2)}_{\alpha\mu} \partial_{\rho}\mathcal{E}^{\mu}+g^{(2)}_{\beta\mu}\partial_{\alpha}\mathcal{ E}^{\mu}=g^{(2)}_{\alpha\beta}\partial_{\mu}\mathcal{E}^{\mu} \tag{36}\] for the vector field \(\xi\) using the degenerate metric (35). The result \[\xi=\left(\xi_{M}(\varphi)+u\,\xi^{\prime}_{L}(\varphi)\right)\partial_{u}+ \xi_{L}(\varphi)\,\partial_{\varphi} \tag{37}\] shows that the conformal Killing vectors (37) indeed generate centerless bm\(\mathfrak{e}_{3}\) as Lie-bracket algebra (compare e.g. with [30]). \[\left[\xi(\xi^{(1)}_{M},\,\xi^{(1)}_{L}),\,\xi(\xi^{(2)}_{M},\, \xi^{(2)}_{L})\right]_{\mathrm{L}\alpha}= \tag{38}\] \[\xi(\xi^{(1)}_{M}\xi^{(2)\,\prime}_{L}+\xi^{(1)}_{L}\xi^{(2)\, \prime}_{M}-\xi^{(2)}_{M}\xi^{(1)\,\prime}_{L}-\xi^{(2)}_{L}\xi^{(1)\,\prime} _{M},\,\xi^{(1)}_{L}\xi^{(2)\,\prime}_{L}-\xi^{(2)}_{L}\xi^{(1)\,\prime}_{L})\] Therefore, \(r=\mathrm{const.}\) slices in flat space domain walls only contain the term \(e^{2A(r)}\ \mathrm{d}\varphi^{2}\). The 3d metric describing flat space domain walls \[\boxed{\mathrm{d}s^{2}=-e^{A(r)}\,2\,\mathrm{d}u\,\mathrm{d}r+e^{2A(r)}\ \mathrm{d}\varphi^{2}} \tag{39}\] depends on one arbitrary function1 of the radial coordinate, \(A(r)\). The additional assumption implicit in (39) is that we keep Eddington-Finkelstein gauge in the interior of the bulk. This is analogous to keeping Gaussian normal coordinates in the bulk of AdS\({}_{3}\) domain walls (9). Footnote 1: One could add another function \(B(r)\) in the first term, but we have eliminated it by fixing the diffeomorphisms \(r\to f(r)\) suitably. The gauge choice (39) ensures that both metric coefficients remain bounded and never change sign, provided the function \(A(r)\) remains bounded. Let us now address curvature invariants. The Ricci tensor has a single non-zero component. \[R_{rr}=-A^{\prime\prime} \tag{40}\] Regardless of the choice of \(A(r)\), all geometries (39) have vanishing scalar curvature invariants and vanishing Cotton tensor. This means these geometries are not only locally conformally flat, but it also implies we need some Page-like curvature invariants [31] if we want to characterize these geometries. An example of such an invariant is \[P=\frac{(R_{\mu\nu}k^{\mu}k^{\nu})^{2}}{(\nabla_{\mu}\nabla_{\nu}R_{\alpha \beta})k^{\mu}k^{\nu}k^{\alpha}k^{\beta}} \tag{41}\] where \(k^{\mu}\) is any vector with non-vanishing \(r\)-component.2 However, in the following subsection, we identify an even simpler and more useful scalar invariant, namely the matter scalar, so we will not employ (41). Footnote 2: In Page’s construction, \(k^{\mu}\) had to be null, and the scalar invariants were the maximum and minimum with respect to changes of directions of \(k^{\mu}\). In our case, the Ricci tensor is so simple that the quantity \(P\) is constant not only under changes of direction but also under changes of the signature of \(k^{\mu}\) from light-like to time-like or space-like; the only requirement that \(k^{\mu}\) has to fulfill is that \(R_{\mu\nu}k^{\mu}k^{\nu}\) does not vanish unless \(R_{\mu\nu}=0\), which in our coordinates implies \(k^{\nu}\neq 0\). ### Flat space domain walls in Einstein-dilaton gravity Above, we discussed the kinematics of flat space domain walls. Here, we focus on the dynamics of these domain walls. As for AdS\({}_{3}\), we consider Einstein-dilaton gravity (10) with field equations (12). Remarkably, the field equations hold for any choice of the function \(A(r)\) in the flat space domain wall (39) provided the scalar field potential vanishes, \(V(\phi)=0\), and the scalar field obeys the ordinary differential equation \[\frac{1}{2}\,\phi^{\prime\,2}=-A^{\prime\prime}\,. \tag{42}\] The geometric reason for this surprising result is that both \((\partial\phi)^{2}\) and \(\nabla^{2}\phi\) vanish on flat space domain wall backgrounds (39) for any scalar field \(\phi\) that is independent of \(u\) and \(\varphi\). This result implies that on-shell, the combination appearing in the Ricci tensor (40) is related to (the derivative of) the scalar field. Thus, we can use the scalar field \(\phi\) as a scalar invariant that fully characterizes our geometry. (The additive integration constant contained in \(\phi\) does not play any role for geometric properties and can be chosen conveniently; we fix it by demanding \(\lim_{r\to\infty}\phi(r)\to 0\).) Moreover, we can either provide the function \(A\) as input and determine \(\phi\) by integrating once (42), or we provide \(\phi\) as input and determine \(A\) by integrating twice (42). Demanding compatibility with asymptotic flatness requires the expansion \[A(r\gg 1)=r-r_{0}+o(1) \tag{43}\] for the remaining function \(A(r)\). Introducing the new radial coordinate \(\rho=e^{r-r_{0}}\) leads to the desired asymptotic expansion of the metric \[\mathrm{d}s^{2}=-2\,\mathrm{d}u\,\mathrm{d}\rho+\rho^{2}\ \mathrm{d}\varphi^{2}+\dots \tag{44}\] In the interior the bulk metric \[\mathrm{d}s^{2}=-2e^{A(r)}\ \mathrm{d}u\,\mathrm{d}r+e^{2A(r)}\ \mathrm{d}\varphi^{2} \tag{45}\] is free from singularities as long as the function \(A(r)\) remains finite; in particular, the null orbifold singularity at \(r=0\) is absent since the factor \(e^{2A(r)}\) always is finite in the interior. The asymptotic expansion for the scalar field compatible with (43) follows from integrating the equations of motion (42). Since both the leading and the first subleading terms in (43) drop out in \(A^{\prime\prime}\), only the terms that decay at \(r\to\infty\) contribute to the scalar field. \[\phi(r\gg 1)=\phi_{0}+o(1) \tag{46}\] Without loss of generality, we set the integration constant to zero, \(\phi_{0}=0\). For example, if the sub-subleading term in \(A\) scales like \(e^{-r}\), then the first term in the large-\(r\) expansion of \(\phi\) decays like \(e^{-r/2}\). We highlight an important subtlety. Reality of the field configuration requires the inequality \[A^{\prime\prime}\leq 0 \tag{47}\] in the whole range of definition of the function \(A\). Thus, when designing flat space domain walls by choosing some function \(A(r)\), it is crucial to obey the concavity condition (47) for all values of the radial coordinate \(r\). If we slightly change the asymptotic behavior (43), \[A(r\gg 1)=\lambda^{-1}\,(r-r_{0})+o(1)\qquad\lambda\in\mathbb{R}^{+} \tag{48}\] and use the radial coordinate \(\rho=e^{(r-r_{0})/\lambda}\), the asymptotic expansion of the metric \[\mathrm{d}s^{2}=-2\lambda\ \mathrm{d}u\,\mathrm{d}\rho+\rho^{2}\ \mathrm{d} \varphi^{2}+\ldots \tag{49}\] shows that the first term is rescaled by \(\lambda\). We will exploit this property in the next subsection to establish holographic RG flows. ### Flat space holographic RG flow example We are finally able to model holographic RG flows for BMS\({}_{3}\) invariant QFTs. One possibility to generate flat space domain walls is to take any AdS\({}_{3}\) domain wall for some potential, take the scalar field appearing in that solution as input, and construct the function \(A\) by integrating twice (42). Any such choice generates a legitimate flat space domain wall; a subset of them generates an associated holographic RG flow between BMS\({}_{3}\) invariant UV and IR fixed points. We will be more precise and general about this in section V. For now, we focus on a specific example. In our first example, we pick the same scalar field as for AdS\({}_{3}\) domain walls (17) (renaming the radial \(\rho\) into \(r\)). In Fig. 1, we plot the scalar field \(\phi(r)\) and the associated bulk energy \(\frac{1}{2}\,\phi^{\prime}(r)^{2}\) for the choices \(\phi_{0}=0\), \(j=54\) and \(-\alpha\,j^{2}=1\). The blue curve depicts the scalar field and has a clear kink-like structure, interpolating between two different asymptotic values. The orange curve shows that the bulk energy is localized in the interior of the bulk. Its maximum is at \(r=\ln 2\) [for general \(\alpha,j\) the maximum is at \(r=\ln(-2\alpha j^{2})\)]. Integrating twice (42) yields \[A(r)=A_{1}\,r+A_{0}-\frac{j^{2}}{16}\left(\frac{1}{e^{r}-\alpha j^{2}}+\frac{ r-\ln\left(e^{r}-\alpha j^{2}\right)}{\alpha j^{2}}\right) \tag{50}\] with two integration constants \(A_{1}\), \(A_{0}\). To obtain the desired asymptotics (43) we fix \(A_{1}=1\), yielding \[A(r\to\infty)=r+A_{0}-\frac{j^{2}}{8}\,e^{-r}+\mathcal{O}(e^{-2r})\,. \tag{51}\] Depending on the sign of \(\alpha\), there are different possibilities. In the AdS case, we needed negative \(\alpha\) to generate domain walls with a CFT\({}_{2}\) fixed point in the IR. We check now whether something analogous is true for the corresponding flat space domain wall. For negative \(\alpha\), there is no singularity in \(A(r)\) for any finite value of \(r\). Therefore, the coordinate range of this domain wall is (\(-\infty\), \(\infty\)), and we obtain a second asymptotic region at \(r\to-\infty\). In this limit, the function (50) expands as \[A(r\to-\infty)=\left(1-\frac{1}{16\alpha}\right)r+A_{0}+\frac{1+\ln\left(- \alpha j^{2}\right)}{16\alpha}+\mathcal{O}(e^{2r})\,. \tag{52}\] Denoting the UV central charge by \(c_{\mathrm{M}}^{\mathrm{UV}}\), comparison of the two asymptotic expansions (51) and (52) yields a result for the IR central charge \[c_{\mathrm{M}}^{\mathrm{IR}}=\frac{c_{\mathrm{M}}^{\mathrm{UV}}}{1-\frac{1}{16 \alpha}}<c_{\mathrm{M}}^{\mathrm{UV}} \tag{53}\] according to the discussion at the end of section III.1. Note that the ratio \(c_{\mathrm{M}}^{\mathrm{IR}}/c_{\mathrm{M}}^{\mathrm{UV}}\leq 1\) is dimensionless, and hence equation (53) is meaningful [compare with the discussion after the Cardy-like formula (29)]. As evident from the plot in Fig. 2, the function \[c_{\mathrm{a}n}(r):=\frac{c_{\mathrm{M}}^{\mathrm{UV}}}{A^{\prime}(r)} \tag{54}\] is a \(c\)-function for this domain wall solution since it approaches the correct UV and IR values and is monotonically decreasing towards the IR. The result (53) is precisely the same relation as for the corresponding holographic RG flow in AdS\({}_{3}\) (with the Virasoro central charge replaced by the bints\({}_{3}\) central charge \(c_{\text{M}}\)), see (21). In the next section, we shall prove that this is not a coincidence but a generic feature relating AdS\({}_{3}\) and flat space domain walls and their corresponding RG flow interpretations. Before generalizing our results, consider the case of positive \(\alpha\). In AdS\({}_{3}\)/CFT\({}_{2}\) such "domain walls" do not model an RG flow from a UV to an IR fixed point, but rather an RG flow from a UV fixed point to the IR, but without CFT\({}_{2}\) fixed point in the IR. As we now show, something comparable happens in flat space. Indeed, for positive \(\alpha\) the IR boundary is at finite value of \(r\), \[r_{\text{ IR}}=\ln\left(\alpha j^{2}\right). \tag{55}\] The scalar field and the Ricci tensor are singular at the IR boundary, so in this case, the flow ends at a naked singularity on the gravity side, and there is no BMS\({}_{3}\) field theory interpretation in the IR. Finally, we consider the limiting case of vanishing \(\alpha\). Here the \(c\)-function tends to zero in the IR (which is again obtained in the limit \(r\to-\infty\)); see Fig. 3. In this case, the IR fixed point is a trivial BMS\({}_{3}\)-invariant QFT with vanishing central charge \(c_{\text{M}}=0\). All these features are analogous to corresponding AdS\({}_{3}\)/CFT\({}_{2}\) features, see e.g. the discussion in [12]. ## V Flat space holographic RG flow theorems In this section, we state and prove three theorems for RG flows modeled by domain wall solutions in 3d flat space Einstein-dilaton gravity. In section V.1, we collect some definitions used in all theorems. In section V.2, we state and prove a correspondence theorem, relating all AdS\({}_{3}\) domain walls to corresponding flat space domain walls. In section V.3, we state and prove a monotonicity theorem, showing that bulk unitarity implies a monotonically decreasing \(c\)-function. Putting together both theorems, we prove a third one that shows the equivalence of the UV/IR ratios of Virasoro and bints\({}_{3}\) central charges. ### Definitions In this whole section, we are solely concerned with holographic RG flows generated by flat space domain wall solutions described in section IV. For these domain walls, we found a \(c\)-function (54). While this definition was based on a single example studied in section IV.3, it is natural to define generically the _flat space holographic \(c\)-function_ \[c_{\text{abs}}(r):=\frac{c_{\text{M}}^{\text{UV}}}{A^{\prime}(r)}\qquad\text{ fixing }\lim_{r\to\infty}A(r)=r+\mathcal{O}(1)\,. \tag{56}\] This formally coincides with the AdS domain wall \(c\)-function discovered in the seminal work [3]. We define the term _proper domain wall solution of AdS\({}_{3}\)-Einstein-dilaton gravity_ to mean an exact solution of the equations of motion (12) with some scalar potential of the form (12) such that in domain wall coordinates (9) metric and scalar field have the following properties: 1. for \(\rho\to\infty\) the metric asymptotes to Poincare patch AdS\({}_{3}\) with unit AdS-radius and the scalar field approaches zero 2. for \(\rho\to-\infty\) the metric asymptotes to Poincare patch AdS\({}_{3}\) with AdS-radius smaller than one and the scalar field approaches a constant (that can be zero) 3. for finite values of \(\rho\) the metric function \(A(\rho)\) and the scalar field \(\phi(\rho)\) are bounded real functions; moreover, \(A(\rho)\) is at least \(C^{2}\) and \(\phi(\rho)\) at least \(C^{1}\) Similarly, we define the term _proper flat-space domain wall solution_ to mean an exact solution of the equations of motion (12) with vanishing scalar potential, \(V(\phi)=0\), such that in flat space domain wall coordinates (39) metric and scalar field have the following properties: 1. for \(r\to\infty\) the metric asymptotes to the null orbifold (44) and the scalar field approaches zero 2. for \(r\to-\infty\) the metric asymptotes to the null orbifold, with a possible rescaling of the first term as in (49) (with some positive \(\lambda\)) and the scalar field approaches a constant (that can be zero) 3. for finite values of \(r\) the metric function \(A(r)\) and the scalar field \(\phi(r)\) are bounded real functions; moreover, \(A(r)\) is at least \(C^{2}\) and \(\phi(r)\) at least \(C^{1}\) Figure 3: Plot of \(1/A^{\prime}(r)\) for \(j=1\) and \(\alpha=0\) shows the \(c\)-function vanishes in the IR By _UV (IR)_, we always mean the limits \(\rho,r\to\infty\) (\(\rho,r\to-\infty\)) in the domain wall coordinates referred to above. The Virasoro central charges appearing in domain wall solutions of AdS\({}_{3}\)-Einstein-dilaton gravity are, therefore, denoted by \(c^{\text{\tiny{UV}}}\) at the UV boundary and by \(c^{\text{\tiny{IR}}}\) at the IR boundary. The Zamolodchikov \(c\)-theorem implies \[c^{\text{\tiny{IR}}}\leq c^{\text{\tiny{UV}}}\,. \tag{57}\] Finally, note that the definitions above imply that domain walls always connect UV and IR fixed points, i.e., cases where we do not have an IR fixed point, such as the one discussed at the end of section IV.3 (positive \(\alpha\)), are excluded by our definitions of proper domain walls. ### Correspondence theorem Equipped with the definitions of section V.1, we can now formulate our first theorem. It allows to translate any proper AdS\({}_{3}\) domain wall solution into a corresponding proper flat domain wall solution. **Theorem 1** (AdS\({}_{3}\)/flat space domain wall correspondence): _Given a proper domain wall solution of AdS\({}_{3}\)-Einstein-dilaton gravity, there is a corresponding proper flat-space domain wall solution with the following properties:_ 1. _In the UV, the flat space asymptotic symmetries generate a_ \(\mathfrak{bms}_{3}\) _algebra with central charge_ \(c^{\text{\tiny{UV}}}_{M}=\frac{3}{G_{\text{\tiny{W}}}}\)_._ 2. _In the IR, the flat space asymptotic symmetries generate a_ \(\mathfrak{bms}_{3}\) _algebra with a central charge_ \(c^{\text{\tiny{EH}}}_{M}\) _that in general differs from_ \(c^{\text{\tiny{UV}}}_{M}\)_._ _Proof._ Start with some scalar field \(\phi(\rho)\) that generates a proper domain wall solution of AdS\({}_{3}\)-Einstein-dilaton gravity and define \(\phi(r)\) to be the scalar field of the corresponding proper flat space domain wall. Since by assumption, the AdS\({}_{3}\) domain wall is proper, also the flat space domain wall is proper, meaning there are no singularities at finite values of \(r\). Therefore, we need to consider only the UV and IR limits of the scalar field and the metric. By definition we have the expansions \(\phi(r\to\infty)=o(1)\) and \(\phi(r\to-\infty)=\phi_{1}+o(1)\). Since \(\phi\) is differentiable we have \(\phi^{\prime}(r\to\infty)=o(1/r)=\phi^{\prime}(r\to-\infty)\). Integrating the equation of motion (42) yields \(A(r\to\infty)=A_{1}r+A_{0}+o(1)\) and \(A(r\to-\infty)=A_{2}r+A_{3}+o(1)\). The quantities \(A_{i}\) are integration constants; only two of them can be chosen independently. Without loss of generality, we fix \(A_{1}=1\) and set \(A_{0}=-r_{0}\), thereby recovering the expansion (43). According to the discussion at the beginning of section III.1, we then recover the \(\mathfrak{bms}_{3}\) algebra as asymptotic symmetry algebra in the UV with the usual central charge \(c_{\text{\tiny{M}}}=3/G_{N}\). Similarly, we recover a \(\mathfrak{bms}_{3}\) algebra as asymptotic symmetry algebra in the IR, but with a value of the central charge that depends on \(A_{2}\), according to the discussion in the second half of section III.1. \(\square\) The correspondence theorem 1 could be extended to non-proper domain walls (those with no BMS\({}_{3}\) fixed point in the IR and instead terminate in a naked singularity), but we refrain from doing so. In the final subsection, we prove two additional theorems and start by addressing the paramount issue of monotonicity of the flat space domain wall \(c\)-function. ### Monotonicity theorem and central charge ratio equivalence **Theorem 2** (Monotonicity of \(c\)-function): _The \(c\)-function associated with any flat space domain wall solution obtained through the correspondence theorem 1 is a monotonically decreasing function when flowing from the UV to the IR._ _Proof._ Since the scalar field is real, bounded and \(C^{1}\) the metric function \(A(r)\) must obey the concavity inequality (47) for all values of \(r\). Denoting some fiducial radius as \(r_{\text{\tiny{UV}}}\) and another, smaller, fiducial radius as \(r_{\text{\tiny{IR}}}<r_{\text{\tiny{UV}}}\), integrating the concavity condition \(A^{\prime\prime}(r)\leq 0\) from \(r_{\text{\tiny{IR}}}\) to \(r_{\text{\tiny{UV}}}\) implies \[A^{\prime}(r_{\text{\tiny{UV}}})\leq A^{\prime}(r_{\text{\tiny{IR}}})\,. \tag{58}\] Inserting this inequality into the definition of the \(c\)-function (54), \[c_{\text{\tiny{obs}}}(r_{\text{\tiny{UV}}})=\frac{c^{\text{\tiny{UV}}}_{\text{ \tiny{M}}}}{A^{\prime}(r_{\text{\tiny{UV}}})}\geq\frac{c^{\text{\tiny{UV}}}_{ \text{\tiny{M}}}}{A^{\prime}(r_{\text{\tiny{IR}}})}=c_{\text{\tiny{obs}}}(r_{ \text{\tiny{IR}}}) \tag{59}\] establishes that the \(c\)-function is a monotonically decreasing function when flowing from the UV to the IR, \(c_{\text{\tiny{obs}}}(r_{\text{\tiny{UV}}})\geq c_{\text{\tiny{obs}}}(r_{\text{ \tiny{IR}}})\). \(\square\) The theorem 2 shows that (54) is, indeed, a BMS\({}_{3}\)\(c\)-function for proper flat space domain wall solutions. Note that one can consider theorem 2 to be a consequence of bulk unitarity; indeed, if we drop the assumption of the scalar field being real and allow for a purely imaginary scalar field, we can circumvent theorem 2; the price for this is effectively a switched sign in the kinetic term of the scalar field, which violates bulk unitarity. We can be more quantitative and combine both theorems to show that the ratio between IR and UV Virasoro central charges is equivalent to the corresponding ratio of \(\mathfrak{bms}_{3}\) central charges. \[1\leq\frac{c^{\text{\tiny{UV}}}}{c^{\text{\tiny{IR}}}}=\frac{c^{\text{\tiny{ UV}}}_{\text{\tiny{M}}}}{c^{\text{\tiny{UV}}}_{\text{\tiny{M}}}}\geq 1 \tag{60}\] The first inequality follows from Zamolodchikov's \(c\)-theorem [1]. The last inequality is the statement of theorem 2 that we just proved. What remains to be shown is the equality in the middle. This central charge ratio equivalence is guaranteed by the third theorem. **Theorem 3** (CFT/CCFT central charge ratio equivalence): _Given the assumptions of theorem 1, the ratio of UV/IR central charges obeys the equality in (60)._ _Proof._ For proper flat space domain walls, the UV/IR ratio of \(\mathfrak{bms}_{3}\) central charges is given by \[\frac{c_{\rm M}^{\rm UV}}{c_{\rm M}^{\rm IR}}=\frac{A^{\prime}(r\to-\infty)}{A^{ \prime}(r\to\infty)}=A_{2}\geq 1\,. \tag{61}\] The equalities follow from the proof of theorem 1 (and the discussion in section III.1). The quantity \(A_{2}\) was also defined in the proof of theorem 1. The inequality in (61) follows from theorem 2. For proper AdS\({}_{3}\) domain walls, the UV/IR ratio of Virasoro central charges is given by the ratio of UV/IR AdS radii. The AdS radii follow from the UV and IR behavior of the function \(A(\rho)\) appearing in domain wall coordinates (9). Since by assumption we set the AdS radius to unity in the UV, we must have the expansion \(A(\rho\to\infty)=\rho+\tilde{A}_{0}+o(1)\). Without loss of generality, we equate \(\tilde{A}_{0}=A_{0}\) by a constant shift of \(\rho\). In the IR we have the expansion \(A(\rho\to-\infty)=\tilde{A}_{2}\rho+\tilde{A}_{3}+o(1)\). Therefore, the UV/IR ratio of Virasoro central charges is given by \[\frac{c^{\rm UV}}{c^{\rm IR}}=\frac{A^{\prime}(\rho\to-\infty)}{A^{\prime}( \rho\to\infty)}=\tilde{A}_{2}\geq 1\,. \tag{62}\] What remains to be shown is \(A_{2}=\tilde{A}_{2}\). Since \(A(r)\) and \(A(\rho)\) have the same leading and next-to-leading order terms in the UV, it is sufficient to show that both of them obey the same second-order differential equation \(A^{\prime\prime}=-\frac{1}{2}\,(\phi^{\prime})^{2}\). For flat space domain walls, this follows from (42). For AdS\({}_{3}\) domain walls, this follows from differentiating the left equation (14) with respect to \(\rho\) and, using the chain rule, insert the right equation (14) on the right-hand side of the left equation, viz., \(\mathrm{d}^{2}A/\,\mathrm{d}\rho^{2}=-\frac{1}{2}\,\,\mathrm{d}W/\,\mathrm{d} \phi\cdot\mathrm{d}\phi/\,\mathrm{d}\rho=-\frac{1}{2}\,(\mathrm{d}\phi/\, \mathrm{d}\rho)^{2}\). Since \(A(\rho)\) and \(A(r)\) obey the same second-order differential equation and have the same linear and constant terms in the UV, these two functions must coincide for all values of the radial coordinates. This implies, in particular, \(\tilde{A}_{2}=A_{2}\). \(\Box\) In conclusion, the three theorems proven in this section provide Carrollian \(c\)-functions (56) of domain wall solutions (39) to 3d Einstein-dilaton gravity that describe flat space holographic RG flows from a Carrollian UV fixed point to a Carrollian IR fixed point. Bulk unitarity guarantees the monotonicity of our domain wall \(c\)-functions. Moreover, to every AdS\({}_{3}\) domain wall solution (reviewed in section II.3), there is a corresponding flat space domain wall solution (discussed in section IV) with the same radial profile and the same UV/IR-ratios of central charges. The principal difference is that AdS\({}_{3}\) domain walls require a scalar field potential for support, whereas flat space domain walls demand vanishing potential. The drawback of our \(c\)-functions (56) is that we need some bulk dual, which may not always be available. Thus, it would be satisfying to have an intrinsic construction for a Carrollian \(c\)-function without recourse to holography, either along the lines of Zamolodchikov's original design [1] or the CH construction reviewed in sections II.1. We tentatively follow the latter path in our final section, inspired by the relation between the CH \(c\)-function and QNEC\({}_{2}\) recapitulated in section II.2. We are guided by the considerations of sections II.1, II.2, III.2, and III.3. ## VI Tentative proposal for Casini-Huerta-inspired Carrollian \(c\)-function Without further ado, here is our tentative proposal for the Carrollian \(c\)-functions in \(\mathrm{CCFT}_{2}\): \[\boxed{c_{\rm L}(\Delta u,\,\Delta\varphi):=6\Delta\varphi\,S_{L}^{\prime}} \qquad c_{\rm M}(\Delta u,\,\Delta\varphi):=6\Delta\varphi\,\hat{S}_{M} \tag{63}\] Prime denotes \(\varphi\)-derivatives and dot \(u\)-derivatives. A sanity check that our proposal is not ruled out immediately is to consider the special case \(c_{\rm M}=0\), \(c_{\rm L}\neq 0\) corresponding to a chiral half of a \(\mathrm{CFT}_{2}\). In this case, the identity \[\frac{1}{6\Delta\varphi}\,c_{\rm L}^{\prime}=S_{L}^{\prime\prime}+\frac{6}{c_ {\rm L}}\,S_{L}^{\prime\,2} \tag{64}\] recovers the expected QNEC\({}_{2}\) combination, see the discussion in section II.2. Thus, for \(c_{\rm M}=0\), \(c_{\rm L}\neq 0\), we recover the CH \(c\)-function for a chiral half of a \(\mathrm{CFT}_{2}\). Another consistency check is that our definitions are independent of the UV cut-offs, as expected on physical grounds. Finally, even in the more interesting case \(c_{\rm L}=0\), \(c_{\rm M}\neq 0\), the \(c\)-function \(c_{\rm M}\) reproduces the quantum energy combination of terms (34). \[\frac{1}{6\Delta\varphi}\,c_{\rm M}^{\prime}=\hat{S}_{M}^{\,\prime}+\frac{6}{c_ {\rm M}}\,\hat{S}_{M}^{\,2} \tag{65}\] Thus, if the \(c\)-function \(c_{\rm M}\) is monotonic, it implies the quantum energy condition (34) for the ground state. The arguments above are neither proof of the quantum energy conditions nor proof that \(c_{\rm M}\) is a \(c\)-function; they merely show that our putative \(c\)-function in (63) is consistent with the \(\mathrm{CCFT}_{2}\) quantum energy conditions [17]. We leave applications and scrutiny of our proposal (63) to future work. ###### Acknowledgements. We thank Luis Apolo, Arjun Bagchi, Rudranil Basu, Jacqueline Caminiti, Rob Myers, and Wei Song for discussions on flat space holographic EE. This work was supported by the Austrian Science Fund (FWF), projects P 32581, P 33789, and P 36619. Some of our results were presented in March 2021 at the virtual workshop "Flat Asymptotia" organized by Aritra Banerjee, Sudip Ghosh, Slava Lysov, and Yasha Neiman. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities. The final part of this research was conducted while DG was visiting the Okinawa Institute of Science and Technology (OIST) through the Theoretical Sciences Visiting Program (TSVP) in July/August 2023. ### Note added After posting our paper on the arXiv we were apprised that two of our main results and one of our theorems were published in [32]. In particular, the flat space domain wall geometries (39) correspond to their (4.11) upon redefining our radial coordinate as \(\mathrm{d}r\to e^{-A(r)}\,\mathrm{d}r\), the domain wall \(c\)-function (56) essentially corresponds to their (4.16), and theorem 2 corresponds to their statement after (4.17). New results in our work not contained in [32] include the proof that flat space domain walls are solutions to Einstein-dilaton gravity without scalar field potential, a discussion of their curvature invariants, the flat space holographic RG flow example, theorems 1 and 3, and the tentative proposal for the CH \(c\)-function.
2309.16256
On finding dense sub-lattices as low energy states of a quantum Hamiltonian
Lattice-based cryptography has emerged as one of the most prominent candidates for post-quantum cryptography, projected to be secure against the imminent threat of large-scale fault-tolerant quantum computers. The Shortest Vector Problem (SVP) is to find the shortest non-zero vector in a given lattice. It is fundamental to lattice-based cryptography and believed to be hard even for quantum computers. We study a natural generalization of the SVP known as the $K$-Densest Sub-lattice Problem ($K$-DSP): to find the densest $K$-dimensional sub-lattice of a given lattice. We formulate $K$-DSP as finding the first excited state of a Z-basis Hamiltonian, making $K$-DSP amenable to investigation via an array of quantum algorithms, including Grover search, quantum Gibbs sampling, adiabatic, and Variational Quantum Algorithms. The complexity of the algorithms depends on the basis through which the input lattice is presented. We present a classical polynomial-time algorithm that takes an arbitrary input basis and preprocesses it into inputs suited to quantum algorithms. With preprocessing, we prove that $O(KN^2)$ qubits suffice for solving $K$-DSP for $N$ dimensional input lattices. We empirically demonstrate the performance of a Quantum Approximate Optimization Algorithm $K$-DSP solver for low dimensions, highlighting the influence of a good preprocessed input basis. We then discuss the hardness of $K$-DSP in relation to the SVP, to see if there is reason to build post-quantum cryptography on $K$-DSP. We devise a quantum algorithm that solves $K$-DSP with run-time exponent $(5KN\log{N})/2$. Therefore, for fixed $K$, $K$-DSP is no more than polynomially harder than the SVP.
Júlia Barberà Rodríguez, Nicolas Gama, Anand Kumar Narayanan, David Joseph
2023-09-28T08:48:38Z
http://arxiv.org/abs/2309.16256v1
# On finding dense sub-lattices as low energy states of a quantum Hamiltonian ###### Abstract Lattice-based cryptography has emerged as one of the most prominent candidates for post-quantum cryptography, projected to be secure against the imminent threat of large-scale fault-tolerant quantum computers. The Shortest Vector Problem (SVP) is to find the shortest non-zero vector in a given lattice. It is fundamental to lattice-based cryptography and believed to be hard even for quantum computers. We study a natural generalization of the SVP known as the \(K\)-Densset Sub-lattice Problem (\(K\)-DSP): to find the densest \(K\)-dimensional sub-lattice of a given lattice. We formulate \(K\)-DSP as finding the first excited state of a Z-basis Hamiltonian, making \(K\)-DSP amenable to investigation via an array of quantum algorithms, including Grover search, quantum Gibbs sampling, adiabatic, and Variational Quantum Algorithms. The complexity of the algorithms depends on the basis through which the input lattice is presented. We present a classical polynomial-time algorithm that takes an arbitrary input basis and preprocesses it into inputs suited to quantum algorithms. With preprocessing, we prove that \(O(KN^{2})\) qubits suffice for solving \(K\)-DSP for \(N\) dimensional input lattices. We empirically demonstrate the performance of a Quantum Approximate Optimization Algorithm \(K\)-DSP solver for low dimensions, highlighting the influence of a good preprocessed input basis. We then discuss the hardness of \(K\)-DSP in relation to the SVP, to see if there is reason to build post-quantum cryptography on \(K\)-DSP. We devise a quantum algorithm that solves \(K\)-DSP with run-time exponent \((5KN\log N)/2\). Therefore, for fixed \(K\), \(K\)-DSP is no more than polynomially harder than the SVP. The central insight we use is similar in spirit to those underlying the best known classical \(K\)-DSP algorithm due to Dadush and Micciancio. Whether the exponential dependence on \(K\) can be lowered remains an open question. ## I Introduction In 1994, Shor revolutionized quantum computing and cryptography by devising quantum algorithms that solve prime factorization and discrete logarithm in polynomial time [1] using a fault-tolerant quantum computer. These problems are believed to be intractable on classical computers. Consequently, most public-key cryptographic protocols were built on such hardness assumptions [2; 3]. The emergence of fault-tolerant quantum computers that can run Shor's algorithm poses an imminent threat to public-key cryptography based on factoring or discrete logarithms. As a preventive measure, post-quantum cryptography (PQC) is being developed with urgency, spurred on by initiatives by the National Institute of Standards and Technology (NIST). Post-quantum cryptography is built on problems believed to be hard even on fault-tolerant quantum computers. Within PQC, lattice-based cryptography (LBC) [4] is considered to be one of the most promising solutions to build quantum-safe primitives [5]. This is also supported by complexity theoretic hardness reductions [6], as well as the failure of the best-known quantum algorithmic methods in breaking lattice-based problems. Public-key encryption schemes based on lattices were pioneered by Ajtai [7], and put on sound complexity theoretic footing by Regev [6]. The Shortest Vector Problem (SVP) is to find the shortest non-zero vector of a given lattice. Regev based his schemes on the Learning With Errors (LWE) problem, the dual problem to the SVP. The security of these systems is supported by a worst-case to average case reduction from SVP to LWE like problems. Thus, the hardness of SVP like problems is the foundation of LBC. Since then, there has been a long line of LBC research leading to soon-to-be-standardized schemes such as Kyber, Dilithium, and Falcon [8; 9; 10]. The \(K\)-Densset Sub-lattice Problem (\(K\)-DSP) seeks the densest sub-lattice of a prescribed dimension \(K\) within a given lattice. It is a generalization of the SVP, since SVP is merely 1-DSP. Being a generalization, \(K\)-DSP is at least as hard as the SVP, but has received much less attention. While the primary focus of finding dense sub-lattices lies in cryptography [11], it also holds relevance in communication and information theory [12], as well as in crystallography [13]. The best-known classical algorithms for solving SVP start with classical reduction algorithms such as the LLL algorithm [14] and perform enumeration or sieving. Enumeration algorithms started with Pohst [15] and Kannan [16], and have evolved to be the best-known algorithms for SVP (see [17] and the references therein). Despite considerable interest, it is not clear if these sophisticated classical algorithms can be sped up meaningfully on a fault-tolerant quantum computer. However, the current interest is in designing quantum algorithms in the Noise Intermediate Scale Quantum (NISQ) era [18], with limited qubits that are extremely sensitive to noise and very susceptible to decoherence effects. In this context, there is a necessity to develop hardware and quantum algorithms that are able to work properly even in the presence of noise [19]. Within this framework, Varia tional Quantum Algorithms (VQAs) emerge as one of the most encouraging approaches [20]. These algorithms rely on running a parametrized quantum circuit on a quantum computer, and perform classical optimization afterwards to update the parameters of this circuit. The best known classical algorithms for \(K\)-DSP is due to Dadush-Micciancio and for \(K=\Theta(N)\) has a significantly worse complexity \(K^{O(KN)}\) than SVP solvers [21]. There have been previous works on mapping the SVP into one of finding low energy states of a Hamiltonian. Shorts vectors in the lattice map to eigenstates of the Hamiltonian with eigenvalues corresponding to the vector lengths. In [22], the Bose-Hubbard Hamiltonian is used to solve the SVP via adiabatic quantum computation. Since their lowest energy state is the zero vector, the first excited state is the target of the algorithm. The results show that, outside the adiabatic regime and for up to 4-dimensional lattices, the solution for the SVP can be found with the same probability as the ground state or the second excited state. Two quantum algorithms are also proposed in [23], where the authors consider the mapping of the SVP now into Ising coefficients which result in Hamiltonians that are more reusable in the context of gate model computers and certain annealer architectures. These algorithms are limited to low-dimensional lattices. A different approach with results for up to 28-dimensional lattices is presented in [24], employing the Ising spin Hamiltonian mapping followed by a Variational Quantum Eigensolver (VQE). See [25; 26; 27; 28] for additional quantum SVP-solving approaches. #### i.0.1 Contribution We construct a Hamiltonian from the input lattice whose eigenstates correspond to sub-lattices (of dimension at most \(K\)) with eigenvalues proportional to the covolumes of the sub-lattices. The ground energy eigenstates correspond to sub-lattices of dimension less than \(K\). The solution to the \(K\)-DSP are the first excited states, corresponding to \(K\)-dimensional sub-lattices of the lowest covolume. Through this Hamiltonian correspondence a variety of quantum algorithms ranging from adiabatic quantum computing to QAOA may be invoked to solve \(K\)-DSP. Such generality means that this Hamiltonian formulation can be useful in NISQ scenarios where VQA and QAOA type approaches yield the best results, as well as in adiabatic systems, which may represent the most challenging engineering milestones in developing fault-tolerant quantum computation. For quantum algorithms that demand that the solutions to \(K\)-DSP be encoded as the lowest energy eigenstate, we present a way to penalize the ground states (fewer than \(K\)-dimensional sub-lattices) of the original Hamiltonian. To tune the penalization accurately, we resort to estimating the spectral gap of the Hamiltonian. The difficulty of lattice problems such as the SVP or \(K\)-DSP are not intrinsic to the input lattice, but may depend on the basis through which the input lattice is presented. We present a classical polynomial time algorithm that takes the input basis and preprocesses it to be suitable for being solved using quantum algorithms. In the simplest case, this preprocessing merely LLL-reduces the input basis and feeds it to the quantum algorithm. The LLL-reduced basis is a basis for the same input lattice, but is better shaped to aid in the low energy eigenstate search. In general, the preprocessing is far more intricate and may only call the quantum algorithm on a carefully chosen lower dimensional instance of \(K\)-DSP. With the preprocessing, we prove that \(O(KN^{2})\) qubits suffice for constructing the \(K\)-DSP Hamiltonian for \(N\) dimensional input lattices with the assurance that the solution is contained in the span of the qubits (see Theorem 1). A key technical insight that leads to the bound is the invariance of the densest sub-lattice under LLL-reduction. In particular, there is always a solution that is part of an LLL-reduced basis. With the number of qubits bounded, we may invoke quantum algorithms such as Grover search [29], phase estimation followed by amplitude amplification [30], and quantum Gibbs sampling [31] to find a low energy state. These algorithms at the minimum come with a quadratic speedup over exhaustive search, but we may hope for more. The prospect of using quantum Gibbs sampling is particularly enticing, since there is recent work proving that low energy states of sparse (local) Hamiltonians drawn from random ensembles can be found efficiently [32]. We do not expect cryptographic instances of lattice problems to fall within these ensembles, but further investigation is warranted before speculation. In particular, are there natural ensembles of lattice problems, perhaps arising in chemistry or digital communication for which quantum Gibbs sampling finds densest sub-lattices fast? The aforementioned quantum algorithms require fault-tolerant quantum computers. When constrained to NISQ devises, we can look to VQAs such as QAOA to find the low energy states. We empirically investigate the performance of QAOA in small dimensions in Section IV. We finally investigate the hardness of \(K\)-DSP as \(K\) increases. Hardness cannot decrease with \(K\), since \(K_{1}\)-DSP can be embedded into \(K_{2}\)-DSP for \(K_{2}>K_{1}\) by appending short orthogonal basis vectors. But how much harder is \(K\)-DSP for large \(K\), in particular, when \(K\) approaches half the ambient dimension? If the best known \(K\)-DSP algorithms take significantly longer than the SVP algorithms, then there is cause to base post-quantum cryptosystems on \(K\)-DSP. A natural way to address this question is to try to solve \(K\)-DSP given oracle access to an SVP solver. The best known SVP solvers, classical or quantum, take exponential time in the lattice dimension. With an SVP solver, we observe that the input to quantum algorithms can be preprocessed to be an HKZ-basis, a stronger guarantee than an LLL-basis. Exploiting this stronger structure, we devise a quantum algorithm that solves \(K\)-DSP with run-time exponent \((5KN\log N)/2\) (see Theorem 2). Therefore, for fixed \(K\), \(K\)-DSP is no more than polynomially harder than the SVP. Our algorithm is close in spirit to the aforementioned best known classical \(K\)-DSP algorithm by Dadush and Micciancio. In comparison, our run-time exponent is worse by a small constant. But our algorithm is simpler in the sense of not requiring recursive calls to the SVP oracle. Whether the exponential dependence on \(K\) can be reduced remains an important open question. In Section II, we introduce relevant theoretical concepts regarding lattices and quantum algorithms. In Section III, we describe the mapping of the Densest Sublattice Problem to one of finding low energy states of a Z spins Hamiltonian. We also prove bounds on the quantum resources and then develop an approach to penalize trivial solutions by computing an upper bound on the spectral gap. In Section IV, we simulate a 2-DSP quantum solver and present the results obtained. Finally, Section V discusses the conclusions and potential directions for future research. ## II Preliminaries ### Lattices A lattice is simply a pattern of repeating points in \(N\)-dimensional space, for example, a section of the integer lattice in 3 dimensions is visualized in Fig. 1. More formally, we define a lattice as a discrete free \(\mathbb{Z}\)-sub-module of the real space \(\mathbb{R}^{N}\) for some finite dimension \(N\), endowed with the Euclidean metric. The dimension of the lattice is its rank as a \(\mathbb{Z}\)-module, which is at most \(N\). A lattice of dimension \(r\leq N\) can be described by an ordered basis \(B=(\mathbf{b}_{1},\mathbf{b}_{2},\ldots,\mathbf{b}_{r})\subset\mathbb{R}^{n}\) of \(\mathbb{R}\)-linearly independent vectors, presented in our algorithms as the matrix \(\mathbf{B}\) consisting of basis vectors as the rows. Throughout, we represent vectors as row vectors. The lattice generated by \(\mathbf{B}\) is the integer linear combinations \(\mathcal{L}(\mathbf{B}):=\sum_{i=1}^{N}x_{i}\mathbf{b}_{i}=\{\mathbf{x}\mathbf{ B}\mid\mathbf{x}\in\mathbb{Z}^{N}\}\) of the basis vectors. The computational complexity of the problems we study is not intrinsic to the input lattice but may depend on the choice of basis generating the input lattice. One notion that helps solve these problems faster is that of a 'good' basis. Informally, a basis that contains only short and nearly orthogonal vectors is said to be a good basis. On the contrary, a bad basis consists of long vectors with large colinearities. A lattice has infinitely many bases, including a small number of good ones and a very large number of bad ones. The LLL algorithm [14] is a powerful polynomial time lattice reduction technique that transforms a given basis of a lattice into a "better" basis for the same lattice, consisting of shorter basis vectors. Nevertheless, the basis returned is not good enough to solve the SVP. Such vectors that correspond to the output that LLL-reduction provides satisfy Lovasz condition. This involves a bound on the Gram-Schmidt basis lengths (\(\mathbf{b}_{i}^{*}\)). The fundamental parallelepiped of a basis (\(\mathbf{B}\)) is defined as \(P(\mathbf{B}):=\{\mathbf{x}\mathbf{B}|\mathbf{x}\in[0,1)^{\,N}\}\). The covolume \(\operatorname{vol}\left(\mathcal{L}(\mathbf{B})\right):=\sqrt{\det(\mathbf{BB} ^{T})}\) of the fundamental parallelepiped only depends on the lattice \(\mathcal{L}(\mathbf{B})\) and is invariant of the choice of the basis. It is known as the covolume of the lattice. The Gram matrix of a basis \(\mathbf{B}\), is given by \(\mathbf{G}=\mathbf{BB}^{T}\) with \(\mathbf{G}_{ij}=\mathbf{b}_{i}\mathbf{b}_{j}^{T}\) being the entries of the matrix. One of the most widely studied computational problem on lattices is the Shortest Vector Problem, or SVP. **Definition 1**.: (The Shortest Vector Problem)_. Given a basis \(\mathbf{B}=(\mathbf{b}_{1},...,\mathbf{b}_{\mathbf{N}})\) of a lattice \(\mathcal{L}(\mathbf{B})\), find the shortest non-zero vector_ \[\arg\min_{z}\{||z||_{2}:z\in\mathcal{L}(\mathbf{B})\backslash\{0\}\}. \tag{1}\] _with respect to the Euclidean norm._ Let \(\lambda_{1}(\mathcal{L}):=\min_{z}\{||z||:z\in\mathcal{L}\backslash\{0\}\}\) denote the length of the shortest vector. Minkowski's theorem [33] gives an upper bound on \(\lambda_{1}(\mathcal{L})\). A Euclidean ball \(S\subset\mathbb{R}^{N}\) of volume \(\operatorname{vol}(S)>2^{N}\sqrt{\operatorname{vol}(\mathcal{L})}\) contains at least one lattice point in \(\mathcal{L}\) that is not the origin. Then, \(\lambda_{1}(\mathcal{L})\leq\sqrt{N}\operatorname{vol}(\mathcal{L})^{1/2N}\). Note that, the LLL-algorithm cannot find a vector within the Minkowski bound efficiently. The SVP plays a significant role in PQC since it is known to be NP-hard under randomized reductions [34]. Consequently, some primitives in LBC use short vectors. Since the SVP has been of interest for the cryptology community, many generalizations have arisen from it. One of them is the Densest Sub-lattice Problem, which seeks the densest \(K\)-dimensional sub-lattice of an arbitrary lattice. This problem was initially introduced in [35], and subsequently Dadush and Miccancio developed the fastest classical algorithm for it [21]. **Definition 2**.: (The Densest Sub-lattice Problem [35])_. Given a basis \(\mathbf{B}=(\mathbf{b}_{1},...,\mathbf{b}_{N})\) that describes a lattice \(\mathcal{L}\), and an integer \(K\) such that \(1\leq K\leq N\), find a \(K\)-dimensional sub-lattice \(\hat{\mathcal{L}}\subseteq\mathcal{L}\) such that \(\operatorname{vol}(\hat{\mathcal{L}})\) is minimal._ Rankin's constants [11]\(\gamma_{N,K}(\mathcal{L})\) are a generalization of Minkowski's bound, and are defined as \[\gamma_{N,K}(\mathcal{L})=\sup_{\mathcal{L}}\left(\min\frac{\det((\mathbf{b}_{ 1},...,\mathbf{b}_{K}))}{\det(\mathcal{L})^{K/N}}\right)^{2}. \tag{2}\] They can be interpreted as a bound on the densest sub-lattice covolume. **Example 1**.: (DSP for \(N=3\) and \(K=2\)). Assuming that we are given a 3-dimensional input basis and that \(K=2\), the algorithm should return the two vectors that span the 2-dimensional densest sub-lattice. This example is represented in Fig. 1, where the red vectors represent the input and one solution of the problem could be given by the blue vectors. In this scenario, the input basis is \[\mathbf{B}=\begin{bmatrix}1&1&0\\ 0&-1&-1\\ 0&0&1\end{bmatrix}\] and a solution for the 2-DSP is given by \[\mathbf{\hat{B}}=\begin{bmatrix}0&-1&0\\ -1&0&0\end{bmatrix}\] embedded in the 2-dimensional subspace spanned by the \(x,y\) axes. Here, the rows of the transformation matrix \(\mathbf{X}\) are defined by \(\mathbf{X}=\{(0,1,1),(-1,-1,0)\}\). Notice that, the 1-dimensional densest sub-lattice of an arbitrary lattice is equivalent by definition to the SVP. The other extreme, the \((N-1)\)-DSP, is analogous to the SVP on the dual basis. In fact, for every \(K\), the \(K\)-DSP restricted to \(K\) is at least as hard as the SVP. Furthermore, we know that \(K=N/2\) is the hardest instance since we can always add orthogonal vectors to the input basis and use a half-volume oracle to solve any \(K\)-DSP\({}_{K,N}\). The best known algorithms for \(K\)-DSP with \(K\) close to \(N/2\) have \(N^{2}\) in the runtime exponent, while SVP enumeration algorithms only have an \(N\). ### Variational quantum algorithm Classical computers are unable to solve certain problems efficiently. For certain classes of problems, like period finding, we know large fault-tolerant quantum computers will offer speedups, and for other classes, we suspect they can offer speedups but have no guarantees. In the near term, however, these devices are not expected to be fault-tolerant. One approach to attempt quantum advantage in the NISQ era is Variational Quantum Algorithms (VQAs). The basic procedure for VQAs is the following: 1. Prepare an initial state with \(n\) available qubits \(|0\rangle^{n}\). 2. Apply a sequence of unitary gates to construct the ansatz \(|\psi(\mathbf{\theta})\rangle=U_{p}(\mathbf{\theta}_{p}),...,U_{1}(\mathbf{\theta}_{1})|0 \rangle^{n}\). 3. Evaluate a cost function \(C(\mathbf{\theta})\). 4. Update parameters \(\mathbf{\theta}\) via classical optimization to minimize the cost function \(C(\mathbf{\theta})\). 5. Repeat the process many times until convergence. The output of the algorithm is the parameterized quantum circuit defined by the optimal parameters. 6. Sample from the trained parameterized quantum circuit to obtain candidate solutions. Picking an ansatz depends on a variety of factors, including resources available, the structure of the problem under consideration, and others. The cost function is designed so that it is minimized at the optimal solution, thus optimizing towards the minimum cost will result in a high probability of sampling good solutions. However, it is important to remark one of the main limitations of VQAs: Barren-Plateaus phenomenon [36]. It consists of an effect produced when the gradient vanishes in the classical optimization process. As a consequence, the algorithm is not able to find the solution and the process can get trapped in local minima. One of the most widely used VQAs is the Quantum Approximate Optimization Algorithm (QAOA) [37]. Given a cost Hamiltonian that encodes the solution of a combinatorial optimization problem in its ground state, the purpose of the algorithm is to return an approximate solution to the problem. It has been shown that for low-depth QAOA, speedups can be achieved in comparison to classical algorithms for certain instances [38]. The QAOA Hamiltonian integrates the principles of the Transverse Ising model [39] and the \(Z\)-cost Hamiltonian. The Transverse Ising model represents a system of interacting spins in the presence of a transverse field induced by Pauli-\(X\) operators. The spins are influenced by the Ising coupling term and the transverse field. Thus, the Hamiltonian can leverage from the interaction between the spins and the transverse field to explore the solution space efficiently. In contrast, the \(Z\)-cost Hamiltonian drives the system towards optimal solutions by manipulating the quantum state based on the cost function. Concretely, in the case of QAOA for solving problems in the Transverse Ising model, one starts with an initial state \(|\psi_{0}\rangle\), usually a superposition of all possible solutions as represented in Fig. 2. Then, a sequence of two unitary operators \(U(C,\gamma)=\exp{(-i\gamma H)}\) and \(U(B,\beta)=\exp{(-i\beta\sum_{i=1}^{n}X_{i})}\), generated by the cost Hamiltonian and the mixer Hamiltonian, are applied Figure 1: Representation of the 2-Densest Sub-lattice Problem for \(N=3\). The red arrows represent a 3D input bad basis. One solution of the problem for \(K=2\) is given by the blue short vectors that span a 2D square lattice. times. The parameterized quantum circuit simulation allows us to evolve the quantum system under the action of the two unitaries defined, \(p\) times. At the end of this process, the state of the qubits is measured, and the cost is updated. After each iteration, the angles \((\mathbf{\gamma},\mathbf{\beta})\) are adjusted and all the steps are repeated until convergence of the cost function. When increasing \(p\), the quality of the approximation of the final state to the ground state of the cost Hamiltonian is enhanced. In the limit \(p\rightarrow\infty\), the algorithm provides a precise emulation of the adiabatic quantum algorithm thus achieving overlap with the ground state close to \(1\). ## III A quantum algorithm for the densest sub-lattice problem In [21] it is shown that if \(\mathcal{L}\) is an \(N\)-dimensional lattice, \(\hat{\mathcal{L}}\subseteq\mathcal{L}\) a \(K\)-dimensional sub-lattice of minimum determinant and \(\mathbf{v}\in\mathcal{L}\) be any lattice vector, then, either \(\hat{\mathcal{L}}\) contains all shortest lattice vectors or the length of all vectors that span the densest sub-lattice is bounded by \(\lambda_{K}(\hat{\mathcal{L}})\leq K\lambda_{1}(\mathcal{L})\). This lemma enables to develop a classical enumerative algorithm for the \(K\)-DSP with exponential running time \(K^{O(KN)}\). In this section, we derive the mapping of the Densest Sub-lattice Problem to a \(Z\) spins Hamiltonian such that its first excited state \(E_{1}\) corresponds to the densest sub-lattice of a given ambient lattice. We also provide the space requirements for the quantum algorithm, as well as an approach to penalize trivial solutions of the \(K\)-DSP by upper-bounding the spectral gap. ### Hamiltonian formulation for the \(K\)-DSP Given a basis \(\mathbf{B}\) that describes a \(N\)-dimensional lattice \(\mathcal{L}\), we seek \(K\)-linearly independent vectors \(\mathbf{v}_{1},...,\mathbf{v}_{K}\) that span a sub-lattice \(\hat{\mathcal{L}}\subseteq\mathcal{L}\) with the smallest determinant. Let us consider that \(K=2\), for simplicity, and that \(N\) can take any value. Nevertheless, one can see Appendix A for the formulation of the \(K\)-DSP for \(K=3\). Then, the goal is to find two linearly independent vectors that span the densest sub-lattice, or equivalently, find the 2-dimensional sub-lattice with the smallest covolume or determinant. Thus, we can express these two lattice points as a linear combination of input basis vectors such that \[\mathbf{v}_{1} = \mathbf{x}\mathbf{B}=x_{1}\mathbf{b}_{1}+...+x_{N}\mathbf{b}_{N},\] \[\mathbf{v}_{2} = \mathbf{y}\mathbf{B}=y_{1}\mathbf{b}_{1}+...+y_{N}\mathbf{b}_{N}. \tag{3}\] Recall that the Gram matrix is a square matrix for any \(K\) and \(N\). The Gramian, which is the determinant of the Gram matrix, is equal to the covolume squared of a lattice \(\mathcal{L}\), \[\text{vol}(\mathcal{L})=\sqrt{\det(\mathbf{G}(\mathbf{b_{1}},...,\mathbf{b_{ N}}))}. \tag{4}\] Therefore, the covolume of a sub-lattice \(\hat{\mathcal{L}}\) of dimension \(K=2\) of an \(N\)-dimensional ambient lattice \(\mathcal{L}\) can be expressed as \[\text{vol}(\hat{\mathcal{L}})^{2}=\begin{vmatrix}\mathbf{v}_{1}\mathbf{v}_{1 }\rangle&\langle\mathbf{v}_{1}\mathbf{v}_{2}\rangle\\ \langle\mathbf{v}_{2}\mathbf{v}_{1}\rangle&\langle\mathbf{v}_{2}\mathbf{v}_{2} \rangle\end{vmatrix}. \tag{5}\] Using Eq. (3) in the former determinant, the following Figure 2: Representation of the quantum circuit for the QAOA. The initial state is a superposition of all possible configurations. Then, \(p\) layers are applied, each of them composed of the cost Hamiltonian \(H\), which separates the states by their phase, and the rotation operator with the mixer Hamiltonian (\(H_{M}\)) in the exponent, transforming the phase into amplitude. In the end, the states of the qubits are measured, the output is post-processed, and the cost function is calculated using the initial parameters \((\mathbf{\gamma},\mathbf{\beta})\). Afterwards, the angles are updated and the process is repeated until convergence. relation is obtained between the covolume of the sub-lattice and the coefficients \(\mathbf{x}\) and \(\mathbf{y}\), \[\text{vol}(\hat{\mathcal{L}})^{2}=\sum_{i,j,k,l}^{N}x_{i}x_{j}y_{k}y_{l}(\mathbf{ G}_{ij}\mathbf{G}_{kl}-\mathbf{G}_{ik}\mathbf{G}_{jl}). \tag{6}\] The different inner products \(\mathbf{b}_{i}\mathbf{b}_{j}\) have been written as \(\mathbf{G}_{ij}\) and therefore, are constants given as inputs of the algorithm. The main goal of the problem is to find the integer variables \(x_{i}\) and \(y_{j}\) such that they minimize Eq. (6). This equation simulates the cost function and returns the different covolumes that can be found within the system. To transform the equation into a Hamiltonian we have to consider that the eigenvalues must correspond to the energies of the system, which in this context are the squared covolumes. Then, the eigenvectors of this Hamiltonian are given by the different sub-lattices that can be constructed. In this way, when applying the problem Hamiltonian over an eigenstate (i.e. a sub-lattice) the energy returned will be the covolume squared of the configuration considered. Note that, to be able to obtain the integer values that define the coefficients in Eq. (6), the integer coefficients need to be modified to binary variables. In [23] the authors propose a qudit mapping that will be useful for the simulation of our algorithm. The Binary-encoded qudits mapping allows to interpret a binary string of qubits as integers. Let us assume that we have four qubits per qudit available as in Fig. 3. Then, the Hilbert space of the qudit operator will be spanned by \(2^{4}\) states. The measurements are performed in the computational basis \(Z\), so return the eigenvalue \(+1(-1)\). Since we would like to work with binary strings, we can transform the \(Z\)-basis to the \((0,1)\) basis, via the operator \[\hat{O}=\frac{\mathds{1}-Z}{2}. \tag{7}\] From \(\hat{O}\) we can now construct the qudit operator \[\hat{Q}_{bin}^{(i)}=\sum_{w=0}^{m}2^{w}\hat{O}-2^{m}\mathds{1}, \tag{8}\] where the first term contributes by returning the integer associated to the \(i^{th}\) qudit state when the operator is applied, and the second term is used to shift the range down to be symmetric about zero and thus allow for negative coefficients. Rearranging Eq. (8) using Eq. (7) we obtain \[\hat{Q}_{bin}^{(i)}=-\frac{\mathds{1}}{2}-\sum_{w=0}^{m}2^{w-1}Z_{wi}, \tag{9}\] which translates the measured qubits of the grid's columns to integers in the range \([-2^{m},2^{m}-1]\). Thus, the problem Hamiltonian for the 2-DSP results in \[H_{2DSP}=\sum_{i,j,k,l=1}^{N}\hat{Q}^{(i)}\hat{Q}^{(j)}\hat{Q}^{(k)}\hat{Q}^{( l)}(\mathbf{G}_{ij}\mathbf{G}_{kl}-\mathbf{G}_{ik}\mathbf{G}_{jl}) \tag{10}\] which comes from Eq. (6) when the coefficients \(\mathbf{x},\mathbf{y}\) are replaced by the qudits operators. Expanding the Hamiltonian for \(K=2\) we obtain \[H_{2DSP}=\sum_{i,j,k,l=1}^{N}2^{-4}\left[\sum_{a,b,c,d=0}^{m}2^{ a+b+c+d}Z_{ai}Z_{bj}Z_{ck}Z_{dl}+\right.\] \[+\sum_{a,b,c=0}^{m}2^{a+b+c}(((Z_{ai}+Z_{aj})Z_{bk}+Z_{ai}Z_{bj})Z _{cl}+\] \[\left.+Z_{ai}Z_{bj}Z_{ck})+\sum_{a,b=0}^{m}2^{a+b}((Z_{ai}+Z_{aj} +Z_{ak})Z_{bl}+\right.\] \[\left.+(Z_{ai}+Z_{aj})Z_{bk}+Z_{ai}Z_{bj}+\sum_{a=0}^{m}2^{a}(Z_{ ai}+Z_{aj}+\right.\] \[\left.+Z_{ak}+Z_{al})+1\right](\mathbf{G}_{ij}\mathbf{G}_{kl}- \mathbf{G}_{ik}\mathbf{G}_{jl}), \tag{11}\] which corresponds to a fully connected Hamiltonian, and has up to 4-body interactions. The Eq. (11) is the one that will be used for the simulation of the quantum 2-DSP solver in Section IV. To generalize the problem Hamiltonian to any \(K\), consider the \(K\)-DSP for a \(N\)-dimensional lattice spanned by \(\mathbf{B}=(\mathbf{b}_{1},...,\mathbf{b}_{N})\). The goal is to find \(K\)-linearly independent vectors \(\hat{\mathbf{B}}=(\mathbf{v}_{1},...,\mathbf{v}_{K})\), which generate \(\hat{\mathcal{L}}\) with the smallest covolume. Here, the Gramian of \(\hat{\mathbf{B}}\) is given by \[\det(\mathbf{G}(\mathbf{v}_{1},...,\mathbf{v}_{K}))=\left|\begin{matrix} \langle\mathbf{v}_{1},\mathbf{v}_{1}\rangle&\ldots&\langle\mathbf{v}_{1}, \mathbf{v}_{K}\rangle\\ \vdots&\ddots&\vdots\\ \langle\mathbf{v}_{K},\mathbf{v}_{1}\rangle&\ldots&\langle\mathbf{v}_{K}, \mathbf{v}_{K}\rangle\end{matrix}\right|. \tag{12}\] The determinant of a \(K\times K\) square matrix \(\mathbf{A}\) with entries \(a_{ij}\), can be computed using Leibniz formula \[\det(\mathbf{A})=\sum_{\tau\in S_{n}}sgn(\tau)\prod_{i=1}^{K}a_{i,\tau(i)}, \tag{13}\] Figure 3: Diagram representing two \(4\times N\) dimensional grids of qubits. Each of them is associated with one of the vectors that span the sub-lattice of dimension \(K\). The columns of the 2D array represent the integer values that can take each of the coefficients \(x_{i}\) and \(y_{j}\). The columns, each of which represents a qudit, are composed of four qubits. where the sum runs over all the permutations \(\tau\) of the symmetric group \(S_{N}\), and \(sgn\in\{\pm 1\}\) is the sign of \(\tau\). One can write the generalized covolume of a \(K\)-dimensional sub-lattice in terms of the determinant of the rank \(K\) Gram matrix, formed by the \(K\) lattice vectors returned \[\text{vol}(\hat{\mathcal{L}})^{2}=\sum_{\tau\in S_{N}}sgn(\tau)\prod_{i=1}^{K} (\mathbf{v}_{i},\mathbf{v}_{\tau(i)}). \tag{14}\] To encode the Hamiltonian of the general \(K\)-DSP, one can write the vectors \(\mathbf{v}_{i}\) in terms of the qudit operators \[H_{DSP}=\sum_{\tau\in S_{N}}sgn(\tau)\prod_{i=1}^{K}\left(\sum_{\alpha,\beta=1 }^{N}\hat{Q}_{\alpha}^{(i)}\hat{Q}_{\beta}^{(\tau(i))}\mathbf{G}_{\alpha, \beta}\right) \tag{15}\] where the operators \(\hat{Q}_{\alpha}^{(i)}\) and \(\hat{Q}_{\beta}^{(\tau(i))}\) act on the qudits \(\alpha\) and \(\beta\) within a register, and \(i,\tau(i)\) index the qubit grids. Each qubit grid describes a single vector, two of which are shown in Fig. 3. Here, \(\mathbf{G}\) is the Gram matrix of the full \(N\)-dimensional input basis \(\mathbf{B}\). The output of this operation consists of two integer values associated with the classical coefficients \(x_{i,\alpha}\) and \(x_{\tau(i),\beta}\) that generate the solution vectors \(\mathbf{v}_{i}\) and \(\mathbf{v}_{\tau(i)}\). The eigenenergies of \(H_{DSP}\) are then the \(\text{vol}(\hat{\mathcal{L}})^{2}\) where the \(\hat{\mathcal{L}}\) is the sub-lattice corresponding to the relevant eigenvector. One appeal of using the Leibnitz formula for the determinant is that it gives Hamiltonians that are \(2K\)-local. Since even 3-local Hamiltonians are QMA-hard [40], in the worst case, locality does not help in the computation of low energy states. However, for local Hamiltonians drawn from certain natural random ensembles, locality does seem to help [32]. To efficiently implement the Hamiltonian, the summation over the symmetric group is done in superposition. The operators inside the summation indexed by permutations are implemented using conditional gates. An alternative to the Leibnitz formula is to use a efficient arithmetic circuit to compute the determinant, such as the one derived through Gaussian elimination. ### Classical preprocessing and spatial bound The coordinates of the unknown vectors (such as \(\mathbf{x}\) and \(\mathbf{y}\) in Eq. (3)) in the Hamiltonian formulation are apriori unbounded integers. Therefore, the naive search space is countably infinite. To translate the Hamiltonian formulation into a quantum algorithm, it is necessary to confine the solution space to be finite, preferably small. In particular, this bounds the number of qubits required by the algorithm. For the SVP, the Minkowski bound readily confines the search space. For \(K\)-DSP with \(K>1\), bounding the search space is far more intricate. We devise a polynomial-time classical preprocessing algorithm to resolve this problem. The classical preprocessing algorithm takes a lattice basis \(\mathbf{B}_{in}\) as the input and outputs a preprocessed gap-free basis \(\mathbf{B}_{P}\) in polynomial time. First, it runs the LLL algorithm on \(\mathbf{B}_{in}\) to get another basis \(\mathbf{B}_{L}\) generating the same lattice. This \(\mathbf{B}_{L}\) either has a gap or is gap-free. This notion of a Gap is one that we define and can be tested in polynomial time [41]. It is not intrinsic to a lattice and may depend on the basis. **Definition 3**.: (Gap). _A basis \(\mathbf{B}=(\mathbf{b}_{1},\ldots,\mathbf{b}_{N})\) is defined to have a Gap if there exists an index \(r\) such that, \(\max(||\mathbf{b}_{1}^{*}||,...,||\mathbf{b}_{r}^{*}||)<\min(||\mathbf{b}_{r+1 }^{*}||,...,||\mathbf{b}_{N}^{*}||)\), where \((\mathbf{b}_{1},\mathbf{b}_{2},\ldots,\mathbf{b}_{N})\) is result of the Gram-Schmidt orthogonalization obtained from \(\mathbf{B}\)._ If \(\mathbf{B}_{L}\) is a gap-free basis, the preprocessing algorithm outputs \(\mathbf{B}_{L}\) as \(\mathbf{B}_{P}\). This is the typical case, since generic lattices are gap-free as a consequence of Gaussian heuristic: the presence of a gap would mean that at least one of its projected lattices has a shortest vector much below the Gaussian heuristic, which is highly improbable. The quality of the preprocessed basis \(\mathbf{B}_{P}=\mathbf{B}_{L}\) is quantified in Lemma 1. **Lemma 1** (Gap-free-LLL).: _If a LLL-reduced basis \(\mathbf{B}\) of \(\mathcal{L}\) is gap-free, then \(||\mathbf{B}^{*}||\leq(4/3+\varepsilon)^{(N-1)/4}\text{vol}(\mathcal{L})\)_ Proof.: Let \(x_{i}\) denote the values of the \(N\) norms \(\mathbf{b}_{j}^{*}\) sorted by increasing order. Then if there exists an index \(p\) s.t. \(x_{p+1}>\sqrt{4/3}x_{p}\), let us paint in blue the indexes s.t. \(\|\mathbf{b}_{j}^{*}\|\leq x_{p}\) and in red those where \(\|\mathbf{b}_{j}\|\geq x_{p+1}\). Because of Lovasz conditions [14], a red index cannot be followed by a blue index in the basis, so the only viable coloring is to have all the blue indexes followed by all the red indexes. Then necessarily, \((\|\mathbf{b}_{1}^{*}\|,\ldots,\|\mathbf{b}_{p}^{*}\|)\) are all \(\leq x_{p}\) and \((\|\mathbf{b}_{p+1}^{*}\|,\ldots,\|\mathbf{b}_{N}^{*}\|)\) are all \(\geq x_{p+1}\), and the basis has a gap. Reciprocably, a gap-free LLL-reduced basis satisfies \(x_{i}\leq x_{i+1}\leq\sqrt{4/3}x_{i}\) for all \(i\in[1,N-1]\), and therefore, \(x_{N}/(\prod x_{j})^{1/N}=\max\mathbf{b}_{j}^{*}/\text{vol}(\mathcal{L})^{1/N} \leq\left(\frac{4}{3}\right)^{(N+1)/4}\). If \(\mathbf{B}_{L}\) is not a gap-free basis, we invoke Lemma 3 (which builds on Lemma 2) to reduce the dimension of the problem in polynomial time. In particular, Lemma 3 identifies a gap-free LLL-reduced basis as the preprocessed basis \(\mathbf{B}_{P}\) to be fed as the input to a \(K\)-DSP problem in fewer dimension. Lemma 4 quantifies the quality of \(\mathbf{B}_{P}\). **Lemma 2** (Dual of sub-lattice).: _Let \(\mathcal{L}\) be a lattice and \(\mathbf{B}\) its basis, and \(\hat{\mathcal{L}}\subseteq\mathcal{L}\) a sub-lattice of dimension \(D\). Let \(K\) be the smallest index s.t. \(\hat{\mathcal{L}}\subseteq L(\mathbf{b}_{1},\ldots,\mathbf{b}_{K})\), then there exists a basis \(\mathbf{C}=(\mathbf{c}_{1},\ldots,\mathbf{c}_{D})\) of \(\hat{\mathcal{L}}\) such that \(\|\mathbf{c}_{D}^{*}\|\geq\|\mathbf{b}_{K}^{*}\|\)._ Proof.: There exists a \(D\times K\) integer matrix \(\mathbf{V}\) s.t. \(\mathbf{V}(\mathbf{b}_{1},\ldots,\mathbf{b}_{K})\) is a basis of \(\hat{\mathcal{L}}\), furthermore, the last column of \(\mathbf{V}\) is non-zero. Let \(\mathbf{V}^{\prime}\) be the Hermite normal form of \(\mathbf{V}\), then \(\mathbf{C}=\mathbf{V}^{\prime}(\mathbf{b}_{1},\ldots,\mathbf{b}_{K})\) is still a basis of \(\hat{\mathcal{L}}\), and \(\mathbf{V}^{\prime}\) has a single non-zero integer coefficient in its last column, in position \((D,K)\). It follows that \(\mathsf{span}(\mathbf{c}_{1},\ldots,\mathbf{c}_{D-1})\subseteq\mathsf{span}( \mathbf{b}_{1},\ldots,\mathbf{b}_{K-1})\) and \(\mathbf{c}_{K}=\mathbf{v}_{D,K}\mathbf{b}_{K}+\mathbf{t}\) where \(\mathbf{t}\in\mathbf{span}(\mathbf{b}_{1},\ldots,\mathbf{b}_{K-1})\). Therefore, \(\|\mathbf{c}_{K}^{*}\|\geq\|\mathbf{v}_{D,K}\mathbf{b}_{K}^{*}\|\geq\|\mathbf{b }_{K}^{*}\|\). **Lemma 3** (Gap-dimension-reduction).: _If \(\mathbf{B}\) is a basis of \(\mathcal{L}\) and has a gap, then the \(K\)-DSP on \(\mathbf{B}\) reduces to a lower dimension for all \(K\)_ Proof.: Assume that the input basis has a gap at index \(p\) and call \(T=\max_{i\in[1,p]}||\mathbf{b}_{i}^{*}||\) and \(\mathbf{E}=\mathbf{span}(\mathbf{b}_{1},\ldots,\mathbf{b}_{p})\). Let \((\mathbf{c}_{1},\ldots,\mathbf{c}_{K})\) be a dual-HKZ-reduced basis [42] of a sub-lattice with minimal covolume, \(\mathbf{F}=\mathbf{span}(\mathbf{c}_{1},\ldots,\mathbf{c}_{K})\), and let \(\pi_{K-1}\) be the projection on the orthogonal of \(\mathbf{span}(\mathbf{c}_{1},\ldots,\mathbf{c}_{K-1})\). If \(K\leq p\), we will prove that \(\mathbf{F}\subseteq\mathbf{E}\). Suppose by contradiction that this inclusion does not hold. In this case, \(\|\mathbf{c}_{K}^{*}\|\) is maximal across all bases, so by Lemma 2, it is \(>T\). Also, at least one of the vectors \(\mathbf{b}_{1},\ldots,\mathbf{b}_{p}\) is not in \(\mathbf{F}\): it can either be a dimension argument if \(p>K\), or due to the non-inclusion assumption when \(p=K\). So at least one of the \(\pi(\mathbf{b}_{i})\) is non zero. Let \(j\) be the smallest of such index, we therefore have \(\|\pi(\mathbf{b}_{j})\|\leq\|\mathbf{b}_{j}^{*}\|\leq T\). If we replace \(\mathbf{c}_{K}\) with \(\mathbf{b}_{j}\) in the basis \(\mathbf{C}\), we would obtain a sub-lattice of shorter covolume, which contradicts that \(\mathbf{C}\) is a \(K\)-DSP solution. Therefore, \(\mathbf{F}\subseteq\mathbf{E}\). If \(K=p\), because \(\mathbf{E}\) and \(\mathbf{F}\) have also the same dimension, they are equal, and \(\mathbf{b}_{1},\ldots,\mathbf{b}_{p}\) is a solution of the \(K\)-DSP. If \(K<p\), we just proved that the \(K\)-DSP solution is a sub-lattice of \((\mathbf{b}_{1},\ldots,\mathbf{b}_{p})\), so we reduce the problem to a smaller dimension. If \(K>p\), we prove by duality that \(\mathbf{E}\subset\mathbf{F}\), so by projecting the input basis on the orthogonal of \(\mathbf{E}\), we are reduced to solve the smaller \((K-p)\)-DSP on the \((N-p)\)-dimensional projected basis. **Lemma 4** (Relative-basis-size).: _If \(\mathbf{B}\) and \(\mathbf{C}\) are two bases of \(\mathcal{L}\), and \(\mathbf{C}\) has a gap at index \(K\), then \(\operatorname{vol}(\pi_{K}(\mathbf{C}))^{1/N-K}\leq\|\mathbf{B}^{*}\|\) where \(\pi_{K}\) is the orthogonal projection over \((\mathbf{c}_{1},\ldots,\mathbf{c}_{K-1})^{\perp}\)._ Proof.: By definition, the family \(\mathbf{F}=\pi_{K}(\mathbf{B})\) generates the same lattice as \(\pi_{K}(\mathbf{C})\), and \(\|\mathbf{F}^{*}\|\leq\|\mathbf{B}^{*}\|\). If we use the LLL algorithm on \(\mathbf{F}\), we obtain a basis \(\mathbf{F}^{\prime}\) of \(\pi_{K}(\mathbf{C})\) with \(\|\mathbf{F}^{\prime*}\|\leq\|\mathbf{B}^{*}\|\). In particular, \(\operatorname{vol}(\pi_{K}(\mathbf{C}))^{1/N-K}\) is the geometric mean of \(\|\mathbf{F}^{\prime*}_{i}\|\), so \(\operatorname{vol}(\pi_{K}(C))^{1/N-K}\leq\|\mathbf{F}^{\prime*}\|\leq\| \mathbf{B}^{*}\|\) **Corollary 1**.: _If \(\mathbf{B}\) and \(\mathbf{C}\) are two bases of \(\mathcal{L}\) and \(\mathbf{C}\) is LLL-reduced, then \(||\mathbf{C}^{*}||\leq(4/3+\varepsilon)^{(N-1)/4}||\mathbf{B}^{*}||\)._ Proof.: This is a consequence of the previous Lemma 4, when applied to the highest gap of \(\mathbf{C}\) if \(\mathbf{C}\) has a gap, or the whole basis if \(\mathbf{C}\) is gap-free. In the proof of Theorem 1, a key insight is that there is an output basis that is a solution to \(K\)-DSP that is LLL-reduced, due to the invariance of \(K\)-DSP solutions under LLL-iterations. Furthermore, irrespective of if the original input basis \(\mathbf{B}_{in}\) is gap-free, the output \(\mathbf{B}_{P}\) of the preprocessing is always LLL-reduced and gap-free. Therefore, both the input basis \(\mathbf{B}_{P}\) and the output basis \(\mathbf{B}_{out}\) of the quantum algorithm can be constrained to be LLL-reduced without loss of generality. This structure, in concert with the quality assurance of the input basis \(\mathbf{B}_{P}\) from Lemmas 1 and 4 allows us to bound the qubits. **Lemma 5**.: (Unimodular-transformation-bound) _If \(\mathbf{B}_{P}\) is a gap-free LLL basis of \(\mathcal{L}\) and \(\mathbf{C}\) is an LLL-reduced basis of the same lattice, then the unimodular transformation \(\mathbf{U}\) s.t. \(\mathbf{UB}_{P}=\mathbf{C}\) satisfies \(\|\mathbf{U}\|_{\infty}\leq N(4/3+\varepsilon)^{3(N-1)/4}\)._ Proof.: Since \(\mathbf{B}_{P}\) is an LLL-reduced gap-free basis, so is its (reversed) dual \(\mathbf{B}_{P}^{-t}\). Therefore, by Lemma 1, we have \(\|\mathbf{B}_{P}^{*}\|\leq(4/3+\varepsilon)^{(N-1)/4}\mathrm{vol}(\mathcal{L})\) and \(\|\mathbf{B}_{P}^{*-t}\|\leq(4/3+\varepsilon)^{(N-1)/4}\mathrm{vol}(\mathcal{L})^ {-1}\). By Corollary 1, \(\|\mathbf{C}^{*}\|\leq(4/3+\varepsilon)^{(N-1)/4}\|\mathbf{B}_{P}^{*}\|\). We prove the Lemma by converting the Gram Schmidt norm to the spectral norm and multiplying them together. The previous Lemma 5 allows us to bound the size of the unimodular transformation \(\mathbf{U}\) from the input basis to the output basis. The entries of the matrix \(\mathbf{U}\) consist of qubits, which are the target coefficients we are seeking. Therefore, determining an upper limit on these entries will enable us to bound the total number of qubits required for the quantum solver. **Theorem 1**.: (Bound on the number of qubits). _Let \(N\)-dimensional lattice \(\mathcal{L}\) be the input to the quantum algorithm for the \(K\)-DSP as the span of a gap-free LLL-reduced basis \(\mathbf{B}_{P}=(\mathbf{b}_{1},...,\mathbf{b}_{N})\). Then, a \(KN^{2}\) qubit Hilbert search space is sufficient to ensure that at least one exact solution of the \(K\)-DSP is contained._ Proof.: The worst case in number of qubits will occur when the input basis has no gaps, since the dimension of the problem cannot be reduced. Let the input of the quantum solver be a gap-free LLL-reduced basis of dimension \(N\). If we LLL-reduce a basis that is a solution of the \(K\)-DSP, it will still be a solution to the problem. Therefore, there exists an LLL-reduced basis that is the solution of the \(K\)-DSP. A solution output basis can be denoted as \(\mathbf{B}_{out}\) as \[\begin{bmatrix}\mathbf{v}_{1}\\ \vdots\\ \mathbf{v}_{K}\end{bmatrix}=\begin{bmatrix}x_{11}&x_{12}&\cdots&x_{1N}\\ x_{21}&x_{22}&\cdots&x_{2N}\\ \vdots&\vdots&\ddots&\vdots\\ x_{K1}&x_{K2}&\ldots&x_{KN}\end{bmatrix}\begin{bmatrix}\mathbf{b}_{1}\\ \vdots\\ \mathbf{b}_{N}\end{bmatrix}. \tag{16}\] The linear system in Eq. (16) can also be expressed in terms of matrices as \(\mathbf{B}_{out}=\mathbf{XB}_{P}\), where \(\mathbf{B}_{out}\) and \(\mathbf{X}\) are \(K\times N\) matrices, and \(\mathbf{B}_{P}\) is an \(N\times N\) matrix, considering the worst-case assumption. Then, we have the inequality relating the infinity norm and the spectral matrix norm \[||\mathbf{X}||_{\infty}\leq||\mathbf{B}_{out}||_{sp}||\mathbf{B}_{P}^{-1}||_{ sp}, \tag{17}\] Hence, using the bound on the basis \(\mathbf{B}_{P}\) and \(\mathbf{B}_{out}\) that do not have a gap, and the inequality obtained in Lemma 5, we can write \[||\mathbf{X}||_{\infty}\leq N\left(\frac{4}{3}+\varepsilon\right)^{3(N-1)/4}. \tag{18}\] The coefficients that describe the vectors that span the densest sub-lattice must be bounded by \(-2^{m}\leq x_{i}\leq 2^{m}-1\) for all \(i=1,...,N\), where \(m\) are the qubits that suffice to represent the coefficients \(x_{i}\). Consequently, the total number of qubits can be expressed as \[n=K\sum_{i=1}^{N}m=K\sum_{i=1}^{N}[\log 2^{m}]\leq K\log\prod_{i=1}^{N}2^{m}. \tag{19}\] Using Eq. (18) we can write \[\prod_{i=1}^{N}2^{m} \leq\prod_{i=1}^{N}N\left(\frac{4}{3}+\varepsilon\right)^{3(N-1)/ 4}= \tag{20}\] \[=N^{N}\left(\frac{4}{3}+\varepsilon\right)^{3(N^{2}-N)/4}. \tag{21}\] Taking the logarithm of Eq. (21) and substituting into Eq. (19), we obtain that \(\frac{3KN^{2}}{4}\log\left(\frac{4}{3}+\varepsilon\right)-\frac{3KN}{4}\log \left(\frac{4}{3}+\varepsilon\right)+N\log N\) qubits suffices to find the solution of the \(K\)-DSP using the quantum solver. We can calculate the run-time in the context of Groverization and considering LLL preprocessing. However, the bound obtained can be reduced by allowing more substantial reductions in the input basis. Therefore, we present the Grover speedup for the latter case. The pseudocode of the full algorithm for the \(K\)-DSP is provided in Algorithm 1, incorporates the preprocessing step, involving LLL reduction, and the QAOA steps. #### iv.2.1 Preprocessing with an SVP oracle If we are allowed to use an SVP oracle, we can detect if there exists a basis with a gap in the lattice: it suffices to running dual-HKZ followed by HKZ [42]. These alternative algorithms enable more powerful reductions of the input basis, although they may require certain assumptions. Then, only the gap-free case subsets in our Theorem, and it is possible to decrease the \((4/3+\varepsilon)^{(N-1)/4}\) term that arises from the LLL-bound in Theorem 1. In particular, the number of qubits becomes \(O(5KN\log N)\), which is an alternative construction of Dadush-Micciancio Lemma (see Appendix B for the proof). **Theorem 2** (Runtime of Groverized Exhaustive Search).: _Let \(N^{5KN}\) be the size of the search space, and let \(M\) be the number of solutions in the space. Then, the runtime of Groverized exhaustive search for finding a solution of the \(K\)-DSP is_ \[O\left(\frac{N^{5KN/2}}{\sqrt{M}}\right).\] Proof.: Consider the scenario where more powerful reduction algorithms are permitted in the classical preprocessing step. Then, the number of qubits required to ensure that at least one solution of the \(K\)-DSP is found within the search space is \(O(5KN\log N)\), where \(N\) and \(K\) are the ambient lattice and the densest sub-lattice dimensions, respectively. Consequently, the database consists of \(2^{5KN\log N}\) elements, which can also be expressed as \(N^{5KN}\). Drawing upon Grover's algorithm for unstructured search [29], only \(O\left(N^{5KN/2}M^{-1/2}\right)\) queries are necessary to maximize the probability of obtaining the target state, where \(M\) is the number of solutions. In Theorem 2, we establish that the runtime for Groverizing the exhaustive search is bounded by \(O\left(N^{5KN/2}M^{-1/2}\right)\). For the best-known classical algorithm, the runtime is \(O(K^{KN})\). When comparing both, we observe that the runtime for the classical algorithm scales as \(2^{KN\log K}\), while the quantum algorithm has slightly worse performance due to an additional logarithmic term in the exponent, resulting in \(2^{5KN\log N/2-1/2\log M}\). The observation that our quantum solver appears moderately slower than the classical one could be attributed to certain aspects of the proof. It is worth noting that, in certain scenarios, the Half-Volume Problem is suspected to be easier than the Shortest Vector Problem (SVP), particularly in the context of overstretched NTRU lattices [10, 43]. Therefore, one possible explanation for the presence of this extra term may be linked to the slower preprocessing associated with the utilization of the SVP oracle. A similar run-time guarantee as in Theorem 2 is obtained when the first excited states of the Hamiltonian are found using quantum Gibbs sampling [31]. ### Ground state penalization As has been previously mentioned, the Hilbert space of the problem contains trivial solutions such as the zero vector or sub-lattices spanned by linearly dependent vectors. In this section, we propose a method to penalize the energy of the trivial solutions, although many other approaches could be considered [44]. The easiest way is to merely project into sub-space of non-trivial solutions, since the sub-space of trivial solutions has a simple algebraic characterization of having determinant zero. But this destroys the locality of the Hamiltonian. There is an incentive to keep the locality, either for ease of implementation or for run-time guarantees such as the one offered by quantum Gibbs sampling [31, 32]. Let the problem Hamiltonian for the 2-Densest Sub-lattice Problem be the one presented in Eq. (10). This Hamiltonian is a sum of up to 4-body hermitian matrices. Its eigenvectors are the different sub-lattices that can be generated with the available qubits of the problem, and the eigenvalues are the squared covolumes of respective sub-lattices. Thus, the ground state space of \(H_{DSP}\), which corresponds to the eigenvalue \(0\), is composed of trivial solutions such as the sub-lattice spanned by zero vectors and sub-lattices spanned by \(K\)-linearly dependent vectors. ``` 0:\(\mathbf{B}_{in}\subseteq\mathcal{R}^{N}\), \(K\colon 0<K<\dim(\mathbf{B}_{in})\) 0:\(\mathbf{B}_{out}\subseteq\mathcal{R}^{K}\) a linear subspace. 1:\(\mathbf{B}_{L}\leftarrow\) Run LLL on \(\mathbf{B}_{in}\) 2:if\(\mathbf{B}_{L}\) is gap-free then 3:\(\mathbf{B}_{P}\leftarrow\mathbf{B}_{L}\) 4:else if\(\mathbf{B}_{L}\) has gaps then 5: By Lemma 3, reduce the dimension of the \(K\)-DSP 6:\(\mathbf{B}_{P}\leftarrow\mathbf{B}_{L}\subseteq\mathcal{R}^{(N-p)}\) 7:\(n\gets KN^{2}\) 8:\(\mathbf{G}\leftarrow\mathbf{B}_{P}\mathbf{B}_{P}^{T}\) 9:procedureMakeQaoA(\(H_{M},H_{DSP},\boldsymbol{\theta}\colon\operatorname{array},p\colon\operatorname{ int}\)) 10:\(|\psi\rangle\gets H^{\otimes n}|0\rangle^{\otimes n}\) 11:for\(t=1\) to \(p\)do 12:\(|\psi\rangle\gets H_{M}H_{DSP}|\psi\rangle\) 13:return\(|\psi\rangle\) 14:procedureTrainQaoA(\(|\psi\rangle\colon\operatorname{array}\)) 15: old = 0 16:while True do 17:\(|\psi\rangle\leftarrow\)MakeQaoA(\(\boldsymbol{\theta}\)) 18:\(\boldsymbol{\theta}_{opt}\leftarrow\)Compute gradient of \(C(\boldsymbol{\theta})\) 19: error \(\leftarrow\langle\psi|C(\boldsymbol{\theta}_{opt})|\psi\rangle\) 20:if\(|\operatorname{error-old}|<\operatorname{tol}\)then 21:break 22: old = error 23:return\(\boldsymbol{\theta}_{opt}\) 24:\(\boldsymbol{\theta}_{opt}\leftarrow\)TrainQaoA(\(|\psi\rangle\)) 25:\(|\psi\rangle\leftarrow\)MakeQaoA(\(H_{M},H_{DSP},\boldsymbol{\theta}_{opt},p\)) 26:\(\mathbf{B}_{out}\leftarrow\)Measure \(|\psi\rangle\) in the \(Z\) basis and post-processing 27:return\(\mathbf{B}_{out}\) ``` **Algorithm 1** A quantum algorithm for the \(K\)-DSP The main goal is to find the first excited state of \(H_{DSP}\). Therefore, we need to penalize somehow the ground state of the Hamiltonian. To achieve this, we can add an extra term to the Hamiltonian, similarly to the Projection Lemma in [45]. In this context, we can write \[H=H_{DSP}+re^{-sH_{DSP}}, \tag{22}\] where \(re^{-sH_{DSP}}\) corresponds to the penalizing term, and \(r\) and \(s\) are positive constants to be determined. The extra term has vanishingly small eigenvalues for excited configurations, while its eigenvalue is equal to \(1\) when considering trivial solutions. Therefore, the eigenvalues of the total Hamiltonian \(H\) for the excited state space of \(H_{DSP}\) are nearly the same as the eigenvalues of \(H_{DSP}\). However, the eigenvalues of \(H\) for the ground state space of \(H_{DSP}\) are now \(r\). Since \(r\) and \(s\) are parameters to be tuned, we can associate them with a value such that the energy of the first excited state of \(H_{DSP}\) is lower than the energy of trivial configurations. It is important to note several things here. Firstly, since the problem Hamiltonian is Hermitian, it is also the case for the extra term and thus, for the total Hamiltonian. Secondly, both terms of \(H\) share the same eigenvectors, so \(H\) has the same eigenvectors as the \(H_{DSP}\). Finally, the addition of the second term implies that the Hamiltonian in Eq. (22) does not need to be \(4\)-body. The Hamiltonian proposed in Eq. (22) has such a complex shape that cannot be implemented using QAOA. Therefore, for the purpose of simulation, a second-degree approximation could be considered such that \[H\approx r\mathds{1}+(1-rs)H_{DSP}+r\frac{s^{2}}{2}H_{DSP}H_{DSP}. \tag{23}\] This approximation is equivalent to considering \(H=(H_{DSP}-E)^{2}\) with some \(E\) that depends on \(r\) and \(s\), together with an overall shift and re-scaling. Thus, any of the formulations is feasible for the QAOA emulation. ### Spectral gap bounding The optimal values for \(r\) and \(s\), or an approximate value for \(E\) need to be determined to achieve such penalization. A tight lower bound on the spectral gap would be enough to either tune the parameters or specify \(E\). However, proving lower bounds on the spectral gap is considered a hard problem. To solve this issue, we can devise our algorithm with a promised spectral gap \(\epsilon_{DSP}\) as a parameter. The true spectral gap can be readily estimated using binary search, by using the parameterized algorithm as an oracle. In this scenario, the first excited state energy, which is equivalent to the covolume squared of the densest sub-lattice, corresponds to the spectral gap. Therefore, the upper limit of the binary search algorithm can be set by bounding the Gramian composed of the set of vectors that span the densest sub-lattice. **Lemma 6**.: (Upper bound on the spectral gap of \(H_{DSP}\)). _Let the input of the algorithm be an \(N\)-dimensional LLL-reduced basis that defines a lattice \(\mathcal{L}\). Then, the spectral gap of the problem Hamiltonian \(H_{DSP}\) is bounded by \(\Delta E_{DSP}\leq\left(\frac{4}{3}+\varepsilon\right)^{K(N-1)}\operatorname{ vol}(\mathcal{L})^{2K}\)._ Proof.: From Rankin's constant definition in Eq. (2), \[\det(\mathbf{G}(\mathbf{v}_{1},...,\mathbf{v}_{K}))\leq\gamma_{N,K}\text{vol}( \mathcal{L})^{2K/N}, \tag{24}\] since \(\det(\mathbf{G}(\mathbf{v}_{1},...,\mathbf{v}_{K}))=\text{vol}(\hat{\mathcal{L }})^{2}\). Rankin's constants are upper bounded by \[\gamma_{N,K}\leq\left(\frac{\prod_{i=1}^{K}||\mathbf{v}_{i}||}{\text{vol}( \mathcal{L})^{K/N}}\right)^{2}, \tag{25}\] as shown in [46]. Previously mentioned, if we LLL-reduce a basis that is a solution of the \(K\)-DSP, it remains a valid solution to the problem. Therefore, there exists an LLL-reduced basis that is the solution of the \(K\)-DSP. Hence, the length of the vectors \(||\mathbf{v}_{i}||\) that span the solution of the problem is bounded by Corollary 1. Therefore, \[\det(\mathbf{G}(\mathbf{v}_{1},...,\mathbf{v}_{K}))\leq\left(\frac{4}{3}+ \varepsilon\right)^{K(N-1)}\text{vol}(\mathcal{L})^{2K}. \tag{26}\] ## IV Experimental results In this section, we present the results obtained after running a quantum emulation of the QAOA algorithm on a classical computer to discuss the performance of the \(K\)-DSP quantum solver for \(K=2\). While these results are low-dimensional, they are illustrative of what we can obtain in higher dimensions. The results are presented for \(N=3\) and \(N=4\), as a function of \(p\) and the quality of the bases. Note that, relevant lattices in cryptography have dimensions up to \(400\). However, the rank of the input lattices has been limited to \(4\), since the number of qubits scales as \(O(KN^{2})\), as shown in Theorem 1. Thus, we fix the number of qubits per qudit to \(2\). Theorem 1 and Lemma 6 have been proven for LLL-reduced input bases. Nevertheless, for experiments in low-dimensional lattices (therefore classically simulable), we create input bases in a different manner, as for low dimensions LLL returns trivially short bases which are not illustrative of the capabilities of the quantum \(K\)-DSP algorithm. Instead, we first generate short bases, then we scramble them by multiplying with random unimodular matrices. In essence, we make the short basis slightly worse for the simulations. The effect is that the optimal solutions we seek from the algorithm are not trivial ones. Although we evaluate the spatial scaling performance of the QAOA for the 2-DSP, it is not possible to extrapolate the success probability heuristic to cryptographically relevant dimensions, though applying Grover search to the search space does give some complexity scaling. In Fig. 4 we present the histograms obtained after training our quantum algorithm using the _Keras_ library for the classical optimization of the cost function described in Eq. (6) for \(N=3\) and for different values of \(p\). First, the QAOA was trained using the expectation value of the energy as the cost function. Then, we drew \(10,000\) samples, which were measured in the \(Z\) basis. The top row figures in Fig. 4 show the results when using a \(3\)-dimensional short basis that defines the integer lattice. In contrast, the bottom row plots in Fig. 4 exhibit the outcomes when the short basis was multiplied by a unimodular matrix to obtain a lower quality basis (long basis) for the same lattice as input. Recall that, classically, it is more challenging to find the solutions for the \(K\)-DSP problem when given bad input bases. The difficulty of the QAOA in finding ground state solutions increases with the number of qubits due to the exponentially increasing search space. Nevertheless, Figure 4: The number of occurrences (left y-axis) and probability (right y-axis) represented by the blue bars for each eigenvalue of the Hamiltonian in Eq. (10) for a 3D lattice. The number of layers increases from left to right with values set to \(p=0,1,3,5\). The figures include the average energy calculated from \(10,000\) samples. The red dashed line points out the location of the solution of the \(K\)-DSP equivalent to the densest sub-lattice. In the top subplots it has been used the 3D good basis as input, while in the bottom ones the input is a worse basis achieved when multiplying the good basis by a unimodular matrix. as the formulation for the ground state penalization has not been implemented, the outputs contain trivial solutions (i.e. the zero vector or linearly dependent vectors) which correspond to the lowest energy states. In both top and bottom sets of histograms in Fig. 4, the \(x\)-axis depicts the covolumes squared of the sub-lattices, which are equivalent to the eigenvalues of the Hamiltonian in Eq. (10). The blue bars represent the number of occurrences of the covolumes squared after 10,000 samples in the left \(y\)-axis, and the equivalent probability (occurrences divided by 10,000) in the right-hand \(y\)-axis. The thin red dashed line shows the location of the state that corresponds to the densest sub-lattice. The \(x\)-axis has been truncated and so a small number of very high energy results are not shown in order to improve readability of the lower energy samples. The number of layers increases from left to right and the left-hand subplot in each row (\(p=0\)) corresponds to uniform random sampling from the search space, thus representing a benchmark against which the nonzero \(p\) can be compared. Analyzing the results, we can observe that in both sets the average energy of the system decreases with \(p\) as expected (from 13.16 at \(p=0\) to 5.41 at \(p=5\) for the canonical basis, and from 28.46 to 18.32 for the worse basis). In the top row figures, the average expected value of the energy has been reduced by 60%, while in the bottom ones only by 35%. This shows that the algorithm's performance is notably enhanced when using better bases. This underscores the importance of classical preprocessing through LLL-reduction in cases where the algorithm is executed on more challenging lattices. The trend is also visible on the blue bars, since the probability mass is more concentrated to the left-hand side of the subplots for the higher \(p\) values. In this way, the ground state reaches the highest probability among all possible solutions for \(p=5\) in both the top and bottom subplots. At \(p=5\), the histograms exhibit a distribution similar to the Gibbs distribution [47], where the number of occurrences decreases with the energy of the different eigenvalues. Notice that, while for good bases almost all the data is concentrated in the first 10 excited states, for bad bases the occurrences are more evenly distributed across the different bins, which involves a reduction of the number of occurrences in respect the top row figures for each state. Therefore, it is easier for the algorithm to find low energy solutions when the input consists of short and close to orthogonal vectors, as it clearly presents better performance in this situation. Nevertheless, the ideal QAOA Figure 5: The number of occurrences (left y-axis) and probability (right y-axis) represented by the blue bars for each eigenvalue of the Hamiltonian in Eq. (10) for a 4D lattice. The number of layers increases from left to right with values set to \(p=0,1,3,5\). The figures include the average energy calculated from \(10,000\) samples. The red dashed line points out the location of the solution of the \(K\)-DSP equivalent to the densest sub-lattice. In the top subplots it has been used the 4D good basis as input, while in the bottom ones the input is a worse basis achieved when multiplying the good basis by a unimodular matrix. output would consist in obtaining the ground state with probability \(1\) (the adiabatic limit), represented by a blue bar of height \(10,000\) at the \(0\) value on the \(x\)-axis. This would also be the ideal scenario for \(K\)-DSP when ground-state penalization is implemented. In the same way as for \(N=3\), in Fig. 5, we present the histograms obtained when using as input a \(4\)-dimensional good basis in the right hand-side figure, and a bad basis in the left hand-side figure. The \(x\)-axis has also been truncated to enhance the legibility of the data. The behavior is quite similar to the one in Fig. 4: the average energy of the system decreases with \(p\) in both sets, being lower for the top row histograms than for the bottom set at each \(p\). The occurrences at high-energy states decreases as \(p\) increases, while low energy configurations become more common. The taller blue bars on the left side of the subplots at higher \(p\) indicate higher probability density in this region. As was observed in Fig. 4, the outputs are less concentrated around low energy values for the bad bases. Nevertheless, noticeable differences arise when considering different dimensions of the input basis. Firstly, since solving the \(2\)-DSP using a \(4\)-dimensional basis requires a larger search space, it results in a more complex QAOA circuit. The results for \(N=4\) have been obtained using \(16\) qubits, compared to \(12\) qubits for \(N=3\). In Appendix C, we can observe that the QAOA circuit is relatively complex even for a \(6\)-qubit system. For \(4\)-dimensional good bases, the improvement in getting low energy configurations when increasing \(p\) is more modest than for \(3\)-dimensional good bases, while for \(4\)-dimensional bad bases is barely discernible. We can zoom in on the results by analyzing the probability for low energy configurations, and comparing their behavior with increasing \(p\). The probability of obtaining states in ranges \(\mathrm{vol}^{2}(\hat{\mathcal{L}})\leq 5,10,20\) is represented in Tab. 1. We show the probability with respect to \(p\) and the dimension of the input basis, as well as its quality. The table illustrates that, increasing the number of layers from \(p=1\) to \(p=5\) in all cases, enhances the probability to obtain low energy states. Notice that, when using \(3\)-dimensional good bases, we obtain the best performance of our quantum solver since the probability goes from \(0.40\) to \(0.50\) for \(\mathrm{vol}^{2}(\hat{\mathcal{L}})\leq 5\) and from \(0.81\) to \(0.88\) for \(\mathrm{vol}^{2}(\hat{\mathcal{L}})\leq 20\). This means that in the range \(\mathrm{vol}^{2}(\hat{\mathcal{L}})\leq 20\), the \(81\%\)-\(88\%\) of the outcomes are found within this low energy range versus the \(10\%-20\%\) obtained between \(20\leq\mathrm{vol}^{2}(\hat{\mathcal{L}})\leq 261\). Still, for bad bases, the impact of increasing the number of layers on the probability of obtaining low energy states is significantly lower. The worst case performance is found for a \(4\)-dimensional bad basis where only \(26\%-30\%\) are within the range \(\mathrm{vol}^{2}(\hat{\mathcal{L}})\leq 20\). This likely indicates that much higher depths are required to see the same improvements, which would imply that any classical LLL-type preprocessing could reduce the work and improve the performance of the quantum \(K\)-DSP algorithm. For bad bases, the probability at each \(p\) is approximately \(0.1\) lower compared to good bases. This behavior was expected because the average covolume of a sub-lattice when randomly sampled will be higher for bad input bases and also the integer values that represent the coefficients, both introducing additional costs. ### Algorithm scaling Here, we discuss the running time of our quantum algorithm and the iterations required for convergence during the training of the QAOA across various values of \(p\). The results depicted in Fig. 6 were derived from averaging the results after reiterating six times the QAOA training, generating \(10,000\) samples in two distinct dimensions: \(N=3\) (represented by blue dots) and \(N=4\) (represented by red squares). Furthermore, we include the results obtained when using as input good bases (depicted by dashed lines), and when considering bad bases as the input of the algorithm (showed in solid lines). The \(y\)-axis in both figures is on a logarithmic scale. The top figure in Fig. 6 illustrates the running time for the four cases. The bottom figure of Fig. 6 shows the number of iterations required to reach convergence for \(3\)-dimensional and \(4\)-dimensional integer lattices, using good and bad bases as inputs. In the top subplot of Fig. 6, we can observe that the running time of the QAOA increases exponentially with the circuit depth \(p\), and with the dimension \(N\) of the input basis (as each line for \(4\)D bases is sits around one or \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**Range**} & \multirow{2}{*}{**Layers**} & \multicolumn{2}{c}{\(\mathbf{N=3}\)} & \multicolumn{2}{c}{\(\mathbf{N=4}\)} \\ \cline{3-6} & & good & bad & good & bad \\ \hline \multirow{4}{*}{\(\mathrm{vol}(\hat{\mathcal{L}})^{2}\leq 5\)} & \(1\) & \(0.399\) & \(0.319\) & \(0.215\) & \(0.068\) \\ \cline{2-6} & \(3\) & \(0.429\) & \(0.325\) & \(0.236\) & \(0.072\) \\ \cline{2-6} & \(5\) & \(0.499\) & \(0.342\) & \(0.264\) & \(0.090\) \\ \hline \multirow{4}{*}{\(\mathrm{vol}(\hat{\mathcal{L}})^{2}\leq 10\)} & \(1\) & \(0.651\) & \(0.475\) & \(0.349\) & \(0.123\) \\ \cline{2-6} & \(3\) & \(0.675\) & \(0.480\) & \(0.363\) & \(0.130\) \\ \cline{2-6} & \(5\) & \(0.737\) & \(0.501\) & \(0.379\) & \(0.159\) \\ \hline \multirow{4}{*}{\(\mathrm{vol}(\hat{\mathcal{L}})^{2}\leq 20\)} & \(1\) & \(0.812\) & \(0.676\) & \(0.595\) & \(0.257\) \\ \cline{2-6} & \(3\) & \(0.841\) & \(0.673\) & \(0.606\) & \(0.260\) \\ \cline{2-6} & \(5\) & \(0.881\) & \(0.689\) & \(0.668\) & \(0.302\) \\ \hline \hline \end{tabular} \end{table} Table 1: Probability of obtaining low energy configurations within the ranges \(\mathrm{vol}^{2}(\hat{\mathcal{L}})\leq 5,10,20\) as a function of the number of layers \(p\), the dimension \(N\), and the quality of the input basis for \(3\)D and \(4\)D integer lattices. der of magnitude higher than the same line for 3D bases). Notice that, as the value of \(p\) increases, the slopes that represent the data for good input bases remain constant between the two, and also between the 3-dimensional bad basis case. However, for 4-dimensional bad bases the slope appears to differ from the others with increasing \(p\). While it is not possible to extrapolate based upon these heuristic results, they reflect the widely known fact that training VQA's becomes expensive fast as the system complexity grows. ## V Conclusions While SVP is a widely studied problem in lattice literature, and is closely linked to recently standardized cryptosystems such as Kyber, Dilithium and Falcon [8; 10], its generalization, \(K\)-DSP has remained much less tested. Due to the more general structure of \(K\)-DSP, further examination may enable deeper insights into the hardness of lattice problems. The work presented here serves as a platform to enable further exploration of this problem, and gives some indication of the complexity underlying the structure of mathematical lattices. The formulation of the \(K\)-DSP Hamiltonian, which becomes extremely complex even for small systems as in Appendix C, serves as a tool with which we can probe the problem from a quantum angle. The bounding of the search space for the algorithm defined naturally lead to a Grover-based quantum time complexity, even if no such result exists for QAOA or AQC-based applications of the Hamiltonian defined here. We showed how to incorporate such a Groverized \(K\)-DSP search under certain conditions into a global hybrid algorithm, however we believe that with stronger conditions on input bases, and reduced spatial requirements, future works could investigate faster hybrid quantum/classical algorithms for tackling \(K\)-DSP. The experimental results presented demonstrate the ability of our quantum routine to sample dense lattices in low dimensions, but in order for future versions to be of practical use, many more tricks from state-of-the-art QAOA literature would need to be applied, such as improved cost functions like CVaR [48] or Gibbs objective functions [47], and parameter initialization [49; 50]. Furthermore, for high \(K\), the many-body \(Z\) interactions present in the Hamiltonians will require expensive decompositions in order to implement on any future architectures [51], or architectures will have to adapt to be more accommodating to many-body interactions [52; 53; 54]. This 'Hamiltonian engineering', whether directly of the \(K\)-DSP Hamiltonian defined here or via construction of new Hamiltonians is thus another area for future work both from the quantum lattice algorithm, and the hardware design angles. ###### Acknowledgements. We thank Martin R. Albrecht for introducing us to this problem, and Milos Prokop for the suggestions provided during the Hamiltonian formulation. We also thank Martin Ganahl and Rishi Sreedhar for discussions regarding the QAOA simulation, and Stefan Leichenauer for ideas on ground state penalization. Figure 6: Scaling behavior of QAOA for the 2-DSP. The running time of the algorithm in logarithmic scale is represented in the top figure for a 3D (in blue dots) and a 4D (in red squares) lattice described by good bases (in dashed lines), and bad bases (in solid lines). In the bottom plot we show the number of epochs in logarithmic scale required to train the QAOA to obtain the optimal parameters for the parameterized quantum circuit.
2309.17148
Configuration spaces of labeled points on a circle with two anchors
In this paper we calculate the homology of configuration spaces of $n$ points on a circle, subject to the condition that two pre-determined points are included in the configuration. We make use of discrete Morse theory both to determine the Betti numbers, as well as to provide an explicit combinatorial description of the bases both for homology and cohomology.
Dmitry N. Kozlov
2023-09-29T11:21:38Z
http://arxiv.org/abs/2309.17148v1
# Configuration spaces of labeled points on a circle with two anchors ###### Abstract. In this paper we calculate the homology of configuration spaces of \(n\) points on a circle, subject to the condition that two pre-determined points are included in the configuration. We make use of discrete Morse theory both to determine the Betti numbers, as well as to provide an explicit combinatorial description of the bases both for homology and cohomology. ## 1. Anchored configuration spaces ### Definition Configuration spaces constitute a well-studied class of topological spaces. Given an arbitrary topological space \(X\) and a positive integer \(n\), a configuration space of \(n\) labeled points in \(X\) consists of all tuples \((x_{1},\ldots,x_{n})\) of distinct points in \(X\), with the topology inherited from that of the direct product \(X^{n}\). In this paper, motivated by some problems in logistics, we study a variation of this class. The two main differences are that first, the points \(x_{i}\) do not need to be distinct, and second, that one requires, that a certain pre-determined discrete set of points is covered by each such tuple. The formal definition is as follows. **Definition 1.1**.: _Let \(X\) be a non-empty topological space, let \(S\) be a set of \(m\) points in \(X\), \(m\geqslant 0\), and let \(n\) be an arbitrary positive integer. An_ **anchored configuration space**_, denoted \(\Sigma(X,S,n)\), is defined as the subspace of the direct product \(X^{n}\), consisting of all tuples \((x_{1},\ldots,x_{n})\), such that \(S\subseteq\{x_{1},\ldots,x_{n}\}\)._ For simplicity, in this paper we will assume that \(X\) can be given a structure of CW complex. We make the following observations. * When \(m=0\), we simply have \(\Sigma(X,\emptyset,n)=X^{n}\). Therefore, we can henceforth assume that \(m\geqslant 1\). * If \(n<m\), we have \(\Sigma(X,S,n)=\emptyset\) * If \(n=m\), the space \(\Sigma(X,S,n)\) is a collection of \(n!\) points equipped with discrete topology. Furthermore, it is convenient to extend the Definition 1.1, and to declare \(\Sigma(X,\emptyset,0)\) to be a topological space consisting of a single point. **Definition 1.2**.: _Let \(n\) be a positive integer, and let \(1\) be an arbitrary non-empty set. An \(l\)**-partition** of \(n\) is collection of nonnegative integers \(\{n_{i}\}_{i\in l}\), such that \(\sum_{i\in l}n_{i}=n\)._ Note, that all but finitely many of \(n_{i}\) must be equal to \(0\). We may think about an \(l\)-partition as a function \(\varphi\) from \(1\) to the set of nonnegative integers, by setting \(\varphi(i)\coloneqq n_{i}\). ### The case of the disconnected spaces It turns out, that one can always assume that \(X\) is connected, due to the following simple observation. **Proposition 1.3**.: _Let \(X\) be a topological space, let \(S\) be a discrete subset of \(X\) and let \(n\) be a positive integer. Assume \(X=\cup_{i\in I}X_{i}\) is a decomposition into connected components.1 For all \(i\in I\), set \(S_{i}:=X_{i}\cap S\)._ Footnote 1: Since \(X\) is assumed to be a CW complex, these are the same as its path-connected components. _The space \(\Sigma(X,S,n)\) is homeomorphic to the space_ \[\prod_{\varphi}\prod_{i\in I}\Sigma(X_{i},S_{i},\varphi(i)),\] _where the disjoint union is taken over all \(I\)-partitions \(\varphi\)._ **Proof.** For any tuple of points \((x_{1},\ldots,x_{n})\in X^{n}\), and for any \(i\in I\), set \(\varphi(i):=|X_{i}\cap\{x_{1},\ldots,x_{n}\}|\). This gives a decomposition of \(\Sigma(X,S,n)\) into a disjoint union with components indexed by \(I\)-partitions. When such an \(I\)-partition is fixed, choosing points within each \(X_{i}\) is independent of each other, so we get a direct product of spaces. Finally, each such space, up to homeomorphism, depends only on the value of \(\varphi(i)\), not on the indices of points which landed in \(X_{i}\), so it is homeomorphic to \(\Sigma(X_{i},S_{i},\varphi(i))\). ### The case \(n=m+1\) Now that we know that we can assume that \(X\) is connected, we can see how far we can get with the cases when the difference between \(m\) and \(n\) is small. As mentioned, for \(m=n\) we get a finite discrete space, so the first interesting case is when \(n=m+1\). The following graph plays the key role in the topology of \(\Sigma(X,S,m+1)\). **Definition 1.4**.: _Let \(m\) be a positive integer. The graph \(\operatorname{\mathsf{Hom}}(K_{m},K_{m+1})\) is defined as follows:_ * _the vertices of_ \(\operatorname{\mathsf{Hom}}(K_{m},K_{m+1})\) _are indexed by injections_ \(f:[m]\hookrightarrow[m+1]\)_;_ * _the vertices indexed by_ \(f\) _and_ \(g\) _are connected by an edge if the values of the functions_ \(f\) _and_ \(g\) _differ for precisely one argument._ The graph \(\operatorname{\mathsf{Hom}}(K_{m},K_{m+1})\) is \(m\)-regular. It has \((m+1)!\) vertices and \(\frac{m}{2}(m+1)!\) edges. There are different ways to view the graph \(\operatorname{\mathsf{Hom}}(K_{m},K_{m+1})\). For instance, our notation comes from the fact that it is a special case of a more general construction of the so-called \(\operatorname{\mathsf{Hom}}\)-complexes. Given any two graphs \(T\) and \(G\), one can construction a prodsimplicial complex \(\operatorname{\mathsf{Hom}}(T,G)\), whose cells are multihomomorphisms from \(T\) to \(G\). When the graphs \(T\) and \(G\) are complete graphs with \(m\) and \(m+1\) vertices, we obtain the graph from Definition 1.4. We refer to [1, 1, 2] for precise definitions and further background for this area. From another angle, \(\operatorname{\mathsf{Hom}}(K_{m},K_{m+1})\) can be viewed as a Cayley graph of the symmetric group \(\mathcal{S}_{m+1}\) for the system of generators consisting of \(m\) transpositions \(\{(i,m+1)\,|\,1\leq i\leq m\}\); the reader is referred to [12] for background on Cayley graphs. **Proposition 1.5**.: _Let \(X\) be a non-empty connected topological space, and let \(S\) be a set of \(m\) points in \(X\), \(m\geq 1\). The space \(\Sigma(X,S,m+1)\) is homotopy equivalent to a wedge of \((m+1)!\) copies of \(X\) with the graph \(\operatorname{\mathsf{Hom}}(K_{m},K_{m+1})\)._ _In particular, the homology groups of \(\Sigma(X,S,m+1)\) with integer coefficients are given by the formula_ \[H_{k}(\Sigma(X,S,m+1))\approx\left\{\begin{array}{ll}\sum_{(m+1)!}H_{k}(X),& \text{for $k\neq 1$};\\ \mathbf{Z}^{\zeta_{m}}\oplus\sum_{(m+1)!}H_{1}(X),&\text{for $k=1$},\end{array}\right.\] _where \(\zeta_{m}=\frac{1}{2}(m+1)!(m-2)+1\)._ **Proof.** Set \(T\coloneqq\Sigma(X,S,m+1)\), and let \((x_{1},\ldots,x_{m+1})\) denote an arbitrary point of \(T\). We know that at most one of the points \(x_{1},\ldots,x_{m+1}\) does not belong to \(S\). Write \(S=\{s_{1},\ldots,s_{m}\}\) Assume \(f:[m]\hookrightarrow[m+1]\) is an arbitrary injection. Set \[X_{f}\coloneqq\{(x_{1},\ldots,x_{m+1})\,|\,s_{i}=x_{f(i)},\text{ for all }1\leq i \leq m\}.\] In other words, \(X_{f}\) is the locus of those configurations where the point occupying \(s_{i}\) is \(x_{f(i)}\), and for \(r=[m+1]\setminus\operatorname{Im}f\), the point \(x_{r}\) can be chosen arbitrarily. Clearly, \(T=\cup_{f}X_{f}\), where the union ranges over all such injections. Each \(X_{f}\) is isomorphic to \(X\). Assume \(f\) and \(g\) are two such injections. If their values differ for two or more arguments, the corresponding subspaces \(X_{f}\) and \(X_{g}\) do not intersect. Assume they differ for exactly one argument, say \(f(k)\neq g(k)\), and \(f(i)=g(i)\), for \(i\neq k\). In particular, \([m+1]\setminus\operatorname{Im}f=g(k)\) and \([m+1]\setminus\operatorname{Im}g=f(k)\). Then \(X_{f}\) intersects with \(X_{k}\) in the single point \(p_{f,g}=(x_{1},\ldots,x_{m+1})\) determined by \(x_{g(k)}=s_{k}\) and \(x_{f(i)}=s_{i}\), for all \(i\in[m]\). Inspired by homotopy colimits, we can now deform the space \(\Sigma(X,S,m+1)\) as follows. First, replace each intersection point \(p_{f,g}\) by an interval \(I_{f,g}\), with one end attached to \(X_{f}\) at its copy of \(p_{f,g}\), and the other end attached to \(X_{g}\), again at the respective copy of \(p_{f,g}\). This gives us a homotopy equivalent space where the \((m+1)!\) copies of \(X\) are connected by intervals. We have assumed that \(X\) is connected and that it has CW structure, so it must be path-connected. Next, choose now in each \(X_{f}\) an arbitrary base point \(b_{f}\). Let the endpoints of each interval \(I_{f,g}\) slide inside the spaces \(X_{f}\) and \(X_{g}\) to the respective base points. Again, this produces a homotopy equivalent topological space \(Y\). It can be given the following description: take a certain connected graph \(\Gamma\) and attach a copy of \(X\) to each of its vertices. Since this graph is connected, we then see that \(Y\) is homotopy equivalent to the wedge of this graph with \((m+1)!\) copies of \(X\). The graph can then be replaced by a wedge of circles. To complete the proof, we need to understand the structure of \(\Gamma\). The vertices of \(\Gamma\) are indexed by the injections \(f:[m]\hookrightarrow[m+1]\). These are connected by an edge if and only if the spaces \(X_{f}\) and \(X_{g}\). The condition for that gives precisely the graph \(\operatorname{Hom}(K_{m},K_{m+1})\). Note that even when \(X\) is connected and \(m=1\), different choices of \(S\) may result in non-homeomorphic topological spaces. Take, for instance, \(X\) to be the unit interval \([0,1]\), and set \(n\coloneqq 2\). When \(S=\{0\}\), the space \(\Sigma(X,S,2)\) is homeomorphic to a closed interval, whereas when \(S=\{1/2\}\), the space \(\Sigma(X,S,2)\) is homeomorphic to the plus sign. ### Anchored configuration spaces of graphs The case we are mostly interested in is when \(X\) is a graph. It can be thought of in terms of applications to logistics as follows. Imagine, that we have \(n\) unique resources, and that they need to be distributed among \(m\) locations. Imagine furthermore, that the locations, to which the resources are distributed, are connected by a graph network, and that each resource can be shifted from its location to a neighboring one. Simultaneous multiple shifts of different resources are allowed, as long as at any point of this shifting procedure in each node there remain some resources, which are not being moved. The spaces \(\Sigma(X,S,n)\) are combinatorial cubical complexes which model this situation by introducing higher-dimensional parameter spaces which encode the interplay of such shifts. The situation when \(X\) is a tree, which for our purposes here is the same as finite \(1\)-dimensional contractible simplicial complex, was studied in [10], where the spaces \(\Sigma(X,S,2)\) were called the _Stirling complexes_. The following result has been proved there. **Theorem 1.6**.: _([10, Theorem 2.5]). Assume \(X\) is a tree. Let \(S\) be a subset of the set of the vertices of \(X\), \(|S|=m\geqslant 2\), and let \(n\) be an integer, \(n\geqslant m\). The anchored configuration space \(\Sigma(X,S,n)\) is homotopy equivalent to a wedge of \((n-m)\)-dimensional spheres._ _Let \(f(m,n)\) denote the number of these spheres. Then \(f(m,n)\) is given by the following formula_ \[f(m,n)=(m-1)^{n}-\binom{m}{1}(m-2)^{n}+\binom{m}{2}(m-3)^{n}+\ldots \\ +(-1)^{m-1}\binom{m}{m-3}2^{n}+(-1)^{m}\binom{m}{m-2}. \tag{1.1}\] Note, that Theorem1.6 says in particular that in this case the homotopy type of \(\Sigma(X,S,n)\) only depends on the cardinality of \(S\), not on the specific choice of the anchor points. Once the case of trees (and, due to Proposition1.3, of forests) is understood, it is natural to let \(X\) be a circle graph. This is the same as to consider \(\Sigma(X,S,n)\), when \(X=S^{1}\). Clearly, in this setting the space \(\Sigma(X,S,n)\) is uniquely determined by \(n\)_up to homeomorphism_. When the set \(S\) consists of a single point, the space \(\Sigma(X,S,n)\) is homotopy equivalent to a punctured torus of dimension \(n\). Indeed, it can be seen as a cubical complex obtained from the canonical cubical decomposition of the \(n\)-torus2 by deleting the top-dimensional cube. Accordingly, the Betti numbers are given by \(\beta_{d}(\Sigma(X,S,n))=\binom{n}{d}\), for \(d=0,\ldots,n-1\). Footnote 2: View \(S^{1}\) as a CW complex with a single vertex, and consider the direct product of these CW complexes The main result of this paper is determining the Betti numbers of \(\Sigma(X,S,n)\), for the case when \(|S|=2\). In what follows, we refer to [11, 1, 12] for the background in algebraic topology. ## 2. Cubical structure on \(\Sigma_{2}(S^{1},n)\) From now on, we shall exclusively deal with the case of configurations of \(n\) points on a circle, subject to the condition that two fixed points must occur among the points in the configuration. For any \(n\geqslant 2\), we let \(\Omega_{n}\) denote this space \(\Sigma(X,S,n)\), where \(|S|=2\). Our purpose is to determine the Betti numbers of \(\Omega_{n}\). By recording which of the points in the configuration can be found in the anchor set, we see that the topological space \(\Omega_{n}\) has a canonical cubical complex structure. In this structure, the cubes are indexed by the \(4\)-tuples \((A,B,C,D)\), where 1. each \((A,B,C,D)\) is an ordered set partition of the set \([n]=\{1,\ldots,n\}\); 2. the sets \(A\) and \(C\) are non-empty. The sets \(A\) and \(C\) give us the set of points in the anchor set, while the sets \(B\) and \(D\) tell us on which side of the anchor points lie the remaining configuration points; see Figure 2.1. When \(\sigma\) is a cube of \(\Omega_{n}\), and \(\sigma=(A,B,C,D)\), to express the dependence on \(\sigma\), we shall also write \(\sigma=(A(\sigma),B(\sigma),C(\sigma),D(\sigma))\). This way we shall be able to refer to single parts of \(\sigma\) when necessary. For brevity we use the following notations: * for \(S\subseteq[n]\) and \(x\in[n]\setminus S\), we let \(S+x\) denote \(S\cup x\); * for \(S\subseteq[n]\) and \(x\in S\), we let \(S-x\) denote \(S\setminus x\); * for \(S\subseteq[n]\) and \(x\in[n]\setminus S\), we write \(x<S\), if \(x<y\), for all \(y\in S\); same way we write \(x>S\), if \(x>y\), for all \(y\in S\). Note, that these abbreviations do not override the regular set-theoretical notations, so, when convenient, we shall still use the latter ones. The cubical boundary operator over \(\mathbb{Z}_{2}\) is given by \[\partial(A,B,C,D)=\sum_{x\in B}(A+x,B-x,C,D)+\sum_{x\in B}(A,B-x,C+x,D)\\ +\sum_{x\in D}(A+x,B,C,D-x)+\sum_{x\in D}(A,B,C+x,D-x). \tag{2.1}\] The dimension of the cube indexed by \((A,B,C,D)\) is equal to \(|B|+|D|\). Accordingly, for \(0\leq d\leq n-2\), the number of \(d\)-cubes in \(\Omega_{n}\) is equal to \(\binom{n}{d}2^{d}(2^{n-d}-2)=\binom{n}{d}(2^{n}-2^{d+1})\), so its \(f\)-vector is \[f(\Omega_{n})=(2^{n}-2,n(2^{n}-4),\binom{n}{2}(2^{n}-8),\ldots,\binom{n}{3}(2 ^{n}-2^{n-2}),\binom{n}{2}2^{n-1}).\] This gives us the following formula for the Euler characteristic of \(\Omega_{n}\): \[\mu(\Omega_{n})=\sum_{d=0}^{n-2}(-1)^{d}\binom{n}{d}(2^{n}-2^{d+1 })=2^{n}\sum_{d=0}^{n-2}(-1)^{d}\binom{n}{d}-2\sum_{d=0}^{n-2}(-1)^{d}\binom{n }{d}2^{d}\\ =2^{n}(-1)^{n}(n-1)-2(-1)^{n}(2^{n-1}n+1-2^{n})=(-1)^{n}(2^{n}-2).\] This formula will be confirmed when we have calculated the Betti numbers of \(\Omega_{n}\). Figure 2.1. Encoding of the cubes of \(\Omega_{n}\). ## 3. Discrete Morse theory for chain complexes Discrete Morse theory, [14, 15, 16, 17], has emerged in many situations in combinatorial topology, [18, 19], as a useful tool for calculating homology groups, and perhaps even understanding the homotopy type of the involved spaces. It has been argued in [15] and [16, Section 11.3] that an algebraic point of view on this theory is rather beneficial. We now sketch the basic tenets of that approach as applied to our context. **Definition 3.1**.: _Let \(\mathcal{C}=(C_{*},\partial_{*})\) be a chain complex of vector spaces over \(\mathbb{Z}_{2}\), and let \(\{\Gamma_{d}\}_{d}\) be a collection of sets, such that \(\Gamma_{d}\) is a basis of \(C_{d}\), for all \(d\). The union \(\Gamma=\cup_{d}\Gamma_{d}\) is called a_ **basis of \(\mathcal{C}\)**_._ Assume now we are given a chain complex \(\mathcal{C}\) over \(\mathbb{Z}_{2}\) and a basis \(\Gamma\). We set \(\Gamma_{d}\coloneqq\Gamma\cap C_{d}\), for all \(d\). Each chain \(\sigma\in C_{d}\) is a sum of some of the elements of \(\Gamma_{d}\). The set of these elements is called the support of \(\sigma\), and is denoted by \(\operatorname{supp}\sigma\), so we have \(\sigma=\sum_{\tau\in\operatorname{supp}\sigma}\tau\), for all \(C_{d}\). **Definition 3.2**.: _When \(\sigma,\tau\in\Gamma\), we say that \(\sigma\)_**covers \(\tau\)** _if \(\tau\) is contained in the support of \(\sigma\). In such a case, we write \(\sigma\succ\tau\)._ Obviously, when \(\sigma\in\Gamma_{d}\) and \(\sigma\succ\tau\), we must have \(\tau\in\Gamma_{d-1}\). Reversely, we shall say \(\tau\)_is covered by \(\sigma\)_, and write \(\tau\prec\sigma\). **Definition 3.3**.: _An_ **acyclic matching** _on \(\Gamma\) is an involution \(\mu:\mathcal{M}\to\mathcal{M}\), where \(\mathcal{M}\) is some subset of \(\Gamma\), such that_ 1. _for each_ \(\alpha\in\mathcal{M}\)_, the element_ \(\mu(\alpha)\) _either covers_ \(\alpha\)_, or is covered by_ \(\alpha\)_;_ 2. _there do not exist distinct_ \(a_{1},\dots,a_{t}\in\Gamma\)_, such that_ \(t\geqslant 2\)_, and_ (3.1) \[\mu(a_{t})\succ a_{1}\prec\mu(a_{1})\succ a_{2}\prec\mu(a_{2})\succ\dots \prec\mu(a_{t-1})\succ a_{t}\prec\mu(a_{t}).\] We decompose \(\mathcal{M}=\mathcal{M}^{\dagger}\cup\mathcal{M}^{\dagger}\), where \(\mathcal{M}^{\dagger}\) consists of all \(\alpha\), such that \(\mu(\alpha)\) is covered by \(\alpha\) in \(\Gamma\), while \(\mathcal{M}^{\dagger}\) consists of all \(\alpha\), such that \(\mu(\alpha)\) covers \(\alpha\) in \(\Gamma\). When we need to be specific, we also write \(\mu_{+}(\alpha)\) instead of \(\mu(\alpha)\), for \(\alpha\in\mathcal{M}^{\downarrow}\), and we write \(\mu_{-}(\alpha)\), when \(\mu\in\mathcal{M}^{\dagger}\). Furthermore, we let \(\operatorname{Crit}\) denote the complement of \(\mathcal{M}\), and set \(\operatorname{Crit}_{d}\coloneqq\Gamma_{d}\cap\operatorname{Crit}\), for all \(d\). We say that the basis elements in \(\mathcal{M}\) are _matched_ and the basis elements in \(\operatorname{Crit}\) are _critical_. **Definition 3.4**.: _Let \(\sigma\) be a critical basis element. An_ **alternating path** _starting at \(\sigma\) is any sequence of basis elements_ \[\sigma\succ\tau_{1}\prec\mu_{+}(\tau_{1})\succ\dots\succ\tau_{q}\prec\mu_{+} (\tau_{q})\succ\tau, \tag{3.2}\] _where \(q\) is a nonnegative integer and \(\tau\) is a critical basis element._ Given an alternating path (3.2), we set \(p_{\bullet}\coloneqq\sigma\) and \(p^{\bullet}\coloneqq\tau\). **Definition 3.5**.: _Assume \(\mathcal{C}\) is a chain complex over \(\mathbb{Z}_{2}\) and assume \(\Gamma\) is a basis of \(\mathcal{C}\), with an acyclic matching \(\mu:\mathcal{M}\to\mathcal{M}\). The_ **Morse complex**_\(\mathcal{C}^{M}=(C_{*}^{M},\partial_{*}^{M})\) is defined as follows:_ * _for every_ \(d\)_,_ \(C_{d}^{M}\) _is a vector space over_ \(\mathbb{Z}_{2}\)_, with a basis_ \(\{c_{\sigma}^{M}\}_{\sigma\in\operatorname{Crit}_{d}}\) _indexed by the critical basis elements of dimension_ \(d\)_;_ * _for each_ \(\sigma\in\operatorname{Crit}_{d}\) _the value of the boundary operator on_ \(c_{\sigma}^{M}\) _is given by_ \[\partial_{d}^{M}(c_{\sigma}^{M})\coloneqq\sum_{p}c_{p^{\bullet}}^{M},\] _where the sum is taken over all alternating paths_ \(p\) _satisfying_ \(p^{\bullet}=\sigma\) It is a known fact, see, e.g., [14, Section 11.3], [14, Chapter 15], that \((C_{*}^{\mathbb{M}},\mathfrak{d}_{*}^{\mathbb{M}})\) is a well-defined chain complex. We now proceed with the core of the algebraic discrete Morse theory. The following sequence of statements is derived from [14, Theorem 11.24] and [14, Chapter 15]. Assume, as above, that \(\mathcal{E}\) is a chain complex over \(\mathbb{Z}_{2}\), \(\Gamma\) is a basis of \(\mathcal{E}\), and \(\mu:\mathcal{M}\to\mathcal{M}\) is an acyclic matching. Each \(C_{d}\) is then given a basis, which can be seen as a disjoint union of the following three sets: \(\operatorname{Crit}_{d}\), \(\mathcal{M}_{d}^{\uparrow}\coloneqq\mathcal{M}^{\uparrow}\cap\Gamma_{d}\), and \(\mathcal{M}_{d}^{\downarrow}\coloneqq\mathcal{M}^{\downarrow}\cap\Gamma_{d}\). The next theorem produces a _new basis_ with the same indexing sets, but with much improved boundary values, so that the chain complex splits as a direct sum. **Theorem 3.6**.: _For each \(d\), there exist sets of vectors_ \[B_{d}=\{b_{\sigma}\}_{\sigma\in\operatorname{Crit}_{d}},\quad U_{d}=\{u_{ \tau}\}_{\tau\in\mathcal{M}_{d}^{\uparrow}},\quad L_{d}=\{l_{\rho}\}_{\rho\in \mathcal{M}_{d}^{\downarrow}},\] _such that the following statements hold:_ 1. \(B_{d}\cup U_{d}\cup L_{d}\) _is a basis of_ \(C_{d}\)_, for all_ \(d\)_;_ 2. \(\operatorname{supp}b_{\sigma}\cap\operatorname{Crit}_{d}=\sigma\)_, for all_ \(\sigma\in\operatorname{Crit}_{d}\)_;_ 3. \(\mathfrak{d}b_{\sigma}=\sum_{p}b_{p_{\bullet}}\)_, where the sum is taken over all alternating paths_ \(p\) _satisfying_ \(p^{\bullet}=\sigma\)_;_ 4. \(\mathfrak{d}u_{\tau}=l_{\mu_{-}(\tau)}\)_, for all_ \(\tau\in\mathcal{M}_{d}^{\uparrow}\)_;_ 5. \(\mathfrak{d}l_{\rho}=0\)_, for all_ \(\rho\in\mathcal{M}_{d}^{\downarrow}\)_._ **Proof.** See [14, Theorem 11.24] or [14, Chapter 15]. Note, how the proof there follows a procedure closely resembling the Gaussian elimination. **Corollary 3.7**.: _The chain complex \(\mathcal{E}\) splits as a direct sum \(\mathcal{B}\oplus\mathcal{R}\) of chain complexes, where \(\mathcal{B}\) is isomorphic to the Morse chain complex \(\mathcal{E}^{\mathbb{M}}\), and \(\mathcal{R}\) is acyclic.3_ Footnote 3: Note the different use of the word _acyclic_: here it means that the homology groups of \(\mathcal{R}\) vanish. _In particular, we have \(H_{*}(\mathcal{E})\cong H_{*}(\mathcal{E}^{\mathbb{M}})\)._ Corollary 3.7 says that the Morse complex calculates the homology of the original complex \(\mathcal{E}\). When the Morse boundary operator vanishes, much more detailed information can be obtained about the homology and the cohomology generators. Recall that the cochains can be seen as functions which can be evaluated on chains. Furthermore, for a basis element \(\sigma\in\Gamma_{d}\), we let \(\sigma^{*}\) denote the cochain which evaluates to \(1\) on \(\sigma\), and evaluates to \(0\) on any other element of \(\Gamma_{a}\). We call \(\sigma^{*}\) the _dual_ of \(\sigma\). **Theorem 3.8**.: _Assume that the boundary operator in the Morse complex \(\mathcal{E}^{\mathbb{M}}\) vanishes. Then, the following hold._ 1. _The chains_ \(\{b_{\sigma}\}_{\sigma\in\operatorname{Crit}_{d}}\) _from Theorem_ 3.6 _form a basis for_ \(H_{d}(\mathcal{E})\)_._ 2. _The cochains_ \(\{\sigma^{*}\}_{\sigma\in\operatorname{Crit}_{d}}\) _form a basis for_ \(H^{d}(\mathcal{E})\)_, for all_ \(d\)_._ 3. _Any collection of cycles_ \(\{d_{\sigma}\}_{\sigma\in\operatorname{Crit}_{d}}\)_, such that_ \(\sigma\) _is the unique critical basis element in the support of_ \(d_{\sigma}\)_, for all_ \(\sigma\in\operatorname{Crit}_{d}\)_, forms a basis for_ \(H_{d}(\mathcal{E})\)_._ **Proof.** Statement (1) is immediate. To see (2), note that for every \(\sigma\in\operatorname{Crit}_{d}\), the cochain \(\sigma^{*}\) evaluates to \(1\) on \(b_{\sigma}\), and evaluates to \(0\) on \(b_{\tau}\), for any \(\tau\in\operatorname{Crit}_{d}\), \(\tau\neq\sigma\). Therefore, \(\{\sigma^{*}\}_{\sigma\in\operatorname{Crit}_{d}}\) is a basis for \(H^{d}(\mathcal{E})\). Finally, to see (3), note that for every \(\sigma\in\operatorname{Crit}_{d}\), the cochain \(\sigma^{*}\) evaluates to \(1\) on \(d_{\sigma}\), while for any other \(\tau\in\operatorname{Crit}_{d}\), \(\tau\neq\sigma\), the cochain \(\tau^{*}\) evaluates to \(0\) on \(d_{\sigma}\). The algebraic framework of chain complexes can readily be adapted to cubical complexes. So assume \(\mathcal{K}\) is a cubical complex. For \(0\leqslant d\leqslant\dim\mathcal{K}\), we let \(\mathcal{K}(d)\) denote the set of all \(d\)-dimensional cubes in \(\mathcal{K}\). We consider the cubical chain complex of \(\mathcal{K}\) over \(\mathbb{Z}_{2}\), \(\mathcal{C}\coloneqq C_{*}(\mathcal{K};\mathbb{Z}_{2})\). In \(\mathcal{C}\), each chain group \(C_{d}(\mathcal{K};\mathbb{Z}_{2})\) is a vector space over \(\mathbb{Z}_{2}\) with basis \(\mathcal{K}(d)\). We can then apply the algebraic discrete Morse theory, by taking \(\Gamma_{d}\coloneqq\mathcal{K}(d)\), for all \(d\). Note that here the covering relation comes from the following combinatorial framework. **Definition 3.9**.: _For a cubical complex \(\mathcal{K}\), we let \(\mathcal{F}(\mathcal{K})\) denote its **face poset**. Its elements are the non-empty cubes of \(\mathcal{K}\), which are partially ordered by inclusion._ The covering relation in the partial order on \(\mathcal{F}(\mathcal{K})\) is denoted by \(\succ\). The acyclicity condition can then be formulated using this combinatorial notion of covering. ## 4. Acyclic matching ### Defining a matching on \(\mathcal{F}(\Omega_{n})\) We proceed to describe a specific matching rule for the cubical complex \(\Omega_{n}\). From now on, we set \(\mathcal{K}\coloneqq\Omega_{n}\). First, we define two pivot functions \[\alpha,\beta:\mathcal{F}(\mathcal{K})\to[n],\] as follows: \[\alpha(\sigma) \coloneqq\min(A(\sigma)+B(\sigma))\] \[\beta(\sigma) \coloneqq\max(B(\sigma)+C(\sigma)).\] Clearly, either \(\alpha(\sigma)\in A\) or \(\alpha(\sigma)\in B\), and, similarly, either \(\beta(\sigma)\in B\) or \(\beta(\sigma)\in C\). Now, for all \(0\leqslant d\leqslant n-2\), we define the following pairs of sets: \[\mathcal{M}_{1}^{\uparrow}(d) \coloneqq\{\sigma\in\Omega_{n}(d)\,|\,\alpha(\sigma)\in B( \sigma)\},\] \[\mathcal{M}_{1}^{\downarrow}(d) \coloneqq\{\sigma\in\Omega_{n}(d)\,|\,\alpha(\sigma)\in A( \sigma)\text{ and }|A(\sigma)|\geqslant 2\}.\] and \[\mathcal{M}_{2}^{\uparrow}(d) \coloneqq\{\sigma\in\Omega_{n}(d)\,|\,A(\sigma)=\alpha(\sigma) \text{ and }\beta(\sigma)\in B(\sigma)\},\] \[\mathcal{M}_{2}^{\downarrow}(d) \coloneqq\{\sigma\in\Omega_{n}(d)\,|\,A(\sigma)=\alpha(\sigma), \beta(\sigma)\in C(\sigma),\,|C(\sigma)|\geqslant 2,\text{ and }\beta(\sigma)>\alpha(\sigma)\}.\] The reason for our choice of notations will become clear shortly. For now, note that the four sets \(\mathcal{M}_{1}^{\uparrow}(d)\), \(\mathcal{M}_{1}^{\downarrow}(d)\), \(\mathcal{M}_{2}^{\uparrow}(d)\), and \(\mathcal{M}_{2}^{\downarrow}(d)\) are disjoint for all \(d\). For any \(0\leqslant d\leqslant n-3\), we define functions \[f: \mathcal{M}_{1}^{\downarrow}(d)\to\mathcal{M}_{1}^{\uparrow}(d+1),\] \[g: \mathcal{M}_{2}^{\downarrow}(d)\to\mathcal{M}_{2}^{\uparrow}(d+1),\] by setting \[f(\sigma) \coloneqq(A(\sigma)-\alpha(\sigma),B(\sigma)+\alpha(\sigma),C( \sigma),D(\sigma)),\text{ and }\] \[g(\sigma) \coloneqq(A(\sigma),B(\sigma)+\beta(\sigma),C(\sigma)-\beta( \sigma),D(\sigma)),\] so, in words, the function \(f\) moves \(\alpha(\sigma)\) from \(A\) to \(B\), while the function \(g\) moves \(\beta(\sigma)\) from \(C\) to \(B\). Note, that \(f(\sigma)\) covers \(\sigma\) and \(g(\sigma)\) covers \(\sigma\), for all \(\sigma\)'s in the definition domain of the respective function. **Proposition 4.1**.: _The functions \(f\) and \(g\) are well-defined bijections, whose inverses are given by_ \[f^{-1}(\sigma) \coloneqq(A(\sigma)+\alpha(\sigma),B(\sigma)-\alpha(\sigma),C( \sigma),D(\sigma)),\text{ and }\] \[g^{-1}(\sigma) \coloneqq(A(\sigma),B(\sigma)-\beta(\sigma),C(\sigma)+\beta( \sigma),D(\sigma)).\] Proof.: Let us start with \(f\). If \(\sigma\in\mathcal{M}_{1}^{\downarrow}(d)\), then \(|A(\sigma)|\geqslant 2\), so \(A(\sigma)-\alpha(\sigma)\neq\emptyset\). This means that \(f(\sigma)\in\mathcal{X}(d+1)\). Furthermore, since \(A(\sigma)+B(\sigma)=A(f(\sigma))+B(f(\sigma))\), we have \(\alpha(\sigma)=\alpha(f(\sigma))\). This means that \(\alpha(f(\sigma))\in B(f(\sigma))\), so \(f(\sigma)\in\mathcal{M}_{1}^{\uparrow}(d+1)\), and hence \(f\) is well-defined. Finally, again since \(\alpha(\sigma)=\alpha(f(\sigma))\), the inverse of \(f\) is as stated in the formulation of the proposition. Now, let us consider the function \(g\). Obviously, the functions \(g\) and \(g^{-1}\), as the latter is defined in the proposition, are inverses of each other, as long as they are well-defined. First, we see that \(g\) is well-defined. Take \(\sigma\in\mathcal{M}_{2}^{\downarrow}(d)\). We have \(|C(\sigma)|\geqslant 2\), so \(C(\sigma)-\beta(\sigma)\neq\emptyset\), and therefore \(g(\sigma)\in\mathcal{X}(d+1)\). We have \(B(\sigma)+C(\sigma)=B(g(\sigma))+C(g(\sigma))\), so \(\beta(\sigma)=\beta(g(\sigma))\), which implies \(\beta(g(\sigma))\in B(g(\sigma))\). Clearly, \(A(g(\sigma))=A(\sigma)=\alpha(\sigma)\). Furthermore, \[\alpha(g(\sigma))=\min(\alpha(\sigma)+B(\sigma)+\beta(\sigma))=\alpha(\sigma),\] since \(\alpha(\sigma)<\beta(\sigma)\) and \(\alpha(\sigma)<B(\sigma)\). It follows that \(g(\sigma)\in\mathcal{M}_{2}^{\uparrow}(d+1)\). Finally, let us see that \(g^{-1}\) is well-defined. Take \(\sigma\in\mathcal{M}_{2}^{\uparrow}(d)\). It is obvious that \(g^{-1}(\sigma)\in\mathcal{X}(d-1)\), and that \(|C(g^{-1}(\sigma))|\geqslant 2\). Again, since \(B(\sigma)+C(\sigma)=B(g^{-1}(\sigma))+C(g^{-1}(\sigma))\), we have \(\beta(\sigma)=\beta(g^{-1}(\sigma))\), and so \(\beta(g^{-1}(\sigma))\in C(g^{-1}(\sigma))\). Similar to the above, \(\alpha(g^{-1}(\sigma))=\min(\alpha(\sigma)+B(\sigma)-\beta(\sigma))=\alpha(\sigma)\), since \(\alpha(\sigma)<\beta(\sigma)\) and \(\alpha(\sigma)<B(\sigma)\). It follows, that \(A(g^{-1}(\sigma))=\alpha(g^{-1}(\sigma))\), and that \(\alpha(g^{-1}(\sigma))<\beta(g^{-1}(\sigma))\). Thus \(g^{-1}(\sigma)\in\mathcal{M}_{2}^{\downarrow}(d)\), and the proof is finished. We now set \[\mathcal{M}^{\downarrow}(d) \coloneqq\mathcal{M}_{1}^{\downarrow}(d)\cup\mathcal{M}_{2}^{ \downarrow}(d),\text{ for all }d=0,\ldots,n-3;\] \[\mathcal{M}^{\uparrow}(d) \coloneqq\mathcal{M}_{1}^{\uparrow}(d)\cup\mathcal{M}_{2}^{ \uparrow}(d),\text{ for all }d=1,\ldots,n-2;\] \[\mathcal{M}_{1}^{\uparrow} \coloneqq\cup_{d=1}^{n-2}\mathcal{M}_{1}^{\uparrow}(d),\quad \mathcal{M}_{2}^{\downarrow}\coloneqq\cup_{d=1}^{n-2}\mathcal{M}_{2}^{ \downarrow}(d);\] \[\mathcal{M}_{2}^{\downarrow} \coloneqq\cup_{d=0}^{n-3}\mathcal{M}_{1}^{\downarrow}(d),\quad \mathcal{M}_{2}^{\downarrow}\coloneqq\cup_{d=0}^{n-3}\mathcal{M}_{2}^{ \downarrow}(d);\] \[\mathcal{M}^{\downarrow} \coloneqq\mathcal{M}_{1}^{\downarrow}\cup\mathcal{M}_{2}^{ \downarrow}\coloneqq\cup_{d=0}^{n-3}\mathcal{M}^{\downarrow}(d),\quad\mathcal{M }^{\uparrow}\coloneqq\mathcal{M}_{1}^{\uparrow}\cup\mathcal{M}_{2}^{ \downarrow}=\cup_{d=1}^{n-2}\mathcal{M}^{\uparrow}(d);\] \[\mathcal{M}_{1} \coloneqq\mathcal{M}_{1}^{\uparrow}\cup\mathcal{M}_{1}^{\uparrow}, \quad\mathcal{M}_{2}\coloneqq\mathcal{M}_{2}^{\uparrow}\cup\mathcal{M}_{2}^{ \downarrow};\] \[\mathcal{M} \coloneqq\mathcal{M}^{\downarrow}\cup\mathcal{M}^{\uparrow}= \mathcal{M}_{1}\cup\mathcal{M}_{2}.\] By Proposition 4.1, we know that functions \(f\) and \(g\) define a bijection from \(\mathcal{M}^{\downarrow}\) to \(\mathcal{M}^{\uparrow}\), whose inverse is given by the combination of the inverses of \(f\) and of \(g\). We denote this bijection by \(\mu_{+}\), its inverse by \(\mu_{-}\), and the resulting involution of \(\mathcal{M}\) simply by \(\mu\). ### Critical cubes of \(\mathcal{M}\) To describe the set of critical cubes of \(\mathcal{M}\), let us define the following three sets \[C_{1} \coloneqq\{\sigma\in\mathcal{F}(\Omega_{n})\,|\,A(\sigma)=\alpha( \sigma),\,B(\sigma)=\emptyset,\,|C(\sigma)|\geqslant 2,\,\text{and}\,\beta(\sigma)< \alpha(\sigma)\},\] \[C_{2} \coloneqq\{\sigma\in\mathcal{F}(\Omega_{n})\,|\,A(\sigma)=\alpha( \sigma),\,B(\sigma)=\emptyset,\,\text{and}\,C(\sigma)=\beta(\sigma)\},\] \[C_{3} \coloneqq\{\sigma\in\mathcal{F}(\Omega_{n})\,|\,A(\sigma)=\alpha( \sigma),\,C(\sigma)=\beta(\sigma),\,B(\sigma)\neq\emptyset,\,\text{and}\,\, \alpha(\sigma)<B(\sigma)<\beta(\sigma)\},\] see Figure 4.1. Note that all the cubes in \(C_{2}\cup C_{3}\) have dimension \(n-2\), whereas each cube \(\sigma\in C_{1}\) has dimension \(0\leqslant|D(\sigma)|\leqslant n-3\). As in the general case, let \(\operatorname{Crit}\) denote the set of the critical cubes with respect to the matching \(\mathcal{M}\). **Proposition 4.2**.: _We have \(\operatorname{Crit}=C_{1}\cup C_{2}\cup C_{3}\)._ **Proof.** Clearly, any critical cube \(\sigma\) must satisfy \(A(\sigma)=\alpha(\sigma)\), or else it would belong to the set \(\mathcal{M}_{1}\). This is satisfied for all \(\sigma\in C_{1}\cup C_{2}\cup C_{3}\), so we only need to consider \(\sigma\)'s, for which \(A(\sigma)=\alpha(\sigma)\). Let us now look at those \(\sigma\), for which \(B(\sigma)=\emptyset\). Such a cube is critical if and only if it does not belong to \(\mathcal{M}_{2}^{\downarrow}\). This is the case if and only if at least one of the following conditions is satisfied: 1. either \(C(\sigma)=\beta(\sigma)\), 2. or \(C(\sigma)-\beta(\sigma)\neq\emptyset\), but \(\beta(\sigma)<\alpha(\sigma)\). The first case describes the set \(C_{2}\), whereas the second case describes \(C_{1}\). Now look at those \(\sigma\) for which \(B(\sigma)\neq\emptyset\). By construction, we have \(\alpha(\sigma)<B(\sigma)\), so \(\alpha(\sigma)<\beta(\sigma)\). If \(\beta(\sigma)\in B(\sigma)\), we have \(\sigma\in\mathcal{M}_{2}^{\uparrow}\). If \(\beta(\sigma)\in C(\sigma)\), but \(|C(\sigma)|\geqslant 2\), we have \(\sigma\in\mathcal{M}_{2}^{\downarrow}\). So, the only option remaining is that \(\beta(\sigma)=C(\sigma)\), in which case \(\sigma\) is critical, and we have precisely described the set \(C_{3}\). **Proposition 4.3**.: _For \(d=0,\ldots,n-3\), the number of critical cubes of dimension \(d\) is given by \(\binom{n}{d}\). The number of critical cubes of dimension \(n-2\) is equal to \(2^{n}+\binom{n-1}{2}-2\)._ **Proof.** For \(d=0,\ldots,n-3\), the critical cubes of dimension \(d\) are precisely those cubes in \(C_{1}\), for which \(|D(\sigma)|=d\). Since \(\alpha(\sigma)>C(\sigma)\), the choice of \(D(\sigma)\) determines the cube uniquely. Therefore, the number of such cubes is simply \(\binom{n}{d}\). The number of critical cubes of dimension \(n-2\) is equal to \(|C_{2}|+|C_{3}|\). For \(\sigma\in C_{2}\), the numbers \(\alpha(\sigma)\) and \(\beta(\sigma)\) can be chosen independently, so \(|C_{2}|=n(n-1)\). For \(\sigma\in C_{3}\) again the choice of \(D(\sigma)\) determines \(\sigma\) uniquely, since in this case \(\alpha(\sigma)=\min([n]\setminus D(\sigma))\) and \(\beta(\sigma)=\max([n]\setminus D(\sigma))\). The only condition is that \(|D(\sigma)|\leqslant n-3\). It follows that \(|C_{3}|=2^{n}-1-n-\binom{n}{2}\). Adding this to \(n(n-1)\) yields the desired formula. Figure 4.1. The 3 types of critical cells with respect to \(\mathcal{M}\). ### The acyclicity of \(\mathcal{M}\) Let us now show that our matching satisfies the key property required by discrete Morse theory. **Proposition 4.4**.: _The matching \(\mathcal{M}\) is acyclic._ **Proof.** Assume \(\mathcal{M}\) is not acyclic, and pick a cycle \[\sigma_{1}\prec\mu_{+}(\sigma_{1})\succ\sigma_{2}\prec\mu_{+}(\sigma_{2}) \succ\cdots\succ\sigma_{q}\prec\mu_{+}(\sigma_{q})\succ\sigma_{1}, \tag{4.1}\] where \(q\geq 2\), and \(\sigma_{1},\ldots,\sigma_{q}\) are distinct cubes of dimension \(d\), where \(0\leq d\leq n-3\). We traverse this cycle from left to right. For convenience, set \(\sigma_{q+1}:=\sigma_{1}\). By definition of the matching, we have \(D(\sigma_{i})=D(\mu_{+}(\sigma_{i}))\), for \(1\leq i\leq q\). On the other hand, since \(\mu_{+}(\sigma_{i})\succ\sigma_{i+1}\), we have \(D(\mu_{+}(\sigma_{i}))\subseteq D(\sigma_{i+1})\), for all \(i\). Therefore, we conclude \[D(\sigma_{1})\subseteq D(\sigma_{2})\subseteq\cdots\subseteq D(\sigma_{q}) \subseteq D(\sigma_{1}),\] which of course implies \[D(\sigma_{1})=\cdots=D(\sigma_{q})=D(\mu_{+}(\sigma_{1}))=\cdots=D(\mu_{+}( \sigma_{q})).\] Let us now see what happens to the structure of the cube labels when one follows the cycle. Consider an arbitrary index \(i\), such that \(\sigma_{i}\in\mathcal{M}_{1}^{\downarrow}\). For brevity, say \(\sigma_{i}=(A,B,C,D)\). We have \[\mu_{+}(\sigma_{i})=(A-\alpha(\sigma_{i}),B+\alpha(\sigma_{i}),C,D).\] What are the options for \(\sigma_{i+1}\)? Assume \[\sigma_{i+1}=(A-\alpha(\sigma_{i})+y,B+\alpha(\sigma_{i})-y,C,D).\] Since \(\sigma_{i}\neq\sigma_{i+1}\), we have \(y\neq\alpha(\sigma_{i})\). Then \(\alpha(\sigma_{i+1})=\alpha(\sigma_{i})\), and, since \(\alpha(\sigma_{i})\in B(\sigma_{i+1})\), we have \(\sigma_{i+1}\in\mathcal{M}_{1}^{\uparrow}\), which is a contradiction. We must therefore have \[\sigma_{i+1}=(A-\alpha(\sigma_{i}),B+\alpha(\sigma_{i})-y,C+y,D),\] for some \(y\in B+\alpha(\sigma_{i})\). If \(y\neq\alpha(\sigma_{i})\), then again \(\alpha(\sigma_{i+1})=\alpha(\sigma_{i})\), this time since \[\alpha(\sigma_{i})\in A(\sigma_{i+1})+B(\sigma_{i+1})\subseteq A(\sigma_{i})+ B(\sigma_{i}).\] This means that \(\sigma_{i+1}\in\mathcal{M}_{1}^{\uparrow}\), again leading to a contradiction. We therefore conclude that the only option is \[\sigma_{i+1}=(A-\alpha(\sigma_{i}),B,C+\alpha(\sigma_{i}),D).\] Let us now consider the case \(\sigma_{i}\in\mathcal{M}_{2}^{\downarrow}\), so we can write \(\sigma_{i}=(\alpha(\sigma_{i}),B,C,D)\), with \(|C|\geq 2\), \(\beta(\sigma_{i})\in C\), and \(\beta(\sigma_{i})>\alpha(\sigma_{i})\). By construction \(\alpha(\sigma_{i})<B\). We then have \[\mu_{+}(\sigma_{i})=(\alpha(\sigma_{i}),B+\beta(\sigma_{i}),C-\beta(\sigma_{i }),D).\] Assume first \[\sigma_{i+1}=(\alpha(\sigma_{i}),B+\beta(\sigma_{i})-x,C-\beta(\sigma_{i})+x, D).\] Since \(\sigma_{i}\neq\sigma_{i+1}\) we have \(x\neq\beta(\sigma_{i})\). We then have \(\alpha(\sigma_{i})<B+\beta(\sigma_{i})-x\), because \(\alpha(\sigma_{i})<\beta(\sigma_{i})\) and \(\alpha(\sigma_{i})<B\). This means that \(\alpha(\sigma_{i+1})=\alpha(\sigma_{i})\). On the other hand, \(\beta(\sigma_{i})=\beta(\sigma_{i+1})\), so \(\sigma_{i+1}\in\mathcal{M}_{2}^{\uparrow}\), yielding a contradiction. We can therefore conclude that in this case \[\sigma_{i+1}=(A+x,B-x+\alpha(\sigma_{i}),C-\alpha(\sigma_{i}),D),\] where \(x\) may or may not be equal to \(\alpha(\sigma_{i})\). These considerations imply that the matchings of the two types alternate along the cycle (4.1). We can therefore assume that \(q=2t\) for some \(t\geq 1\), and that \(\sigma_{1},\sigma_{3},\ldots,\sigma_{2t-1}\in\mathcal{M}_{1}^{\downarrow}\), whereas \(\sigma_{2},\sigma_{4},\ldots,\sigma_{2t}\in\mathcal{M}_{2}^{\downarrow}\). To finish the argument let us look closely at each of these alternating steps. Pick \(1\leq k\leq t\), and consider \(\sigma_{2k-1}\), say \(\sigma_{2k-1}=(A,B,C,D)\). The argument that follows is illustrated by Figure 4.2 for the case \(k=1\). Since \(\sigma_{2k-1}\in\mathcal{M}_{1}^{\downarrow}\), we have \(|A|\geq 2\) and \(\alpha(\sigma_{2k-1})\in A\). Accordingly, by our argument above, the cycle continues with \[\mu_{+}(\sigma_{2k-1}) =(A-\alpha(\sigma_{2k-1}),B+\alpha(\sigma_{2k-1}),C,D),\] \[\sigma_{2k} =(A-\alpha(\sigma_{2k-1}),B,C+\alpha(\sigma_{2k-1}),D).\] Now, \(\sigma_{2k}\in\mathcal{M}_{2}^{\downarrow}\), so we have \(A-\alpha(\sigma_{2k-1})=\alpha(\sigma_{2k})\), or equivalently \(A=\{\alpha(\sigma_{2k-1}),\alpha(\sigma_{2k})\}\). Note, that \(\alpha(\sigma_{2k-1})<\alpha(\sigma_{2k})\). By definition of \(\mu\) this element is matched to \[\mu_{+}(\sigma_{2k})=(\alpha(\sigma_{2k}),B+\beta(\sigma_{2k}),C+\alpha(\sigma _{2k-1})-\beta(\sigma_{2k}),D).\] Again, by our argument above, we have \[\sigma_{2k+1}=(\alpha(\sigma_{2k})+x,B+\beta(\sigma_{2k})-x,C+\alpha(\sigma_{2 k-1})-\beta(\sigma_{2k}),D),\] where \(x\) is some element from \(B+\beta(\sigma_{2k})\). Note that \(\alpha(\sigma_{2k})<B\), by definition of the function \(\alpha()\), and \(\alpha(\sigma_{2k})<\beta(\sigma_{2k})\), because \(\sigma_{2k}\in\mathcal{M}_{2}^{\downarrow}\). It follows that \(\alpha(\sigma_{2k})<x\), so \(\alpha(\sigma_{2k})=\alpha(\sigma_{2k+1})\). Therefore \(\sigma_{2k+1}\) is matched to \[\mu_{+}(\sigma_{2k+1})=(x,B+\beta(\sigma_{2k})-x+\alpha(\sigma_{2k}),C+\alpha( \sigma_{2k-1})-\beta(\sigma_{2k}),D).\] Finally, \(\alpha(\sigma_{2k-1})<\alpha(\sigma_{2k})\) together with \(\alpha(\sigma_{2k})=\alpha(\sigma_{2k+1})\) implies that \(\alpha(\sigma_{2k-1})<\alpha(\sigma_{2k+1})\), for all \(k\). This leads to a contradiction as we follow one turn of the cycle. Figure 4.2. Closer look at the cycle at \(\sigma_{1}\). ## 5. Homology calculation In this section we shall apply the statements from Section3 to our matching. To start with, we need to extend our terminology. **Definition 5.1**.: _Let \(\sigma\) be an arbitrary cube of \(\Omega_{n}\). A \(\Lambda\)**-path** starting at \(\sigma\) is either \(\sigma\) itself, if \(\sigma\) is critical, or a sequence of cubes_ \[\sigma=\tau_{1}\prec\mu_{+}(\tau_{1})\succ\cdots\succ\tau_{q}\prec\mu_{+}(\tau _{q})\succ\tau, \tag{5.1}\] _where \(q\) is a positive integer and \(\tau\) is a critical cube. Clearly, in the latter case, we must have \(\sigma\in\mathcal{M}^{\downarrow}\)._ We say that the \(\Lambda\)-path shown in (5.1) _ends at the cube \(\tau\)_. If the \(\Lambda\)-path consists of a single cube, we say that this path _ends at \(\sigma\)_. Note that removing the starting cube from an alternating path results in a \(\Lambda\)-path. Likewise, when \(q\geqslant 1\) in (5.1), the removal of the first two cubes from an alternating path, results in a new alternating path. We shall next prove that the boundary operator in the Morse chain complex is trivial, and the following lemmata will provide the key building block of the argument. **Lemma 5.2**.: _Assume \(\sigma=\tau_{1}\prec\mu_{+}(\tau_{1})\succ\cdots\succ\tau_{q}\prec\mu_{+}( \tau_{q})\succ\tau\) is a \(\Lambda\)-path, then \(B(\tau_{1})=\cdots=B(\tau_{q})=B(\tau)=\emptyset\)._ **Proof.** By our construction and the description of the set of critical cubes, we have the following facts: 1. since \(\tau\) is critical, and the dimension of \(\tau\) is at most \(n-3\), we have \(B(\tau)=\emptyset\); 2. \(|B(\tau_{i})|+1=|B(\mu_{+}(\tau_{i}))|\), for all \(1\leqslant i\leqslant q\); 3. the difference \(|B(\mu_{+}(\tau_{i}))|-|B(\tau_{i+1})|\) is either \(0\) or \(1\), for all \(1\leqslant i\leqslant q\). It follows that, for all \(1\leqslant i\leqslant q\), the difference \(|B(\tau_{i+1})|-|B(\tau_{i})|\) equals to either \(1\) or \(0\), in particular, \(|B(\tau_{i})|\leqslant|B(\tau_{i+1})|\). Combining this with the fact that \(B(\tau)=\emptyset\), we can conclude that \(B(\tau_{i})=\emptyset\), for all \(1\leqslant i\leqslant q\). **Lemma 5.3**.: _Assume \(\sigma\) is a cube of \(\Omega_{n}\), such that \(\sigma\not\in\mathcal{M}^{\uparrow}\), and \(B(\sigma)=\emptyset\). Then, there exists a unique \(\Lambda\)-path starting at \(\sigma\). This path will end in a critical cube \(\tau=(x,\emptyset,A(\sigma)+C(\sigma)-x,D(\sigma))\), where \(x=\max(A(\sigma)+C(\sigma))\)._ **Proof.** If \(\sigma\) is critical, then we must have \(A(\sigma)=\alpha(\sigma)\), and \(\alpha(\sigma)>C(\sigma)\), so the statement of the lemma is correct. Assume now that \(\sigma\in\mathcal{M}^{\downarrow}\). By Lemma5.2, we know that in any alternating path \[\sigma=\tau_{1}\prec\mu_{+}(\tau_{1})\succ\cdots\succ\tau_{q}\prec\mu_{+}( \tau_{q})\succ\tau\] we must have \(B(\tau_{1})=\cdots=B(\tau_{q})=B(\tau)=\emptyset\). We shall now prove the statement by induction on \(|A(\sigma)|\). Consider first the basis case when \(|A(\sigma)|=1\), say \(A(\sigma)=x\). If \(x>C(\sigma)\), then \(\sigma\) is critical, contradicting our assumption that \(\sigma\in\mathcal{M}^{\downarrow}\). Therefore, we must have \(\beta(\sigma)>x\). The only \(\Lambda\)-path starting from \(\sigma\) is then \[\sigma\prec(x,\beta(\sigma),C(\sigma)-\beta(\sigma),D(\sigma)) \succ(x+\beta(\sigma),\emptyset,C(\sigma)-\beta(\sigma),D(\sigma))\\ \prec(\beta(\sigma),x,C(\sigma)-\beta(\sigma),D(\sigma))\succ( \beta(\sigma),\emptyset,C(\sigma)-\beta(\sigma)+x,D(\sigma)),\] with the last cube being critical and satisfying the conditions of the lemma. Next, we prove the induction step. Assume that \(A(\sigma)=\{x_{1},\ldots,x_{k}\}\), with \(k\geqslant 2\), \(x_{1}<\cdots<x_{k}\), and assume that the statement has been proved for smaller values of \(k\). As we have shown in the proof of Proposition 4.4, the only way a \(\Lambda\)-path can start from \(\sigma\) is \[\sigma\prec(A(\sigma)-x_{1},x_{1},C(\sigma),D(\sigma))\succ(A(\sigma)-x_{1}, \emptyset,C(\sigma)+x_{1},D(\sigma)).\] We can now use the induction assumption to conclude that there is a unique \(\Lambda\)-path starting at the cube \((A(\sigma)-x_{1},\emptyset,C(\sigma)+x_{1},D(\sigma))\). This path will end at the cube \((y,\emptyset,A(\sigma)+C(\sigma)-y,D(\sigma))\), where \(y=\max(A(\sigma)+C(\sigma))\). Clearly, this is the unique \(\Lambda\)-path starting at \(\sigma\). We can now use the technical Lemmata 5.2 and 5.3 to prove the main result of this section. **Theorem 5.4**.: _The boundary operator in the Morse chain complex is trivial._ Proof.: Let \(\sigma\) be a critical cube of dimension \(d\), \(d\geqslant 1\), say \(\sigma=(A,B,C,D)\). Consider an alternating path \[\sigma\succ\tau_{1}\prec\mu_{+}(\tau_{1})\succ\cdots\succ\tau_{q}\prec\mu_{+}( \tau_{q})\succ\tau,\] where \(q\) is a nonnegative integer and \(\tau\) is a critical cube of dimension \(d-1\). For convenience, we set \(\tau_{q+1}:=\tau\). By Lemma 5.2 we know that \(B(\tau_{1})=\cdots=B(\tau_{q})=B(\tau)=\emptyset\). Let us first consider the case when \(B(\sigma)\neq\emptyset\). In particular, the cube \(\sigma\) has dimension \(n-2\). If \(|B(\sigma)|\geqslant 2\), then \(|B(\tau_{1})|\geqslant 1\), contradicting the fact that \(B(\tau_{1})\) is empty. Therefore, in this case, there are no alternating paths originating at \(\sigma\) at all. We can therefore assume that \(B(\sigma)\) consists of a single element, say \(B(\sigma)=x\). We then have two options: either \(\tau_{1}=(A+x,\emptyset,C,D)\) or \(\tau_{1}=(A,\emptyset,C+x,D)\). By Lemma 5.3 there exists a unique \(\Lambda\)-path starting from each one. Both paths will end at the same critical cube \((y,\emptyset,A+C+x-y,D)\), where \(y=\max(A+C+x)\). By Definition 3.5 the boundary operator of the Morse complex evaluates to \(0\) on \(\sigma\). Let us now consider the case when \(B(\sigma)=\emptyset\). Assume \(D(\sigma)=\{x_{1},\ldots,x_{k}\}\). Considering the alternating paths starting from \(\sigma\), there are \(2k\) possibilities for the first step. The cube \(\tau_{1}\) is either \((A+x_{i},\emptyset,C,D-x_{i})\) or \((A,\emptyset,C+x_{i},D-x_{i})\), for \(1\leqslant i\leqslant k\). By the argument above, once \(\tau_{1}\) is chosen, the rest of the alternating path can be chosen uniquely. Accordingly we will have \(2k\) alternating paths starting at \(\sigma\). These paths will end at the cubes \((y_{i},\emptyset,A+C+x_{i}-y_{i},D)\), where \(y_{i}=\max(A+C+x_{i})\), with exactly two paths ending in each such cube. It follows from Definition 3.5 that the boundary operator of the Morse complex evaluates to \(0\) on this \(\sigma\) as well. We can now state the first main theorem of this paper. **Theorem 5.5**.: _The Betti numbers of \(\Omega_{n}\) are given by the following formulae:_ \[\beta_{d} =\binom{n}{d},\text{ for }0\leqslant d\leqslant n-3; \tag{5.3}\] \[\beta_{n-2} =2^{n}+\binom{n-1}{2}-2. \tag{5.2}\] Proof.: Combine Theorem 3.8(1) with Proposition 4.3. The reader is invited to see how Theorem 5.5 confirms the already derived value for the Mobius function of \(\Omega_{n}\) via the Euler-Poincare formula. ## 6. Explicit homology basis In this section we would like to describe an explicit homology basis for \(\Omega_{n}\). To this end, let us define the following chains in \(C_{*}(\Omega_{n})\). **Definition 6.1**.: _Assume \(A\) and \(C\) are two disjoint non-empty subsets of \([n]\). Set \(S:=[n]\setminus(A\cup C)\). We define_ \[\rho_{A,C}:=\sum_{B\cup D=S}(A,B,C,D).\] Clearly, \(\rho_{A,C}\) is a chain of dimension \(|S|=n-|A|-|B|\). **Proposition 6.2**.: _The chain \(\rho_{A,C}\) is a cycle for all choices of \(A\) and \(C\)._ **Proof.** Simple-minded boundary calculation gives \[\begin{split}\partial(\rho_{A,C})&=\partial(\sum_{ B\cup D=S}(A,B,C,D))=\sum_{B\cup D=S}\partial(A,B,C,D)\\ &=\sum_{B\cup D=S\atop x\in B}(A+x,B-x,C,D)+\sum_{B\cup D=S\atop x \in B}(A,B-x,C+x,D)\\ &+\sum_{B\cup D=S\atop x\in D}(A+x,B,C,D-x)+\sum_{B\cup D=S\atop x \in D}(A,B,C+x,D-x).\end{split} \tag{6.1}\] All the terms on the right hand side of Equation (6.1) are either of the type \((A+x,B^{\prime},C,D^{\prime})\) or \((A,B^{\prime},C+x,D^{\prime})\), where \(x\in S\), and \(B^{\prime}\cup D^{\prime}=S-x\). Each term of the type \((A+x,B^{\prime},C,D^{\prime})\) occurs twice: once in first sum, and once in the third one. Likewise, each term of the type \((A,B^{\prime},C+x,D^{\prime})\) occurs once in the second sum, and once in the third. In any case, the total sum vanishes in \(\mathbb{Z}_{2}\). We can then see which critical cubes are contained in the support of this cycle. **Proposition 6.3**.: _The presence of the critical cubes in the support of \(\rho_{A,C}\) is described by the following statements, where as before we set \(S:=[n]\setminus(A\cup C)\)._ 1. _If_ \(|A|\geqslant 2\)_, the support of_ \(\rho_{A,C}\) _contains no critical cubes._ 2. _If_ \(A=a\) _and_ \(|C|\geqslant 2\)_, then the support of_ \(\rho_{A,C}\) _contains no critical cubes unless_ \(a>C\)_. In the latter case, it contains a unique critical cube_ \((a,\emptyset,C,S)\)_._ 3. _If_ \(A=a\)_,_ \(C=c\)_, and_ \(c-a\leqslant 1\)_, the support of_ \(\rho_{A,C}\) _contains a unique critical cube_ \((a,\emptyset,c,S)\)_._ 4. _If_ \(A=a\)_,_ \(C=c\)_, and_ \(c-a\geqslant 2\)_, the support of_ \(\rho_{A,C}\) _contains the critical cubes_ \((a,T,c,S\setminus T)\)_, for any_ \(T\subseteq S\)_, such that either_ \(T=\emptyset\) _or_ \(a<T<c\)_._ **Proof.** This follows from applying case-by-case analysis to the description of the critical cubes. Note, that (2) and (3) of Proposition 6.3 means that we have the homology generators dual to the cochains \((a,\emptyset,C,D)^{*}\), where 1. either \(|C|\geqslant 2\), 2. or \(C=c\) and \(c-a\leqslant 1\). The first case covers all the homology in dimensions \(n-3\) and lower. The second case covers some of the homology in dimension \(n-2\), so that we are left to deal with the case \(c-a\geqslant 2\). This case is not quite as straightforward. **Definition 6.4**.: _Assume \(G\) is an arbitrary subset of \([n]\), such that \(|G|\geqslant 2\). Set \(a:=\min G\), \(c:=\max G\)._ _Consider a decomposition of the set \([n]\) as a disjoint union \(E\coprod I\coprod G\), where \(E=[n]\setminus[a,c]\), and \(I=[a,c]\setminus G\). We then set_ \[\gamma_{G}\coloneqq\sum_{i,j\in[a,c]}(i,B,j,D),\] _with two additional conditions on the summands: \(D\subseteq I\cup E\) and \(B\subseteq G\cup E\), see Figure 6.1._ Clearly, \(\gamma_{G}\) is a chain for dimension \(n-2\). **Proposition 6.5**.: _For every set \(G\), such that \(|G|\geqslant 2\), the chain \(\gamma_{G}\) is a cycle._ **Proof.** Just as in Equation (6.1), the straightforward boundary calculation yields \[\begin{split}\partial(\gamma_{G})&=\partial(\sum_{ \begin{subarray}{c}i,j\in[a,c]\\ B\subseteq C\cup E\end{subarray}}(i,B,j,D))=\sum_{\begin{subarray}{c}i,j\in[a, c]\\ B\subseteq C\cup E\end{subarray}}\partial(i,B,j,D)\\ &=\sum_{\begin{subarray}{c}i,j\in[a,c]\\ B\subseteq C\cup E\end{subarray}}(i+x,B-x,j,D)+\sum_{\begin{subarray}{c}i,j\in[a, c]\\ B\subseteq C\cup E\end{subarray}}(i,B-x,j+x,D)\\ &+\sum_{\begin{subarray}{c}i,j\in[a,c]\\ x\in B\subseteq C\cup E\end{subarray}}(i+x,B,j,D-x)+\sum_{\begin{subarray}{c}i, j\in[a,c]\\ x\in B\subseteq C\cup E\end{subarray}}(i,B,j+x,D-x).\end{split} \tag{6.2}\] Let us analyze the right hand side of Equation (6.2). All the terms in the sums are either of the type \((i+k,B^{\prime},j,D^{\prime})\) or \((i,B^{\prime},j+k,D^{\prime})\). Due to symmetry, it is enough to consider the terms \((i+k,B^{\prime},j,D^{\prime})\). These are subject to the conditions: 1. \(D^{\prime}\subseteq I\cup E\) and \(B^{\prime}\subseteq G\cup E\), 2. at least one of the elements \(i\) and \(k\) lies in \([a,c]=G\cup I\). Since we can swap \(i\) and \(k\), we can assume that either \(i\in I\), or, \(i\in G\) and \(k\notin I\). Label the sums on the right hand side of Equation (6.2) with \(I-IV\). It is enough to show that each term \((i+k,B^{\prime},j,D^{\prime})\) either appears once in two of these sums, or twice in one of the sums. The following table summarizes the \(5\) possible cases. Figure 6.1. Graphic description of the conditions on the labels of the cubes in the chain \(\gamma_{G}\). **Proposition 6.6**.: _The support of \(\gamma_{G}\) contains a unique critical cube_ \[\sigma_{G}\coloneqq(\min G,G-\min G-\max G,\max G,I+E).\] Proof.: To start with, it is clear that \(\sigma_{G}\) is critical and that it actually belongs to the support of \(\gamma_{G}\). Let \(\tau=(i,B,j,D)\) be some other critical cube in the support of \(\gamma_{G}\). By definition of \(\gamma_{G}\), we know that \(\min G,\max G\in B+i+j\). On the other hand, since \(\tau\) is critical we must have \(i=\min(B+i+j)\) and \(j=\max(B+i+j)\). It follows that \(i\leqslant\min G\) and \(j\geqslant\max G\). On the other hand, we know that \(i,j\notin E\), so we must have \(i=\min G\) and \(j=\max G\). Finally, if \(B\cap E\neq\emptyset\), then we cannot have both \(\min G=\min(B+i+j)\) and \(\max G=\max(B+i+j)\). It follows, that \(\sigma_{G}\) is the only critical cube in the support of \(\gamma_{G}\). We finish by stating the second main theorem of this paper. **Theorem 6.7**.: _For \(d=0,\ldots,n-3\) a basis for the homology group \(H_{d}(\Omega_{n})\) is given by the set \(\{\rho_{a,C}\,|\,a>C,\,n-|C|-1=d\}\)._ _The basis for \(H_{n-2}(\Omega_{n})\) is given by the set_ \[\{\gamma_{G}\,|\,|G|\geqslant 2\}\cup\{\rho_{a,c}\,|\,a>c\}.\] Proof.: This follows from Theorem 3.8(3). ## 7. Final remarks It is not difficult to adopt our calculations to the case of other coefficients. All one needs to do is to make a compatible choice of incidence coefficients and then trace through our proofs, using the general definition if the boundary operator in the Morse chain complex. Finally, we would like to remark that it would be interesting to generalize our results to the case of more than 2 points on a circle.
2309.08068
A Comparison of Two Lattice Boltzmann Models for Electrodynamics
In recent years, various Lattice Boltzmann models for electrodynamics have been developed as alternatives to classical methods such as Finite Difference Time Domain (FDTD) and Finite Element Methods (FEM). However, there has been a lack of systematic comparisons between these models. This paper addresses this gap by comparing two specific Lattice Boltzmann models, published by Mendoza and Mu\~noz (MM), and Hauser and Verhey (HV), respectively. To compare the models, we utilize time and memory as indicators, considering the same achieved error, in four standard tests: a dielectric pulse traveling through two interfaces, the skin effect, the Hertz dipole, and a dielectric pulse traveling through several interfaces. The results indicate that both methods accurately simulate the tests and exhibit convergence as the mesh is refined. However, the MM method outperforms the HV method regarding time, while its memory efficiency was lower. The modified Hauser-Verhey model demonstrates itself to be a promising alternative to the Mendoza-Mu\~noz model. These findings contribute to the ongoing development and optimization of numerical methods for electromagnetics simulations.
Jorge I. Rubiano-Murcia, Alejandro M. Salas-Estrada, Jose D. Hernandez-Ortega
2023-09-14T23:55:56Z
http://arxiv.org/abs/2309.08068v1
# A Comparison of Two Lattice Boltzmann Models for Electrodynamics ###### Abstract In recent years, various Lattice Boltzmann models for electrodynamics have been developed as alternatives to classical methods such as Finite Difference Time Domain (FDTD) and Finite Element Methods (FEM). However, there has been a lack of systematic comparisons between these models. This paper addresses this gap by comparing two specific Lattice Boltzmann models, published by Mendoza and Munoz (MM), and Hauser and Verhey (HV), respectively. To compare the models, we utilize time and memory as indicators, considering the same achieved error, in four standard tests: a dielectric pulse traveling through two interfaces, the skin effect, the Hertz dipole, and a dielectric pulse traveling through several interfaces. The results indicate that both methods accurately simulate the tests and exhibit convergence as the mesh is refined. However, the MM method outperforms the HV method regarding time, while its memory efficiency was lower. The modified Hauser-Verhey model demonstrates itself to be a promising alternative to the Mendoza-Munoz model. These findings contribute to the ongoing development and optimization of numerical methods for electromagnetics simulations. Lattice Boltzmann, Maxwell's equations, distribution function, convergence analysis. ## I Introduction The Lattice-Boltzmann method (LBM), although a recent numerical approach compared to other alternatives, has proven to be a powerful tool for simulating processes modeled by conservation equations [1]. Among the use cases, we could highlight applications in computational fluid dynamics where LBM allows for the simulation of complex geometries and multi-phase flows [2]. Similarly, it has also been successfully deployed to simulate magnetohydrodynamics [3], the wave equation [4], Poisson's equation [5], and linear and non-linear Schrodinger equations [6]. More recently, however, it was further extended by Mendoza and Munoz to integrate the electrodynamic Maxwell's equations [7]. Following their publication, numerous authors proposed alternative formulations to recover Maxwell's equations. For example, Succi and collaborators [8] proposed a scheme to simulate three-dimensional wave propagation in dispersive media; Hauser and Verhey [9, 10] similarly treated complex media while significantly reducing the complexity of the scheme, compared with previous approaches. In this same fashion, other alternatives were presented for dispersive media [11, 12] and electromagnetic waves in one-dimensional photonic crystals [13]. All these publications perform some cornerstone simulations such as point dipole antennas, the Skin effect, and medium changes, and some of them carry out encouraging comparisons with well-established methods such as FDTD [7, 10]. Nevertheless, there is no systematic comparison between Lattice-Boltzmann formulations for Maxwell's equations in the current literature. What's more concerning, many previous publications suggest some degree of suitability over competing alternatives without solid arguments. In this work, we fill this literature gap by comparing the schemes' CPU time and memory requirement in [7] and [9]. For this, we simulated a Gaussian pulse crossing an interface, the Skin effect, the dipole radiation, and a one-dimensional highly non-uniform media, for different refinements of the grid. ### _Bhatnagar Gross and Krook (BGK) scheme_ The Boltzmann equation governs the evolution of an equilibrium function \(f\) in the phase space, representing a scalar of the probability distribution of physical quantities in the system. In the Bhatnagar-Gross-Krook (BGK) scheme, the collision operator \(\Omega\) is proportional to the deviation of the equilibrium function, driving the system towards thermal equilibrium. The Boltzmann equation can be expressed as follows, see [8, 14]. \[\frac{\partial f}{\partial t}+\vec{v}\cdot\nabla f=\Omega(f)+T, \tag{1}\] where \(\Omega(f)\) represents the collision operator and \(T\) denotes the source terms that can be included if necessary. In numerical computations, the velocity set of vectors is discretized into a finite set of directions, while the physical space is discretized into a mesh. The Boltzmann equation can be solved using a two-step process: collision and advection. This can be achieved through the BGK model in the context of the lattice Boltzmann method. The discretized versions of the collision and advection steps can be expressed as follows: Collision step: \[f^{*}(\vec{x},\vec{v},t+\Delta t)=f(\vec{x},\vec{v},t)-\frac{\Delta t}{\tau} \left(f(\vec{x},\vec{v},t)-f_{\text{eq}}(\vec{x},\vec{v})\right)+T, \tag{2}\] Advection step: \[f(\vec{x}+\Delta t\vec{v},\vec{v},t+\Delta t)=f^{*}(\vec{x},\vec{v},t+\Delta t), \tag{3}\] where \(\Delta t\) is the time step, \(\tau\) is the relaxation time, and \(f_{\text{eq}}\) is the equilibrium distribution function. These equations represent the numerical implementation of the BGK scheme within the lattice Boltzmann method. ### _Maxwell's equations_ For linear media, Maxwell's equations can be written in terms of the electric field \(\vec{E}\) and magnetic field \(\vec{B}\), as well as the electric displacement field \(\vec{D}\) and magnetic induction field \(\vec{H}\). The equations are as follows, see [15]. 1. Gauss's law for electric fields: \[\nabla\cdot\vec{D}=\rho,\] (4) where \(\rho\) is the electric charge density. 2. Gauss's law for magnetic fields: \[\nabla\cdot\vec{B}=0.\] (5) 3. Faraday's law of electromagnetic induction: \[\nabla\times\vec{E}=-\frac{\partial\vec{B}}{\partial t}.\] (6) 4. Ampere's law with Maxwell's addition: \[\nabla\times\vec{H}=\vec{J}+\frac{\partial\vec{D}}{\partial t},\] (7) where \(\vec{J}\) is the electric current density. These equations are supplemented by the continuity equation: \[\nabla\cdot\vec{J}=-\frac{\partial\rho}{\partial t}, \tag{8}\] which expresses the conservation of charge. It's important to note that the electric displacement field \(\vec{D}\) is related to the electric field \(\vec{E}\) by: \[\vec{D}=\epsilon\vec{E}, \tag{9}\] where \(\epsilon\) is the permittivity of the medium. Similarly, the magnetic induction field \(\vec{H}\) is related to the magnetic field \(\vec{B}\) by: \[\vec{H}=\frac{1}{\mu}\vec{B}, \tag{10}\] where \(\mu\) is the permeability of the medium. In addition, the Guass's laws can be obtained from the Faraday's law, Ampere-Maxwell law and continuity equation, see [7, 9]. These equations govern the behavior of electromagnetic fields in linear media. _Remark: Throughout the entire text, the permittivity and permeability are denoted as \(\epsilon=\epsilon_{r}\epsilon_{0}\) and \(\mu=\mu_{r}\mu_{0}\), where \(\epsilon_{0}\) and \(\mu_{0}\) are the permittivity and permeability in vacuum. Meanwhile, \(\epsilon_{r}\) and \(\mu_{r}\) represent the relative permittivity and relative permeability, respectively._ ## II MM model The model introduced by M. Mendoza and D. Munoz in [7], henceforth referred to as the MM model, utilizes a local basis consisting of electric vectors \(\vec{e}^{p}_{ij}\) and velocity vectors \(\vec{v}^{p}_{i}\) within a cubic cell D3Q13. Additionally, magnetic vectors \(\vec{b}^{p}_{ij}\) are employed in a D3Q7 cell. The planes are labeled as \(p=1,2,3\), while \(i=1,2,3,4\) represents the discretized velocity directions, with four directions per plane. Each velocity vector corresponds to two electric vectors denoted by \(j=0,1\). In this framework, the direction of the velocity vectors can be interpreted as the direction of energy flux, i.e., the pointing vector. Specifically, the velocity vectors are defined as follows: \[\begin{split}\vec{v}^{0}_{i}&=\sqrt{2}(\cos((2i-1) \pi/4),\sin((2i-1)\pi/4),0)\,,\\ \vec{v}^{1}_{i}&=\sqrt{2}(\cos((2i-1)\pi/4),0,\sin(( 2i-1)\pi/4))\,,\\ \vec{v}^{2}_{i}&=\sqrt{2}(0,\cos((2i-1)\pi/4),\sin( (2i-1)\pi/4))\,,\\ \vec{v}_{0}&=(0,0,0)\,.\end{split} \tag{11}\] The electric and magnetic vectors are given by \[\begin{array}{ll}\vec{e}^{p}_{i0}=\frac{1}{2}\vec{v}^{p}_{[(i+2)}& \quad\text{mod }4]+1\quad,\quad\vec{e}^{p}_{i1}=\frac{1}{2}\vec{v}^{p}_{[i\text{ mod }4]+1}\\ &\quad\vec{b}^{p}_{ij}=\vec{v}^{p}_{i}\times\vec{e}^{p}_{ij}.\end{array} \tag{12}\] These vector sets exhibit appropriate sum relations, as demonstrated in Equation (11) of reference [7]. For the electric and magnetic fields, there exists a distribution function \(f^{p(r)}_{ij}\) associated with each \(p,i,j\). Here, \(r=0\) denotes the electric field, while \(r=1\) represents the magnetic field. Moreover, two distribution functions are assigned to the rest direction for both electric and magnetic fields. Consequently, the total number of distribution functions amounts to \(2\times 2\times 3\times 4+2=50\). The macroscopic fields are derived from the distribution functions through the following equations: \[\begin{split}\epsilon_{r}\vec{E}^{\prime}=\vec{D}^{\prime}=\sum _{i=1}^{4}\sum_{p=0}^{2}\sum_{j=0}^{1}f^{p(0)}_{ij}\vec{e}^{p}_{ij}\\ \vec{B}=\sum_{i=1}^{4}\sum_{p=0}^{2}\sum_{j=0}^{1}f^{p(1)}_{ij}\vec{b}^{p}_{ij} \\ \rho_{c}=f^{(0)}_{0}+\sum_{i=1}^{4}\sum_{p=0}^{2}\sum_{j=0}^{1}f^{p(0)}_{ij} \end{split} \tag{13}\] It is remarkable to say that the displacement field \(\vec{D}^{\prime}\) is an auxiliary field. The real field which satisfies Maxwell's equations is \[\frac{\vec{D}}{\epsilon_{r}}=\vec{E}=\vec{E}^{\prime}-\frac{\mu_{0}}{4\epsilon _{r}}\overrightarrow{J}\,, \tag{14}\] where \(\vec{J}\) is the current source in Maxwell's equation. In particular, for ohmic elements \(\vec{J}=\sigma\vec{E}\), that is, \[\vec{J}=\frac{\sigma}{1+\frac{\mu_{0}\sigma}{4\epsilon_{r}}}\frac{\vec{D}^{ \prime}}{\epsilon_{r}}\,. \tag{15}\] Furthermore, the \(H-\)field is given by \[\vec{H}=\frac{\vec{B}}{\mu_{r}}\,. \tag{16}\] To avoid dissipative effects, the dynamics of the Boltzmann equation adopt the BGK scheme with a collision time \(\tau=1/2\) The equilibrium functions are defined as follows: \[\begin{split} f^{p(0)\text{eq}}_{ij}(\vec{x},t)&=\frac{1 }{16}\vec{v}^{p}_{i}\cdot\vec{J}+\frac{\epsilon_{r}}{4}\vec{E}\cdot\vec{e}^{p}_ {ij}+\frac{1}{8\mu_{r}}\vec{B}\cdot\vec{b}^{p}_{ij}\,,\\ f^{p(1)\text{eq}}_{ij}(\vec{x},t)&=\frac{1}{16} \vec{v}^{p}_{i}\cdot\vec{J}+\frac{1}{4}\vec{E}\cdot\vec{e}^{p}_{ij}+\frac{1}{8 }\vec{B}\cdot\vec{b}^{p}_{ij}\,,\\ f^{(0)\text{eq}}_{0}(\vec{x},t)&=f^{(1)\text{eq} }_{0}(\vec{x},t)=\rho_{c}\,.\end{split} \tag{17}\] The advection collision terms are applied conventionally. The BGK collision step: \[\begin{split} f^{p(r)\prime}_{ij}(\vec{x},t)&=2f^{p( r)eq}_{ij}(\vec{x},t)-f^{p(r)}_{ij}(\vec{x},t)\,,\\ f^{0\prime}_{0}(\vec{x},t)&=2f^{0eq}_{0}(\vec{x}, t)-f^{0}_{0}(\vec{x},t)\,.\end{split} \tag{18}\] The advection step: \[\begin{split} f^{0}_{0}\left(\vec{x},t+\Delta t \right)&=f^{0\prime}_{0}(\vec{x},t)\,,\\ f^{p(r)}_{ij}\left(\vec{x}+\vec{v}_{i}\Delta t,t+\Delta t \right)&=f^{p(r)\prime}_{ij}(\vec{x},t)\,.\end{split} \tag{19}\] Interpreting the physical meaning of the equilibrium functions is not straightforward. However, M. Mendoza and D. Munoz claim that these functions can be regarded as perturbations in the energy density. ### _Maxwell's equations obtained by the MM model_ Finally, the MM model successfully reproduces the Maxwell equations for linear non-dispersive media. Its proof is based on the Chapman-Enskog expansion in [7]. \[\begin{split}\frac{\partial\rho_{c}}{\partial t}+\nabla\cdot \vec{J}=0\,,\\ \nabla\times\vec{E}=-\frac{\partial\vec{E}}{\partial t}\,,\\ \nabla\times\vec{H}=\vec{J}+\frac{\partial\vec{D}}{\partial t}. \end{split} \tag{20}\] to ensure compliance with Gauss's laws, the equations in the MM model must be satisfied at time \(t=0\). In the case of vacuum, the speed of electromagnetic waves in the MM model is equal to \(1/\sqrt{2}\) in automaton units, where \(\epsilon_{0}=1\) and \(\mu_{0}=2\). It is worth noting that exceeding this speed limit can lead to numerical instabilities. The authors of the model explain that this phenomenon is attributed to the CFL (Courant-Friedrichs-Lewy) condition. The CFL condition sets a constraint on the time step in numerical simulations to maintain stability. ## III HV model The model proposed by A. Hauser and L. Verhey in [9], referred to as the HV model, was introduced based on the model proposed by Y. Liu and G. Yan in [16]. According to the authors, unlike the MM model, the HV model remains stable even in the presence of non-smooth transitions at interfaces between media with different permeability and permittivity. The discretized set of velocity directions is represented by a D3Q7 cubic cell, with velocity vectors \(\vec{v}_{i}\) for \(i=1,...,6\), and \(\vec{v}_{0}\) as the rest vector. Similarly, for each \(i\geq 1\), there is a pair of electromagnetic vectors \(\vec{e}_{i}\) and \(\vec{b}_{i}\). Furthermore, for each electromagnetic vector, there are three scalar distribution functions associated with the \(x\), \(y\), and \(z\) components, resulting in the representation of the distribution function as 3D vectors. The velocity vectors can be expressed as: \[\begin{split}\vec{v}_{1}&=(1,0,0),\quad\quad\vec{v} _{2}=(0,1,0),\quad\quad\vec{v}_{3}=(-1,0,0),\\ \vec{v}_{4}&=(0,-1,0),\quad\vec{v}_{5}=(0,0,-1), \quad\vec{v}_{6}=(0,0,1).\end{split} \tag{21}\] In total, there are \(2\times 6\times 3=36\) scalar distribution functions or 12 vector distribution functions. In the original paper [9], there is an error in equation (B1b). The correct equation should be: \[\sum_{i=1}^{6}\vec{v}_{\alpha,i}\cdot\vec{v}_{\beta,i}=2\delta_{\alpha\beta}. \tag{22}\] This error impacts the computations in the paper, including the distribution function. Therefore, considering the correction, the equilibrium distributions are given by: \[\begin{split}\vec{e}^{\text{eq}}_{i}&=\frac{1}{6} \left(\vec{D}-3\vec{v}_{i}\times\frac{\vec{B}}{\mu}\right),\\ \vec{h}^{\text{eq}}_{i}&=\frac{1}{6}\left(\vec{B}+3 \vec{v}_{i}\times\frac{\vec{D}}{\epsilon}\right).\end{split} \tag{23}\] Note that the equilibrium functions bear similarities to Maxwell's equations when considering \(\nabla\) as a vector that is a scalar multiple of \(\vec{v}_{i}\). This notion is particularly relevant in the context of harmonic plane waves, where \(\nabla\) is parallel to the wave vector (and pointing vector). This observation suggests that the equilibrium functions can be interpreted as perturbations of the Maxwell's equations themselves. Similar to the MM model, we use \(r=0,1\) to denote the electric and magnetic vectors, respectively, such that \(\vec{f}^{(0)}_{i}=\vec{e}_{i}\) and \(\vec{f}^{(1)}_{i}=\vec{b}_{i}\). The macroscopic fields can be obtained from the distribution functions as follows: \[\begin{split}\epsilon\vec{E}&=\vec{D}(\mathbf{r},t)= \sum_{i=1}^{6}\vec{e}_{i}(\mathbf{r},t),\\ \mu\vec{H}&=\vec{B}(\mathbf{r},t)=\sum_{i=1}^{6}\vec{h}_{ i}(\mathbf{r},t).\end{split} \tag{24}\] Similarly to the MM model, the relaxation time \(\tau=1/2\) is used, and the BGK collision step is applied: \[\begin{split}\vec{f}^{(r)^{\prime}}_{i}(\vec{x},t)=2\vec{f}^{(r) eq}_{i}(\vec{x},t)-\vec{f}^{(r)}_{i}(\vec{x},t).\end{split} \tag{25}\] The advection step is given by: \[\begin{split}\vec{f}^{(r)}_{i}(\vec{x}+\vec{v}_{i}\Delta t,t+ \Delta t)=\vec{f}^{(r)^{\prime}}_{i}(\vec{x},t).\end{split} \tag{26}\] ### _Maxwell's equations obtained by the HV model_ As demonstrated in [9], the equations obtained through the Chapman-Enskog expansion are: \[\begin{split}\nabla\times\vec{E}=-\frac{\partial\vec{H}}{\partial t }\,,\\ \nabla\times\vec{H}=\frac{\partial\vec{D}}{\partial t}\end{split} \tag{27}\] Following the proposal in [8, 14], we introduce a source term \(\vec{T}_{i}\) for each \(i\) in the electric distribution function (\(r=0\)), defined as: \[\vec{T}_{i}=-\frac{1}{2}\left(\vec{J}\cdot\vec{v}_{i}\right)\vec{v}_{i}, \tag{28}\] then the equation (25) is modified for \(r=0\) as: \[\vec{f}_{i}^{(0)^{\prime}}(\vec{x},t)=2\vec{f}_{i}^{(0)eq}(\vec{x},t)-\vec{f}_ {i}^{(0)}(\vec{x},t)+\vec{T}_{i}. \tag{29}\] Thus, we obtain the Maxwell's equations: \[\begin{split}\nabla\times\vec{E}&=-\frac{\partial \vec{E}}{\partial t}\,,\\ \nabla\times\vec{H}&=\vec{J}+\frac{\partial\vec{D} }{\partial t}\end{split} \tag{30}\] The HV model with the additional source term is referred to as the modified HV model. In the HV model, the speed of electromagnetic waves in the vacuum is equal to \(1/3\) in lattice units. Values greater than this limit can result in numerical instabilities. However, the authors of the model do not provide an explanation for this specific limit. ## IV Numerical test and comparison ### _Gaussian Pulse Crossing Dielectric Interface_ For the first comparison, we simulated a Gaussian pulse of the form \[\begin{split}\vec{B}&=E_{0}/C\exp(-(z-z_{0})^{2}/ (2\alpha^{2}))\,\hat{y}\,.\\ \vec{E}&=E_{0}\exp(-(z-z_{0})^{2}/(2\alpha^{2}))\, \hat{x}\,,\end{split} \tag{31}\] where \(\alpha=0.05\cdot L_{z}/\sqrt{2}\) and \(z_{0}=L_{z}/2-L_{z}/6\). The pulse travels from the vacuum \(\epsilon_{1,r}=1\) to a media with \(\epsilon_{2,r}=\epsilon_{r}=2.5\) and \(\mu_{r}=1\). According to the theory, the reflected and transmitted amplitude are such that \[\frac{A_{ref}}{A_{inc}}=\frac{\sqrt{r}-1}{\sqrt{r}+1}\,, \tag{32}\] \[\frac{A_{trans}}{A_{inc}}=\frac{2}{\sqrt{r}+1}\,, \tag{33}\] being \(r=\epsilon_{2,r}/\epsilon_{1,r}\) and, \(A_{ref}\) and \(A_{trans}\) the amplitude of the reflected and transmitted pulse, respectively [15]. Our simulations consisted of a one-dimensional grid, progressively refined while keeping the physical size unchanged. In figure (1), the shape of the transmitted and reflected pulse is depicted for different refinements using the MM model. For each grid, we measured the CPU time only for the integration of Maxwell's equations with the LBM, i.e., our measurement excluded the CPU time for the initialization of variables or printing. Similarly, we also gauged the relative error in the amplitude of the simulated reflected and transmitted pulse, obtaining the curves in figure (2). Importantly, the time measurement was made several times for each refinement to compute an average and error bars. The plots show that the MM scheme requires less CPU time for a given relative error. Remarkably, the relation CPU time-error for the transmitted pulse is erratic for both simulations. Furthermore, the error bars for both formulations are generally small, suggesting that this behavior is intrinsic in the two schemes. Fig. 1: Normalized electric amplitudes \(A/A_{0}\) of a Gaussian pulse crossing a dielectric interface simulated using the MM model for different grid refinements labeled by the number of cells \(N_{z}\). The region where \(z/L_{z}>0.5\) corresponds to a dielectric medium with \(\epsilon_{r}=2.5\), while the region where \(z/Lz<0.5\) corresponds to the vacuum (\(\epsilon_{r}=1.0\)). The reflected and transmitted pulse can be observed. Fig. 2: CPU time versus simulation error in the reflected electric amplitude for the dielectric pulse test. Regarding the behavior of the relative error for different refinement levels, it was observed in both schemes that the error decreases following a power law as the grid refinement is increased. However, it is notable that the MM model exhibits the most significant reduction in numerical errors with the implementation of grid refinement. A quick measurement of the memory used by the distribution functions was also conducted as a function of the relative errors (figure 3). The memory requirement was observed to increase for lower relative errors in both schemes. However, the MM model exhibits a higher memory requirement than the HV model due to the difference in the number of distribution functions, as mentioned in Sections II and III. ### _Skin Effect_ The Skin effect describes the exponential decay of a plane wave's amplitude after penetrating a conducting material. Theoretically, the amplitude of the electric field inside the conductor is given by the expression: \[A_{\text{Theo}}=A_{0}\exp(-z/\delta) \tag{34}\] where \(A_{0}\) represents the amplitude outside of the conductor, and \(\delta\) denotes the skin thickness, which can be expressed as [17, p. 130]: \[\delta=\sqrt{\frac{2}{\sigma\mu\omega}}\sqrt{\sqrt{1+\left(\frac{\omega \epsilon}{\sigma}\right)^{2}+\frac{\omega\epsilon}{\sigma}}}\,. \tag{35}\] where \(\omega\) corresponds to the angular frequency of the wave. We simulate a plane wave in one dimension, imposing the electromagnetic field at \(z=0\) with a wavevector to the right. \[\vec{B} =E_{0}/C\sin(\omega t)\hat{y} \tag{36}\] \[\vec{E} =E_{0}\sin(\omega t)\hat{x}\,,\] where \(\omega=2\pi/T\), and the period is \(T=17.68\cdot 10^{-3}L_{z}/C\). A conductive medium is placed at \(z/L_{z}\geq 0.25\) with conductivity \(\sigma=0.1\cdot\epsilon/T\). The behavior of the oscillating wave and its amplitude after penetrating the conductor is presented in the figure (4) for the HV model. We proceeded similarly to the Gaussian pulse to compare the errors and computing time. We utilized a one-dimensional grid with several refinements and measured the time and error for each grid, computing an average and error for the time. The only difference lies in the error measurement, which was modeled with a cost function \[C=\sum_{j}\left(\frac{A_{\text{sim}}-A_{\text{theo}}}{A_{\text{theo}}}\right) ^{2} \tag{37}\] where \(A_{\text{sim}}\) and \(A_{\text{theo}}\) are the simulated and theoretical electric field amplitudes after penetrating the conductor, and the sum is carried out over the cells occupied by the conductor. As figure (5) shows, where both time measurements overlap, the MM scheme produces a smaller cost; additionally, the polynomial fit shows that for ever smaller costs, the MM model requires less computation time. Fig. 4: Normalized electric field amplitude inside a conductor simulated with the HV model. A plane wave encounters a conductive medium at \(z/L_{z}\geq 0.25\) with conductivity \(\sigma=0.1\cdot\epsilon/T\). Fig. 3: Memory usage of the distribution functions for both models as a function of the relative errors for the dielectric pulse test. ### _Electric Dipole Radiation_ An oscillating electric or magnetic dipole represents one of the simplest antenna systems available and is extensively treated in many textbooks [15], the reason why is an ideal example to test performance for simulation of radiating systems. The simulation involves a Hertz dipole with specific parameters. Instead of a point source, a localized density current associated with a dipole is simulated. The current density is given by \[J=J_{0}\exp\left(-\alpha\left(\vec{x}-\vec{x_{d}}\right)^{2}\right)\sin(\omega t )\hat{z}. \tag{38}\] where \(\alpha=0.5\), the amplitude of the current density \(J_{0}\) is set to 0.0001 and \(\vec{x_{d}}\) are the dipole coordinates. The period \(T\) is calculated as \(\frac{17.68}{100.0}\frac{Lz}{C}\), and \(\omega\) is set to \(\frac{2\pi}{T}\). The associated amplitude of the dipole is [7][15, Chapter 9] \[p=\frac{J_{0}}{\omega}\left(\frac{\pi}{\alpha}\right)^{1.5}. \tag{39}\] The vacuum impedance \(Z_{0}\) is calculated as \(\sqrt{\frac{\mu_{0}}{\epsilon_{0}}}\), and the wavelength \(\lambda\) is determined as \(c\cdot T\), being \(c\) the speed of light. The maximum simulated time is \(t_{\text{max}}=T\cdot\frac{70}{25.0}\) in the time test comparisons. To obtain the radiation patterns, measurements are taken at a radius \(R=\lambda\cdot n\), where \(n=\frac{Lz}{2\lambda}-2\) (two wavelengths less before reaching the boundary of the lattice domain). We use \(t_{\text{max}}=\frac{R+2\lambda}{C}\), and measure the maximum energy flux during a whole period in the time interval \(R+\lambda\leq t\leq R+2\lambda\). The normalized theoretical radiation pattern in spherical coordinates \((\phi,\theta)\), taking the z-axis in the direction of the dipole, is \(\sin^{2}(\theta)\), see [15, Chapter 9]. In figure (7) is plotted the theoretical and simulated radiation pattern in a plane of constant \(\phi\). Finally, we gauged the same cost function (37) and CPU time for the error and time measurement. Specifically, we compared the simulated and theoretical electric and magnetic field amplitudes along a line perpendicular to the dipole moment. From figure (8) is clear that in the studied interval, both models had very similar performance. Even more remarkable, for a given computation time, the electric and magnetic field costs are not the same for a given LB; furthermore, in the limit of small costs, MM performs better for the magnetic field but worse for the electric field. Fig. 5: CPU time versus cost for the Skin effect. Fig. 6: Comparison of magnetic field simulations using the MM model. ### _Gaussian Pulse Crossing Non-uniform media_ The setup was similar to the section on the dielectric pulse crossing one interface. However, in this case, we have four different mediums with relative permittivities of \(1,1.3,2,3\). The corresponding interfaces are located at \(z_{0},z_{0}+d,z_{0}+3d\), respectively, where \(z_{0}=\frac{L_{x}}{2}\) and \(d=\frac{L_{y}}{20}\). The simulation space consisted of \(L_{z}\) cells, and the pulse was positioned at \(z_{0}-\frac{L_{x}}{30}\) using an \(\alpha=\frac{L_{x}}{100\sqrt{2}}\) in equation (31). To achieve an error of 0.03 \(\%\) between the simulated and theoretical transmitted pulses using both methods, the MM model requires 18 seconds of CPU time and 4000 cells, which corresponds to 200,000 distribution functions. On the other hand, the HV model only requires 10 seconds of CPU time and 3000 cells, totaling 108,000 scalar distribution functions. Figures 6 and 7 depict the pulse after crossing all the interfaces. ## V Conclusion Both models are suitable alternatives for simulating electrodynamics phenomena. They exhibited similar behavior in Fig. 8: CPU time versus cost for the radiating dipole. Fig. 10: Dielectric pulse crossing several interfaces using the MM model. Fig. 7: Normalized theoretical and simulated slice of the radiation pattern of the Hertz dipole using the HV model. Fig. 9: Dielectric pulse crossing several interfaces using the HV model. terms of computation time and error. However, the HV model outperformed the MM model in a test involving a Gaussian pulse crossing a non-uniform medium. Despite this, the MM model generally performed slightly better than the HV model in most tests. Nevertheless, the HV model demonstrated lower memory consumption due to its reduced number of distribution functions. Therefore, the HV model may be preferred as it requires less memory, while yielding similar errors to the MM model with equal CPU times. It appears that reducing the number of distribution functions does not significantly affect overall performance but improves memory requirements. The trade-off of the HV model is that it requires more iterations because the speed of light in automaton units is lower than that of the MM model. Future works could explore more LBMs and further test the validity of this claim. ## Authorship contribution statement JR implemented the Lattice-Boltzmann algorithms and proposed the modified HV model. AS and JH performed the performance tests. AS and JR conceived the project. All three authors wrote and revised the article. ## Acknowledgment The authors would like to express their gratitude to Dr. Jose Daniel Munoz for his valuable comments and insights throughout the development of this research. Additionally, they would like to thank Dr. Rafael Rey for suggesting the dielectric pulse test conducted at multiple interfaces and to thank Dr.Andreas Hauser for clarifying some of our doubts. Their contributions greatly enhanced the quality and depth of this study. In addition, ChatGPT and Grammarly were utilized for writing style grammar correction. ## APPENDIX A: Source term in the HV model \[\sum_{i}\left(\partial_{t}f_{i}^{\text{eq}}+\sum_{\alpha}v_{\alpha i}\partial _{\alpha}f_{i}^{\text{eq}}\right)\approx\sum_{i}T_{i}\,. \tag{40}\] Following this equation for the equilibrium functions \(\vec{f}_{i}^{(0)}\) we obtain in the left-hand side \[\frac{\partial\vec{D}}{\partial t}-\nabla\times\vec{H}\,. \tag{41}\] On the other hand, we get in the right-hand side \[\begin{split}\sum_{i=1}^{6}\vec{T}_{i,\alpha}&=- \frac{1}{2}\sum_{i=1}^{6}\left(\vec{J}\cdot\vec{v}_{i}\right)\vec{v}_{i, \alpha}\,,\\ &=-\frac{1}{2}\sum_{i=1}^{6}\vec{J}_{\beta}\vec{v}_{i,\beta} \vec{v}_{i,\alpha}\,,\\ &=-\frac{1}{2}\vec{J}_{\beta}\sum_{i=1}^{6}\vec{v}_{i,\beta} \vec{v}_{i,\alpha}\,,\\ &=-\frac{1}{2}\vec{J}_{\beta}(2\delta_{\alpha,\beta})\,,\\ &=-\vec{J}\,.\end{split} \tag{42}\] Thus the Ampere-Maxwell equation is obtained. \[\frac{\partial\vec{D}}{\partial t}-\nabla\times\vec{H}=-\vec{J}\] ## Appendix B: Pseudo-code In the following pseudocode, we outline the steps involved in the Lattice Boltzmann Automata simulation for solving a specific problem using the MM and modified HV models. The pseudocode describes the main algorithmic steps involved in the simulation, including initialization, collision, advection, and analysis. The specific equations and considerations for different models are also highlighted. ``` 1:At \(t=0\), impose all fields \(\vec{B},\vec{E},\vec{J}\). 2:Initialize all the distribution functions as the equilibrium functions evaluated in \(\vec{B},\vec{E}\) and \(\vec{J}\). 3:Impose fields in the cells, if required. 4:for\(t=1\) to \(t_{\text{max}}\)do 5: // In the collision, compute the macroscopic fields by summing the distribution functions, i.e, using eqs. (13) to (16) for MM model and (24) for HV model 6: Collision 7: Impose fields if required. 8: Advection 9:endfor 10: Analyze and plot. ``` **Note:** Sometimes, for instance, in the skin effect and in the Hertz dipole, the current is imposed during the collision step. In the MM model, the current is computed with equation (15) after computing \(\vec{D^{\prime}}\) with equation (13), and then it is used in the equilibrium functions (17) during the collision step as in equation (18). On the other hand, in the HV model, the current is computed (for example as \(\vec{J}=\sigma\vec{E}\)) and then is passed to the source term \(T\) as in equation (28).
2309.04652
A Further Study of Linux Kernel Hugepages on A64FX with FLASH, an Astrophysical Simulation Code
We present an expanded study of the performance of FLASH when using Linux Kernel Hugepages on Ookami, an HPE Apollo 80 A64FX platform. FLASH is a multi-scale, multi-physics simulation code written principally in modern Fortran and makes use of the PARAMESH library to manage a block-structured adaptive mesh. Our initial study used only the Fujitsu compiler to utilize standard hugepages (hp), but further investigation allowed us to utilize hp for multiple compilers by linking to the Fujitsu library libmpg and transparent hugepages (thp) by enabling it at the node level. By comparing the results of hardware counters and in-code timers, we found that hp and thp do not significantly impact the runtime performance of FLASH. Interestingly, there is a significant reduction in the TLB misses, differences in cache and memory access counters, and strange behavior is observed when using thp.
Catherine Feldman, Smeet Chheda, Alan C. Calder, Eva Siegmann, John Dey, Tony Curtis, Robert J. Harrison
2023-09-09T00:45:05Z
http://arxiv.org/abs/2309.04652v1
# A Further Study of Linux Kernel Hugepages on A64FX with FLASH, an Astrophysical Simulation Code ###### Abstract. We present an expanded study of the performance of FLASH when using Linux Kernel Hugepages on Ookami, an HPE Apollo 80 A64FX platform. FLASH is a multi-scale, multi-physics simulation code written principally in modern Fortran and makes use of the PARAMESH library to manage a block-structured adaptive mesh. Our initial study used only the Fujitsu compiler to utilize standard hugepages (hp), but further investigation allowed us to utilize hp for multiple compilers by linking to the Fujitsu library libmpg and transparent huepages (hp) by enabling it at the node level. By comparing the results of hardware counters and in-code times, we found that hp and thp do not significantly impact the runtime performance of FLASH. Interestingly, there is a significant reduction in the TLB misses, differences in cache and memory access counters, and strange behavior is observed when using thp. high performance computing, A64FX architecture, astrophysics + Footnote †: c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-560398-2/23/07. [https://doi.org/10.1145/3569951.3597583](https://doi.org/10.1145/3569951.3597583) + Footnote †: c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-560398-2/23/07. + Footnote †: c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-560398-2/23/07. + Footnote †: c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-560398-2/23/07. + Footnote †: c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-560398-2/23/07. + Footnote †: c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-560398-2/23/07. + Footnote †: c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-560398-2/23/07. + Footnote †: c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-560398-2/23/07. + Footnote †: c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-560398-2/23/07. + Footnote †: c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-560398-2/23/07. + Footnote †: c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-560398-2/23/07. + Footnote †: c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-560398-2/23/07. + Footnote †: c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-560398-2/23/07. + Footnote †: c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-560398-2/23/07. ## 1. Introduction ### Ookami and A64FX The A64FX processor expects to provide high performance and reliability for memory-intensive applications while maintaining a good performance to power ratio. The appeal of A64FX, currently the backbone of the Fugaku supercomputer, is that it eliminates the need to port to accelerators such as GPUs to improve performance. Ookami is an open-access resource featuring Fujitsu A64FX processors provided under the US NSF's ACCESS program and managed jointly by Stony Brook University and the University at Buffalo. Ookami is an HPE/Cray Apollo80 system with 176 A64FX Fujitsu compute nodes, each with 32GB high-bandwidth memory (HBM) and a 512GB SSD. Ookami's FX700 series A64FX processors consist of four core memory groups each with 12 cores, resulting in a total of 48 cores, 64KB L1 cache per core, and SMB L2 cache shared between the cores and runs at 1.8 GHz. The nodes have 32 GB of high-bandwidth memory, where 5 GB are reserved for the OS, leaving 27 GB for the user. These processors use the ARMv8.2-A Scalable Vector Extension (SVE) SIMD instruction set with a 512 bit vector implementation, allowing for vector lengths anywhere from 128-2048 bits and enabling vector length agnostic programming (Cheng et al., 2019). ### Thermonuclear Supernovae with FLASH Our application is a bright stellar explosion known as a thermonuclear (Type Ia) supernova (SN Ia), which we model using FLASH, a software instrument for addressing multi-scale, multi-physics applications (Huget et al., 2017). FLASH is written in modern Fortran, parallelized through MPI, and implements AMR (Adaptive Mesh Refinement) using the PARAMESH library. Full-star hydrodynamics simulations such as these are memory and computationally intensive, making our application a good candidate to try on A64FX. Early study of the performance of FLASH on Ookami may be found in (Huget et al., 2017), and similar experiences are reported in (Huget et al., 2017; Fortran et al., 2018). The unoptimized performance on A64FX did not compare well to that found on traditional X86 architectures (Bahcall et al., 2019). Profiling indicated that FLASH spent about half of its time in the hydrodynamics routines, and within those 20% of the time was spent in routine for the material equation of state (EOS) (Huget et al., 2017). We therefore settled on two test problems for further exploration: a 2-d SN Ia problem (that exercises the material EOS) and, looking ahead to our science goal of 3-d SN Ia simulations, a 3-d hydrodynamics simulation, the Sedov explosion problem. We dubbed these two tests "EOS" and "3-d Hydro", and details of both the EOS and hydrodynamics modules may be found in the original FLASH paper (Huget et al., 2017). Our motivation for investigating huge memory pages was both the observed bountiful DTLB misses, and FLASH's memory stride. PARAMESH manages a block-structured adaptive mesh, where each block is separated into smaller cells that each store requisite variables, such as density and temperature, consecutively in an array. Thus there is a stride in memory when gathering the same variable (i.e. density) from different cells, and a larger stride between blocks. ### Previous Work with Hugepages Here, we explore both standard and transparent hugepages. Modern processors manage memory in blocks known as pages. Hugepage support was integrated into the Linux kernel in version 2.6. These pages are larger in size than regular pages, which in theory means there are fewer pages for the OS to manage as there is a finite amount of memory. Depending on the OS, hugepages come in different sizes. Managing these pages can be challenging and at times require changes to application code. To that extent, Transparent HugePages were implemented in the Linux kernel where the the "transparent" hugepages are an abstraction layer managed by the kernel, where the kernel is responsible for their creation, management and use (Bahcall et al., 2019). Transparent hugepages are by default disabled on Ookami. Other studies that have tested the performance effects of using hugepages on A64FX include (Fortran et al., 2018), (Bahcall et al., 2019), and (Bahcall et al., 2019), and suggest certain environment variable settings for best results. (Bahcall et al., 2019) explicitly shows that the greatest speedup gain from enabling hugepages is seen for a latency-bound section of their simulation, but is only a 1.11 \(\times\) speedup. (Bahcall et al., 2019) found that an increase in L2 TLB misses caused performance degradation when using normal 64 KiB pages, but didn't affect the performance when using 2 MiB hugepages. This work extends our initial study of using hugepages with just the Fujitsu compiler, which demonstrated that hugepages did not provide a significant speedup (Bahcall et al., 2019). Our speculation was that TLB misses might not make much of a difference because the A64FX has hardware to ameliorate the cost of TLB misses by avoiding OS calls, or because the FLASH data access patterns do not trigger a performance penalty. ## 2. Testing Use of Hugepages We ran the "EOS" and "3-d Hydro" test problems, as described above. The EOS test ran a \(\sim\) 1 GB 2-d SN Ia simulation for 50 time steps and the 3-d Hydro test ran a \(\sim\) 9 GB Sedov explosion simulation for 2 time steps. Both tests were run on 1 and 12 cores. We used the round robin distribution of processors for the runs on 12 cores because FLASH Morton orders the blocks to be spatially located together. Filling one core memory group first will put blocks together but round robin spreads them as much as possible. We ran each test 7 times, removed the highest and lowest run times, and averaged the results from the remaining 5. To investigate the effects of hugepages, we used the Fujitsu hardware counters (Feldman et al., 2017) of the Performance Application Programming Interface (PAPI) (Bahcall et al., 2019) to monitor cycles, TLB misses, and memory access, and used FLASH's internal timers to obtain runtimes. Tests consisted of running the PAPI-instrumented code without hugepages (no hp), with 2MB standard hugepages (hp), and with 2MB transparent hugepages (hp). To use (t)hp, we linked the GCC and ARM compilers to Fujitsu's libmpg library, and used compiler flags for the Fujitsu compiler. A detailed description of the runtime environment, including library versions, compiler flags, linking to PAPI and Fujitsu's libmpg library, and how to enable/disable (t)hp can be found in Appendix A. ## 3. Results First, we saw how the runtime, main memory bandwidth (MMB), and DTLB miss rate changed with huge page use. To do this, we used the following PAPI counters by setting PAPI_EVENTS to PERF_COUNT_HW_CPU_CYCLES, PERF_COUNT_HW_CACHE_MISSES,OTLB-L-OAD-MISSES. The results from the 1 processor runs are shown for the EOS test in Figure 0(a), and for the 3-d hydro in Figure 0(b) - the 12 core runs exhibited similar patterns and are therefore not shown. The figures show the ratios of runs with and without (t)hp, e.g. values around 1 indicate no changes, values \(<\) 1 indicate a reduction by using (t)hp, and values \(>\) 1 an increase. It is important to note that only a portion of our code is instrumented with PAPI, namely the EOS calls for the EOS test, and the hydrodynamics calls for the 3-d hydro test. Therefore, these counters represent the behavior in that specific module, rather than the software as a whole, while the timers show the full runtime. As expected and seen in our last study (Bahcall et al., 2019), in both cases the hardware cycles, MMB, and overall runtime are about the same when using hp, thp, or no hp. However, using hp drastically decreases the DTLB miss rate, while using thp does not have as much of an effect. Using the proved to be an interesting struggle. Thp would not enable in our 1 core runs with the Fujitsu compiler for the EOS test, and is therefore not shown in Figure 0(a). We finally saw thp usage by mapping the process to NUMA node 1 instead of NUMA node 0. When running the 3-d hydro application compiled with GCC on 12 cores, the node would reset in the middle of execution when thp was enabled. These difficulties using thp will be investigated in the future. We also observed the change in selected hardware counters and their derived rates when enabling hp. We found that most of these counters varied by only around 1%, so we report ratios of counters from a single run rather than an average as before. A64FX has 6 hardware counters, so these results were collected across multiple runs. For ease of interpretation, we ran these exploratory tests on 1 core only. The ratios of hp : no hp for the most relevant values are shown in Table 1, and full tables showing all measured counters and rates can be found in Appendix B. As before, values < 1 indicate a reduction by using hp, and values > 1 an increase. As expected, the TLB-related counters showed the biggest change. Although the L2-DTLB showed the greatest improvement when hp was enabled, 99% of the total DTLB misses resulted in an L1-DTLB miss, and only < 1% resulted in a L2-DTLB miss. The instruction TLBs were less affected. GCC typically exhibited a greater decrease in TLB refills than Fujitsu. The runtime, number of L1D and L2D cache misses, and the bandwidth were relatively unaffected by hp use. For the EOS test, the number of cycles spent waiting for memory access completion (LD_COMP_WAIT) is smaller when hp is enabled, but for the GCC compiler, the latency of L2 cache miss processing is higher. For the 3-d Hydro test with GCC, enabling hp slightly increased the total number of cpu cycles as well as (LD_COMP_WAIT). Overall, enabling hp has the overwhelming effect of reducing TLB misses, but not much else. The Fujitsu compiler seems to have less prominent changes in its counters than GCC. We also compared the single core results between compilers, namely to the Fujitsu compiler, which by far produced the fastest runtime. Figure 2 shows the ratio between the Fujitsu and other compilers (purple for GCC, pink for ARM) for each test problem (darker colors for EOS) and type of hugege (solid for no hp, dotted for hp, and striped for thp), using the same dataset as that from Figures 1a and 1b. Here, values < 1 indicate a reduction due to use of the Fujitsu compiler, and values > 1 indicate an increase. Regardless of hugege use, the Fujitsu compiler was nearly twice as fast as the others, and nearly four times as fast as ARM for the EOS test. The Fujitsu compiler also executes about half of the hardware cycles. For the EOS test, the Fujitsu compiler has a 2.5-3x greater MMB than the others; this is about 1.5-2x for 3-d Hydro. This is true \begin{table} \begin{tabular}{l|r r|r r} & EOS & \multicolumn{2}{c}{3-d Hydro} \\ Description & GCC & Fujitsu & GCC & Fujitsu \\ \hline \hline DTLB-LOAD-MSESES & 0.03 & 0.06 & 0.11 & 0.31 \\ L1D\_TLB\_REFILL & 0.03 & 0.05 & 0.11 & 0.31 \\ L20\_TLB\_REFILL & 0.0002 & 0.01 & 0.03 & 0.03 \\ L1T\_TLB\_REFILL & 0.71 & 1.01 & 0.04 & 0.65 \\ L2T\_TLB\_REFILL & 1.00 & 0.99 & 0.59 & 0.16 \\ \hline L1D\_CACHE\_REFILL & 0.96 & 0.99 & 1.00 & 1.00 \\ L2D\_CACHE\_REFILL & 1.08 & 1.06 & 0.96 & 1.03 \\ LD\_COMP\_WAIT & 0.71 & 0.78 & 1.17 & 0.99 \\ LD\_COMP\_WAIT\_L1\_MISS & 0.82 & 0.78 & 0.94 & 1.00 \\ LD\_COMP\_WAIT\_L2\_MISS & 0.90 & 0.96 & 0.97 & 0.98 \\ \hline Average latency of L1D cache miss processing & 1.03 & 1.03 & 1.00 & 1.00 \\ Average latency of L2 cache miss processing & 2.53 & 1.00 & 1.03 & 0.96 \\ Bidirectional effective bandwidth between L1D cache and L2 cache & 1.01 & 1.07 & 0.91 & 1.00 \\ Bidirectional effective bandwidth between L2 cache and memory & 1.10 & 1.11 & 0.87 & 1.04 \\ \hline \end{tabular} \end{table} Table 1. Counters and derived rates for single core runs, for each test problem and two compilers. Values shown are ratios with : without standard hugepages enabled. Counter descriptions and rate calculations can be found in [(10)]. Figure 1. Ratios of runs with and without hugepages for each compiler for the (a) EOS test and (b) 3-d hydro test on 1 core even though the Fujitsu compiler exhibits a higher DTLB miss rate, which interestingly increases with huge page use. This rate increase says nothing about the relative TLB misses between the compilers, however, so for a better comparison we look at the ratios between the raw counter values and derived rates. Table 2 shows the ratio between the Fujitsu and GCC compilers of a subset of counters, for each test problem with hp and no hp enabled. We chose to compare only these two compilers since the ARM compiler is too slow to be a viable choice for production runs, and we only look at no hp and hp runs because hp did not even achieve the goal of reducing TLB misses. Again, values \(<\) 1 indicate a reduction due to use of the Fujitsu compiler, and values \(>\) 1 indicate an increase. The data used is the same as that used to create Table 1, and full tables showing all measured counters and rates can be found in Appendix B. Although the Fujitsu compiler has a much higher TLB miss rate than the GCC compiler in most cases, it has lower total TLB misses. The Fujitsu compiler also has a higher (1.6-2.9 \(\times\)) memory bandwidth and lower latency. It has the same number of cache misses, but spends less total cycles waiting for memory access than the GCC compiler. \begin{table} \begin{tabular}{l c c c c} & EOS & \multicolumn{2}{c}{3-d Hydro} \\ Description & Hp & No hp & Hp & No hp \\ \hline \hline DTLB-LOAD-MISSES & 0.66 & 0.39 & 2.20 & 0.82 \\ L1D\_TLB\_REFILL & 0.55 & 0.39 & 2.52 & 0.86 \\ L2D\_TLB\_REFILL & 0.77 & 0.02 & 0.93 & 1.02 \\ L1I\_TLB\_REFILL & 0.70 & 0.49 & 0.63 & 0.04 \\ L2I\_TLB\_REFILL & 1.00 & 1.01 & 0.64 & 2.33 \\ \hline L1D\_CACHE\_REFILL & 0.94 & 0.90 & 0.91 & 0.92 \\ L2D\_CACHE\_REFILL & 1.04 & 1.06 & 1.08 & 1.00 \\ LD\_COMP\_WAIT & 0.50 & 0.46 & 0.66 & 0.78 \\ LD\_COMP\_WAIT\_L1\_MISS & 0.56 & 0.58 & 2.58 & 2.43 \\ LD\_COMP\_WAIT\_L2\_MISS & 0.82 & 0.77 & 2.11 & 2.09 \\ \hline Average latency of L1D cache miss processing & 0.90 & 0.90 & 1.04 & 1.03 \\ Average latency of L2 cache miss processing & 0.25 & 0.64 & 0.89 & 0.94 \\ Bidirectional effective bandwidth between L1D cache and L2 cache & 2.76 & 2.59 & 1.63 & 1.49 \\ Bidirectional effective bandwidth between L2 cache and memory & 2.88 & 2.85 & 1.91 & 1.61 \\ \end{tabular} \end{table} Table 2. Counters and derived rates for single core runs, for each test problem with either standard hp or no hp enabled. Values shown are ratios for Fujitsu : GCC compiler. Counter descriptions and rate calculations can be found in [10]. Figure 2. Ratios between the Fujitsu and other compilers (GCC and ARM), for each application and type of huge page. ## 4. Summary and Conclusions We found that for all compilers and both test problems, the use of both standard and transparent huge pages did not significantly affect the performance of FLASH, despite a drastic decrease in TLB misses. This suggests that TLB misses indeed do not have an impact on the performance. This may be due to the A64FX's translation table cache (TTC), which decreases the latency of virtual to physical address translation (Hariison et al., 2021). Higher cache miss rates when using the Fujitsu compiler are offset by higher memory bandwidth and lower latency, which results in a shorter runtime. The Fujitsu compiler demonstrates 2-4 times better performance than the GCC and ARM compilers. Although the Fujitsu compiler uses only half the total cycles of the GCC compiler, both compilers have the same number of cache misses. Since the bandwidth is \(\sim 2\)\(\times\) larger for Fujitsu, this means that less time is spent waiting for memory access completion (ie in LD_COMP_WAIT), thereby shortening the runtime. However, only \(\sim 20\) % - 40 % of the total cycles are spent in LD_COMP_WAIT, so a higher bandwidth can't completely account for the faster runtime. A contributing factor could be that Fujitsu may have better optimizations that take advantage of the A64FX hardware. This includes the use of SVE - the Fujitsu executable uses the SVE registers \(21\times\) more than GCC. The reason why Fujitsu produces the fastest executable, and what the performance bottlenecks are, will be explored in detail in future work. ## Acknowledgments Ookami is supported by the US NSF grant #1927880, and this research was supported in part by the US DOE under grant DE-FG02-8/ER40317. FLASH was developed in part by the US DOE NSA-ASC and OSC-ASCR-supported Flash Center for Computational Science at the University of Chicago. The authors gratefully acknowledge the generous support of the Ookami community. The authors also thank Jens Domke at RIKEN for very helpful suggestions.
2301.00216
An Efficient Hierarchical Kriging Modeling Method for High-dimension Multi-fidelity Problems
Multi-fidelity Kriging model is a promising technique in surrogate-based design as it can balance the model accuracy and cost of sample preparation by fusing low- and high-fidelity data. However, the cost for building a multi-fidelity Kriging model increases significantly with the increase of the problem dimension. To attack this issue, an efficient Hierarchical Kriging modeling method is proposed. In building the low-fidelity model, the maximal information coefficient is utilized to calculate the relative value of the hyperparameter. With this, the maximum likelihood estimation problem for determining the hyperparameters is transformed as a one-dimension optimization problem, which can be solved in an efficient manner and thus improve the modeling efficiency significantly. A local search is involved further to exploit the search space of hyperparameters to improve the model accuracy. The high-fidelity model is built in a similar manner with the hyperparameter of the low-fidelity model served as the relative value of the hyperparameter for high-fidelity model. The performance of the proposed method is compared with the conventional tuning strategy, by testing them over ten analytic problems and an engineering problem of modeling the isentropic efficiency of a compressor rotor. The empirical results demonstrate that the modeling time of the proposed method is reduced significantly without sacrificing the model accuracy. For the modeling of the isentropic efficiency of the compressor rotor, the cost saving associated with the proposed method is about 90% compared with the conventional strategy. Meanwhile, the proposed method achieves higher accuracy.
Youwei He, Jinliang Luo
2022-12-31T15:17:07Z
http://arxiv.org/abs/2301.00216v1
# An Efficient Hierarchical Kriging Modeling Method for High-dimension Multi-fidelity Problems ###### Abstract Multi-fidelity Kriging model is a promising technique in surrogate-based design as it can balance the model accuracy and cost of sample preparation by fusing low- and high-fidelity data. However, the cost for building a multi-fidelity Kriging model increases significantly with the increase of the problem dimension. To attack this issue, an efficient Hierarchical Kriging modeling method is proposed. In building the low-fidelity model, the maximal information coefficient is utilized to calculate the relative value of the hyperparameter. With this, the maximum likelihood estimation problem for determining the hyperparameters is transformed as a one-dimension optimization problem, which can be solved in an efficient manner and thus improve the modeling efficiency significantly. A local search is involved further to exploit the search space of hyperparameters to improve the model accuracy. The high-fidelity model is built in a similar manner with the hyperparameter of the low-fidelity model served as the relative value of the hyperparameter for high-fidelity model. The performance of the proposed method is compared with the conventional tuning strategy, by testing them over ten analytic problems and an engineering problem of modeling the isentropic efficiency of a compressor rotor. The empirical results demonstrate that the modeling time of the proposed method is reduced significantly without sacrificing the model accuracy. For the modeling of the isentropic efficiency of the compressor rotor, the cost saving associated with the proposed method is about 90% compared with the conventional strategy. Meanwhile, the proposed method achieves higher accuracy. **Keywords**: surrogate; multi-fidelity model; Hierarchical Kriging; high-dimension modeling ## 1 Introduction Surrogate model, also known as metamodels or response surfaces, has been widely used in numerical optimization or uncertainty quantification for expensive engineering problems to replace the time-consuming simulation models, aiming to relieve the computational burden (HAN et al., 2020; Shu et al., 2019; Zhou et al., 2020). Various types of surrogate model have been developed, such as polynomial response surface models (Chatterjee et al., 2019; Hawchar et al., 2017), support vector regression models (Shi et al., 2020; Xie et al., 2018; Zhou et al., 2015), radial basis function models (Chen et al., 2022; Liu et al., 2022; Song et al., 2019), neural networks (Yegnanarayana, 1994) and Kriging models (J. Forrester et al., 2006). Among them, Kriging gains popularity as it can not only provides the predictions of the expensive models but also estimate the prediction errors. Based on Kriging, optimization methods for single- and multi-objective problems (Schonlau et al., 1998; Zhan et al., 2017; Zhan and Xing, 2020) and global sensitivity analysis methods (Cheng et al., 2020; Van Steenkiste et al., 2019) have been developed to solve practical problems for aerodynamic (He et al., 2020; Wang et al., 2018) or structural (Viana et al., 2014; Zhou et al., 2020) design applications. Despite the continuous advance in Kriging-based modeling methods, the associated prohibitive computational cost of building sufficiently accurate Kriging for high-dimension applications remains an important challenge. Specifically, the cost of building the Kriging model for high-dimension problems is twofold. Firstly, to construct a sufficient accurate model, the number of sample data required will increase sharply. This will call for a large number of expensive simulations. Therefore, the cost of sample data preparation will be prohibitive for high-dimension problems. Secondly, with the increase of sample set, the cost for fitting the model will increase exponentially. For problems with plenty of parameters, the cost of model construction will be unacceptable. In extreme applications, the process of model tuning might even be more expensive than engineering simulations. Incorporating cheap auxiliary information has been demonstrated to be a promising strategy to alleviate the computational burden of data preparation. Such cheap information usually refers to low-fidelity data or inexpensive gradients. In this paper, Kriging assisted with cheap low-fidelity data, termed as multi-fidelity Kriging, is concerned for its extensive application in many fields such as numerical design optimization (He et al., 2021, 2022; Lin et al., 2022) or modeling of complex simulation problem (Lin et al., 2021). Co-Kriging (Kennedy & O'Hagan, 2000), Hierarchical Kriging (Han & Gortz, 2012), generalized hierarchical Co-Kriging (Zhou et al., 2020) and etc. are typical multi-fidelity Kriging surrogate models. Among them, the Hierarchical Kriging (HK) model has gained popularity because its merit of being as accurate as Co-Kriging and as simple as the correction-based methods. For instance, the HK model has been adopted to develop the variable-fidelity Efficient Global Optimization method (HAN et al., 2020). Though the cost of sample data preparation can be decreased by incorporating cheap low-fidelity data, the construction cost of multi-fidelity Kriging model remains inappropriate or even computational prohibitive for high-dimension problems. This is usually known as the curse of dimensionality for metamodels, for either single- or multi-fidelity model. To attack the curse for single-fidelity Kriging model, Toal et al. (Toal et al., 2008) suggested to use isotropic correlation function (i.e. the same hyperparameter for each variable) for high-dimension problems. Empirical comparison indicates that tuning a reduced set of hyperparameter might outperform an inaccurately tuned out but complete set of hyperparameters. Based on this observation, Zhao et al. (Zhao et al., 2020) developed an efficient Kriging modeling method based on maximal information coefficient. The relative magnitudes of hyperparameter are estimated by maximal information coefficient, or the importance of each variable is represented by the maximal information coefficient. Then this knowledge is utilized to reformulate the maximum likelihood estimation problem to reduce the dimensionality. Therefore, the modeling efficiency can be improved. It should be noted that if values of maximal information coefficient reflecting the importance of a variable is inconsistent with the reality, biased values of maximal information coefficient may even mislead the tuning process of hyperparameter. Instead of using maximal information coefficient, the distance correlation is adopted to represent the variable importance in (Fu et al., 2020). Furthermore, a high-dimension Kriging modeling method by utilizing the Partial Least Squares regression technique was developed in (Bouhlel et al., 2016, 2016). Partial Least Squares regression is adopted to reveal how inputs depend on responses and reduce the dimension. In this method, the number of hyperparameters is reduced to a maximum of four and the modeling time can be reduced remarkably. For multi-fidelity models based on Kriging, a multi-fidelity high dimensional model representation (MF-HDMR) is developed to efficiently approximate high dimensional problems (Cai et al., 2017). However, empirical thresholds are required to determine the linearity of the first-order component function and to test whether the second- or higher-order HDMR component exists or not. Moreover, the modeling time of the MF-HDMR over the test problems in the numerical experiments are not reported. Overall, it still remains a challenge to build a high-quality multi-fidelity Kriging model within a reasonable amount of computational effort for high-dimension problems. To that end, an efficient HK modeling method for high-dimension multi-fidelity design problems is proposed. In building the low-fidelity Kriging model, the maximum likelihood estimation problem is transformed into a one-dimension problem with the help of the relative values of the hyperparameters estimated by the technique of sensitivity analysis. By solving the one-dimension problem, a rough estimation of the hyperparameter for the low-fidelity Kriging model can be obtained. To prevent the possible misleading of the biased values of the sensitivity indicator, a correction step is further involved to obtain a fine combination of the hyperparameters. Similar strategy is adopted in tuning the high-fidelity model. The difference is that the relative magnitudes of the hyperparameters of high-fidelity model is provided by the hyperparameters of the fine-tuned low-fidelity model. The performance of the efficient HK modeling method is illustrated by ten analytic test examples and one real-world engineering example. Comparison between the proposed strategy and existing approach in terms of both the modeling efficiency and accuracy are carried out. The remainder of the paper is organized as follows. Section 2 briefs the theoretical background. Motivation and the proposed tuning strategy are detailed in Section 3. Numerical experiments over analytic test problems and the isentropic efficiency modeling of an axial flow compressor rotor are presented to demonstrate the effectiveness of the proposed method. Finally, conclusions and suggestions for future work are provided in Section 5. ## 2 Background The objective of this paper is to develop an efficient HK modeling strategy for high-dimension multi-fidelity problems. For better understanding, the two-fidelity modeling problems is considered. The efficient modeling strategy can build an accurate enough model for the high-fidelity black-box function \(y=f_{\text{HF}}\left(\mathbf{x}\right)\) with the assistant of \(y=f_{\text{LF}}\left(\mathbf{x}\right)\) with lowest computational cost. \(\mathbf{x}\in\mathbb{R}^{d}\) is the modeling variable with the number of variables being \(d\). The low- and high-fidelity responses at a sampling site are usually obtained by simulations, like the computational fluid dynamic-based simulation. ### Kriging Kriging assumes that a random process exits in each sampling site. It includes the trend function and the random process. According to the trend function used, there exist simple, ordinary, and universal Kriging. For ordinary Kriging, the prediction formulation can be expressed as: \[Y\left(\mathbf{x}\right)=\mu+Z\left(\mathbf{x}\right) \tag{1}\] where \(\mu\) represents the unknown constant; \(Z\left(\mathbf{x}\right)\) denotes a stationary random process with zero mean and process variance \(\sigma^{2}\). The covariance of \(Z\left(\mathbf{x}\right)\) is formulated as: \[\text{Cov}\left(Z\left(\mathbf{x}\right),Z\left(\mathbf{x}^{\prime}\right) \right)=\sigma^{2}R\left(\mathbf{x},\mathbf{x}^{\prime}\right) \tag{2}\] where \(R\left(\mathbf{x},\mathbf{x}^{\prime}\right)\) is the spatial correlation function depending on the Euclidean distance between two sites \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\). Various versions of correlation function can be utilized. In this paper, the Matern 5/2 correlation function (Ulaganathan et al., 2015) is adopted, which is formulated as: \[R\left(\mathbf{x},\mathbf{x}^{\prime}\right)=\left(1+\sqrt{s}a+\frac{5a^{2}}{3} \right)\exp\left(-\sqrt{5}a\right) \tag{3}\] where \(a=\sqrt{\sum_{i=1}^{d}\theta\left|x_{i}-x_{i}^{\prime\prime}\right|^{2}}\); \(d\) is the number of variables; \(\mathbf{0}=\left[\theta_{1},\theta_{2},...,\theta_{d}\right]\) are hyperparameters measuring the activity of each variable. \(\mathbf{0}\) is determined in the model fitting process by solving the following maximum likelihood estimation problem: \[\mathbf{0}=\arg\max\left(-\frac{m}{2}\ln\hat{\sigma}^{2}\left(\mathbf{0}\right) -\frac{1}{2}\ln\left|\mathbf{R}\left(\mathbf{0}\right)\right|\right) \tag{4}\] where \(m\) is the number of samples; \(\hat{\sigma}^{2}\) denotes the estimated value of \(\sigma^{2}\); \(\mathbf{R}\) is the correlation matrix. The likelihood function is often multimodal. Therefore, the evolutionary algorithms, such as genetic algorithm, are usually adopted to solve the optimization problem shown in (4). While, the evolutionary algorithms often need thousands fitness evaluations of the likelihood function. For high-dimension problems, the matrix inversion during the thousands evaluation of the likelihood function would result in prohibitive computational cost, which might even be more time-consuming than engineering simulations. The Kriging prediction \(\hat{y}\left(\mathbf{x}\right)\) for the quality of interest at any unvisited point are expressed as: \[\hat{y}\left(\mathbf{x}\right)=\mu^{*}+\mathbf{r}^{*}\mathbf{R}^{-1}\left( \mathbf{y}_{s}-\mu^{*}\mathbf{1}\right) \tag{5}\] where \(\mu^{*}\) is obtained via generalized least-square estimation; \(\mathbf{r}\) is the correlation vector between the unvisited point and the sampled points; \(\mathbf{y}_{s}\) is the response vector containing the sample responses; \(\mathbf{1}\) is a unit column vector. ### Hierarchical Kriging HK is one of the multi-fidelity Kriging models, which can fuse abundant low-fidelity sample data and a small set of high-fidelity data to obtain an approximation with high accuracy. Usually, the time cost for obtaining a low-fidelity sample is much cheaper than that of a high-fidelity sample. Therefore, the cost of sample data preparation can be reduced. In HK, the low-fidelity function is taken as the model trend for the high-fidelity model to avoid the calculation of the covariance matrix between low- and high-fidelity samples. The construction of a HK model starts with tuning of the low-fidelity Kriging model based on the low-fidelity samples. Then, the low-fidelity Kriging is used as the model trend of the Kriging for the high-fidelity function, which is expressed as: \[Y\left(\mathbf{x}\right)=\beta\hat{y}_{\text{LP}}\left(\mathbf{x}\right)+Z \left(\mathbf{x}\right) \tag{6}\] where \(\beta\) is a scaling factor indicating the level of correlation between the low- and high-fidelity functions; \(\hat{y}_{\text{LF}}\left(\mathbf{x}\right)\) denotes the prediction of low-fidelity Kriging; \(Z\left(\mathbf{x}\right)\) is the random process with zero mean and variance with the identical form as shown in (2). The parameters in the correlation function are determined in the model tuning procedure by maximizing the likelihood estimation problem. The HK prediction is formulated as \[\hat{y}\left(\mathbf{x}\right)=\beta^{*}\hat{y}_{\text{LF}}\left(\mathbf{x} \right)+\mathbf{r}^{*}\mathbf{R}^{-1}\left(\mathbf{y}_{s}-\beta^{*}\mathbf{F}\right) \tag{7}\] where \(\beta^{*}=\left(\mathbf{F}^{\text{T}}\mathbf{R}^{-1}\mathbf{F}\right)^{-1} \mathbf{F}^{\text{T}}\mathbf{R}^{-1}\mathbf{y}_{s}\); \(\mathbf{y}_{s}\) is the column vector containing the true responses of the high-fidelity sample; \(\mathbf{F}\) represents the column vector of the predictions from the low-fidelity Kriging at the high-fidelity sample sites. More details are referred to (Han & Gortz, 2012). For clarity, main steps for the conventional construction of a HK model are summarized below: Step 1: Collect the low- and high-fidelity sample data \(D_{\text{LF},n}=\left\langle\left(\mathbf{x}_{\text{LF},i},y_{\text{LF},i}\right) \right\rangle_{i=1}^{n}\) and \[D_{\text{HF},i}=\left\langle\left(\mathbf{x}_{\text{HF},i},y_{\text{HF},i}\right) \right\rangle_{i=1}^{k};\] Step 2: Determine the hyperparameters in the correlation function of the low-fidelity model by solving the following problem: \[\mathbf{0}_{\text{LF}}=\arg\max\left(-\frac{n}{2}\ln\hat{\sigma}_{\text{LF}}^{2} \left(\mathbf{0}_{\text{LF}}\right)-\frac{1}{2}\ln\left|\mathbf{R}_{\text{LF}}\left( \mathbf{0}_{\text{LF}}\right)\right|\right) \tag{8}\] Step 3: Obtain the predictions from the low-fidelity Kriging at the high-fidelity sample sites via (5); Step 4: Obtain the hyperparameters in the correlation function of the high-fidelity model by solving the following problem: \[\mathbf{0}_{\text{HF}}=\arg\max\left(-\frac{k}{2}\ln\hat{\sigma}_{\text{HF}}^{2} \left(\mathbf{0}_{\text{HF}}\right)-\frac{1}{2}\ln\left|\mathbf{R}_{\text{HF}}\left( \mathbf{0}_{\text{HF}}\right)\right|\right) \tag{9}\] Step 5: Calculate the prediction at untested sites using (7). In Step 2 and 4, the maximization problems are often solved by evolutionary algorithms. It will call for thousands or even more evaluation of likelihood function. However, for high-dimension problems, each calculation of likelihood function will be computational expensive. Therefore, the cost for model tuning of HK will be prohibitive for high-dimension problems. If the number of the parameters in the likelihood maximization problem can be reduced, the number of calls of the evaluation of likelihood function can be reduced, which means the modeling cost will be decreased significantly. Or if better initial values of the hyperparameters are available, local optimization method, which usually need less function evaluations, could be adopted to find better estimation of the hyperparameters with lower computational cost. ## 3 The efficient Hierarchical Kriging model It has been demonstrated that the hyperparameter \(\theta_{i}\) of Kriging model indicates the extent how the \(i\)th input variable influences the response (Forrester & Keane, 2009; Ulaganathan et al., 2015). In detail, a larger value of \(\theta_{i}\) means that the \(i\)th variable has greater influence on the response. Meanwhile, sensitivity indicator in the field of sensitivity analysis can measure the importance of variables over the response (Shan & Wang, 2010). Therefore, if the relationship between sensitivity indicator and hyperparameter can be established, the dimensionality of the maximum likelihood estimation problems might be reduced to improve the modeling efficiency. In HK, the hyperparameters of both the low- and high-fidelity model indicate the importance of the variable to the low- and high-fidelity response, respectively. Moreover, the low- and high-fidelity functions generally correlate well with each other. It is reasonable to believe that the hyperparameters of the low-fidelity model might measure the importance of the variable to the high-fidelity response as well. Or, there might be a linear or simple relationship between the hyperparameters of the low- and high-fidelity model. If such relationship can be revealed, it can be used to reduce the number of parameters in the likelihood maximization problems, or it may even serve as a good initial guess of the hyperparameter to narrow the search space of the hyperparameter so as to alleviate the computational burden of the model tuning procedure. Above all, we would like to make use of two relationships to develop an efficient modeling strategy of HK. The first one is the relationship between the sensitivity indicator and hyperparameter of low-fidelity model. The other one is the relationship between the low- and high-fidelity hyperparameters. In this paper, the sensitivity indicator maximal information coefficient (MIC) is adopted, which is briefed firstly in this section. Then, an analytic example is introduced to illustrate the feasibility of the idea behind the proposed efficient modeling strategy of HK. After that. technique details and implementation are presented. ### Maximal information coefficient MIC is an sensitivity analysis method for identifying the variables with significant influence on the response (Reshef et al., 2011). It is an improved version of mutual information. The MIC of two variables \(\mathbf{x}_{i}\) and \(\mathbf{y}\) is expressed as: \[\omega_{i}\left(\mathbf{x}_{i},\mathbf{y}\right)=\max_{a,b\sim a}\frac{\text{ MI}\left(\mathbf{x}_{i},\mathbf{y}\right)}{\text{log}_{2}\left(\min\left(a,b \right)\right)} \tag{10}\] where \(\omega_{i}\in\)[0,1] is the MIC value; \(a\) and \(b\) are the number of rows and columns of gridding the scatterplot of data \(\mathbf{x}_{i}\) and \(\mathbf{y}\); \(B\) is the upper bound of the grid size, \(\text{MI}\left(\mathbf{x}_{i},\mathbf{y}\right)\) denotes the mutual information between \(\mathbf{x}_{i}\) and \(\mathbf{y}\). In practice, the \(\text{MI}\left(\mathbf{x}_{i},\mathbf{y}\right)\) is estimated by the following formula: \[\text{MI}\left(\mathbf{x}_{i},\mathbf{y}\right)=\sum_{i=1}^{n}\hat{p}\left( \mathbf{x}_{i}^{(i)},\mathbf{y}^{(i)}\right)\text{log}\frac{\hat{p}\left( \mathbf{x}_{i}^{(i)},\mathbf{y}^{(i)}\right)}{\hat{p}\left(\mathbf{x}_{i}^{(i )}\right)\hat{p}\left(\mathbf{y}^{(i)}\right)} \tag{11}\] where \(\hat{p}\left(\mathbf{x}_{i}^{(i)}\right)\) and \(\hat{p}\left(\mathbf{y}^{(i)}\right)\) denote the estimated probability density function, and \(\hat{p}\left(\mathbf{x}_{i}^{(i)},\mathbf{y}^{(i)}\right)\) represents the estimated joint probability density function. Larger value of MIC implies greater influence of a variable on the response. Notably, the MIC does not assume any distribution of sample data and are easy to compute. In multi-fidelity modeling problems, the number of low-fidelity samples are usually much larger than that of the low-fidelity samples. Therefore, in the proposed method, the MICs between each variable and the response are estimated using low-fidelity data other than high-fidelity data, as a larger data set can result in more accurate identification of the influence of variable on the response via MIC. ### An illustrative example To clarify the motivation of the proposed method, an analytic function is utilized to illustrate the relationship between the MIC and hyperparameters of low-fidelity model as well as the relationship between hyperparameters of low- and high-fidelity model. The high- and low-fidelity function of the analytic problem from (Cai et al., 2017) is expressed as: \[f_{\text{HF}}\left(\mathbf{x}\right) =x_{1}^{2}+x_{2}^{2}+x_{1}x_{2}-14x_{1}-16x_{2}+\left(x_{3}-10 \right)^{2}+4\left(x_{4}-5\right)^{2}+\left(x_{5}-3\right)^{2}+2\left(x_{6}- 1\right)^{2} \tag{12}\] \[+5x_{7}^{2}+7\left(x_{8}-11\right)^{2}+2\left(x_{9}-10\right)^{2} +\left(x_{10}-7\right)^{2}+45\] \[f_{\text{LF}}\left(\mathbf{x}\right) =0.8f_{\text{HF}}-\sum_{i=1}^{10}x_{i}+100\] \[x_{i} \in[-10,11],i=1,2,...,10\] To begin with, 100 and 50 sample data are collected for the low- and high-fidelity model, respectively. Before the calculation of MIC and the construction of the models, low- and high-fidelity sample data are centered to have zero mean. The HK model is built with the collected sample data following the procedure described in Section 2.2. Specifically, the genetic algorithm is adopted to solve the likelihood maximization problem. The population size is set as 40, and the maximum number of function valuations is 5000. The fractions of crossover and migration are set as 0.8 and 0.2, respectively. To prevent the influence of the random procedure in the genetic algorithm, the HK model is built on the identical sample set by 20 times. 5000 high-fidelity test data is adopted to measure the accuracy of the built model. The most accurate model is screened out and the hyperparameters are recorded. Finally, the MIC values, hyperparameters of the low- and high-fidelity model are plotted in Fig. 1. As shown in Fig. 1(a), the trends of the MIC values and the tuned hyperparameters are quite similar. Moreover, it can be noted that both the MIC and tuned hyperparameters can identify the importance of each variable. From the expression of the current problem, it can be observed that \(x_{\text{s}}\) has the largest coefficient and should be the most influential variable of this problem. Such observation can also be concluded from Fig. 1(a) with the MIC and hyperparameter. The MIC values and the hyperparameters of \(x_{\text{i}},x_{\text{z}},x_{\text{s}}\) are small, indicating that those three variables have less influence on the response. This can also be confirmed from the function expression. As the MIC values and tuned hyperparameter has similar trends but different magnitude, it is possible to assume that the hyperparameters are proportional to the MIC values. Or, the following linear relationship can be established: \[\mathbf{\theta}_{\text{LF}}=\lambda\mathbf{\omega} \tag{13}\] where \(\lambda\in\mathbb{R}^{+}\) is the scale factor between the MIC values \(\mathbf{\omega}\) and the hyperparameters of the low-fidelity model \(\mathbf{\theta}_{\text{LF}}\). As shown in Fig. 1(b), the hyperparameters of the low- and high-fidelity model share nearly identical trends but with different magnitude. It is natural to believe that the hyperparameters of the high-fidelity model are proportional to the hyperparameters of the low-fidelity model. Such observation can be expressed as follows: \[\mathbf{\theta}_{\text{HF}}=\chi\mathbf{\theta}_{\text{LF}} \tag{14}\] Figure 1: Hyperparameters and MIC values for the illustration function where \(\chi\in\mathbb{R}^{+}\) is the scale factor between the hyperparameters of the low- and high-fidelity model \(\mathbf{\theta}_{\text{LF}}\) and \(\mathbf{\theta}_{\text{HF}}\). This illustrative example exemplified the inner relationship between the sensitivity indicator and hyperparameters as well as the connection between the low- and high-fidelity model hyperparameters. Then the question left is how to make full use of those relationships to improve the modeling efficiency of the HK model, which is depicted in next subsection. ### Proposed construction strategy With above observations, the hyperparameter estimation problem for the low-fidelity model shown in (8) can be reformulated as follows: \[\begin{split}&\mathbf{\theta}_{\text{LF}}=\arg\max\biggl{(}-\frac{n }{2}\ln\hat{\sigma}_{\text{LF}}^{2}\left(\mathbf{0}_{\text{LF}}\right)-\frac{ 1}{2}\ln\left|\mathbf{R}_{\text{LF}}\left(\mathbf{0}_{\text{LF}}\right)\right| \biggr{)}\\ & s.t.\ \mathbf{0}_{\text{LF}}=\lambda\mathbf{\omega}\end{split} \tag{15}\] In practice, \(\mathbf{0}_{\text{LF}}\) is obtained with a two-step strategy. Firstly, the above equality constrained problem is reformulated into an unconstrained problem by inserting the equality relationship between MIC and \(\mathbf{0}_{\text{LF}}\) to determine \(\lambda\): \[\lambda=\arg\max\biggl{(}-\frac{n}{2}\ln\hat{\sigma}_{\text{LF}}^{2}\left( \lambda\mathbf{\omega}\right)-\frac{1}{2}\ln\left|\mathbf{R}_{\text{LF}} \left(\lambda\mathbf{\omega}\right)\right|\biggr{)} \tag{16}\] Then \(\mathbf{0}_{\text{LF}}=\lambda\mathbf{\omega}\) is utilized to calculate \(\mathbf{0}_{\text{LF}}\). Compared with the \(d\)-dimension optimization problem (8), the hyperparameter estimation problem is now a one-dimension problem. It can be, of course, solved in a more efficient manner than the original one. The number of likelihood evaluations can be reduced significantly, thus improving the modeling efficiency. While, such strategy has a drawback obviously. The hyperparameters of each variable are tied together with the scale factor \(\lambda\) artificially. During the tuning process, the hyperparameters cannot change independently, which might sacrifice the model accuracy. Moreover, the importance of a variable estimated by MIC might be inconsistent with the reality. The biased MIC values may mislead the tuning process, degrading the effectiveness of the proposed strategy. Therefore, a local search is further involved with solving the problem (8) by starting from the already obtained \(\mathbf{0}_{\text{LF}}\). The local search allows the independent changes of each hyperparameter. This is much useful to improve the accuracy of the low-fidelity model based on our preliminary investigation. Low-fidelity model with high accuracy can further improve the performance of the multi-fidelity model. It is one of the key points to ensure the effectiveness of the proposed efficient HK model. The tuning process for the high-fidelity model is similar to that of the low-fidelity model. The main difference is the utilization of the connection between the hyperparameters of low- and high-fidelity model. In detail, the following one-dimension problems is solved firstly to obtain an estimation of the scale factor between the hyperparameters of the low- and high-fidelity model: \[\chi=\arg\max\left(-\frac{k}{2}\ln\hat{\sigma}_{\mbox{\tiny HF}}^{2}\left(\chi \theta_{\mbox{\tiny LF}}\right)-\frac{1}{2}\ln\left|\mathbf{R}_{\mbox{\tiny HF}} \left(\chi\theta_{\mbox{\tiny LF}}\right)\right|\right) \tag{17}\] Then, \(\theta_{\mbox{\tiny HF}}\) is calculated by \(\theta_{\mbox{\tiny HF}}=\chi\theta_{\mbox{\tiny LF}}\). After that, a local search starting from the already obtained \(\theta_{\mbox{\tiny HF}}\) is followed to improve the model accuracy by allowing the independent change of each hyperparameter. For clarity, the main steps of the proposed efficient HK modeling method for multi-fidelity high-dimension problems are summarized below: Step 1: Collect the low- and high-fidelity sample data \(D_{\mbox{\tiny LF},n}=\left\{\left(\mathbf{x}_{\mbox{\tiny LF},n},y_{\mbox{\tiny LF },i}\right)\right\}_{i=1}^{n}\) and \(D_{\mbox{\tiny HF},k}=\left\{\left(\mathbf{x}_{\mbox{\tiny HF},i},y_{\mbox{\tiny HF },i}\right)\right\}_{i=1}^{k}\); Step 2: Calculate the values of MIC \(\omega\) by using the low-fidelity data \(D_{\mbox{\tiny LF},n}\); Step 3: Determine the scale factor \(\lambda\) by solving the problem in (16); Step 4: Obtain the \(\theta_{\mbox{\tiny LF}}\) via the relationship \(\theta_{\mbox{\tiny LF}}=\lambda\omega\); Step 5: Solve the problem in (8) by a local optimizer starting from the hyperparameters obtained in Step 4 to obtain a better estimation of \(\theta_{\mbox{\tiny LF}}\); Step 6: Obtain the predictions from the low-fidelity Kriging at the high-fidelity sample sites via (5); Step 7: Determine the scale factor \(\chi\) by solving the problem in (17); Step 8: Obtain the \(\theta_{\mbox{\tiny HF}}\) via the relationship \(\theta_{\mbox{\tiny HF}}=\chi\theta_{\mbox{\tiny LF}}\); Step 9: Solve the problem in (9) by a local optimizer starting from the hyperparameters obtained in Step 8 to obtain a better estimation of \(\theta_{\mbox{\tiny HF}}\); Step 10: Calculate the prediction at untested sites using (7). The proposed HK modeling method is implemented based on a DACE toolbox (Lophaven et al., 2002). In the generation of the sample sites, the Latin hypercube sampling is adopted. The minepy package (Albanese et al., 2013) is adopted to calculate the MIC values with default settings. The one-dimension optimization problems in the model fitting process are solved via the Matlab's fminbnd optimizer. For the local optimizer, Matlab's fmincon function is utilized. Options for those optimizers and details of the implementation can be found in the source code, which is available at [https://github.com/Youwei-He/HDHK](https://github.com/Youwei-He/HDHK). To verify the implementation, the Forrester function (Forrester et al., 2007) is employed. The high- and low-fidelity function are given by: \[\begin{split} f_{\mbox{\tiny HF}}\left(\mathbf{x}\right)& =\left(6x^{2}-2\right)\sin\left(12x-4\right)\\ f_{\mbox{\tiny LF}}\left(\mathbf{x}\right)&=0.5f_ {\mbox{\tiny HF}}+10\left(x-0.5\right)-5\\ x&\in\left[0,1\right]\end{split} \tag{18}\] The sampling sites for the high- and low-fidelity data are \(\mathbf{x}_{\mbox{\tiny HF}}=\left\{0,0.4,0.6,1\right\}\) and \(x_{\text{LF}}=\left\{0,0.1429,0.2857,0.4286,0.5714,0.7143,0.8571,1\right\},\) respectively. Two HK models are built with the conventional and proposed strategy. Figure 2 compares the low-fidelity and high-fidelity predictions with the true functions. The predictions from the model tuned by the proposed strategy are labeled with HD as the method is developed for high-dimension problems. Overall, the low- or high-fidelity predictions from either tuning strategy agree well with the true functions. The predictions from the two modeling strategies almost overlap with each other. The \(\beta^{\text{-}}\) estimated with the conventional and the proposed strategy is 1.8769 and 1.8772, respectively, which both are close to the true value of 2. Those observations verify the implementation. The time for the conventional and proposed tuning strategy are 0.1216s and 0.0243s, respectively. This indicates that, though the strategy is developed for tuning the high-dimension HK model efficiently, it can also improve the modeling efficiency on this one-dimension problem. To quantify the performance of the proposed method, numerical experiments are carried out and presented in next section. ## 4 Experimental study In this section, the performance of the proposed method is tested and compared with conventional tuning strategy. For simplicity, the HK employing the conventional tuning strategy and the proposed high-dimension modeling method is shorted as HKC and HKHD, respectively. ### Numerical examples The expressions of the adopted analytic test problems (Cai et al., 2017) are summarized in Table 1. The number of modeling variables ranges from 2 to 50. \begin{table} \begin{tabular}{l l l} \hline No. & Function & Design \\ \hline \multirow{2}{*}{1} & \(f_{\text{HF}}\left(\mathbf{x}\right)=4x_{1}^{2}-2.1x_{1}^{4}+\frac{1}{3}x_{1} ^{6}+x_{i}x_{2}-4x_{2}^{2}+4x_{2}^{4}\) & \(x_{i}\in[-2,2]\) \\ & \(f_{\text{LF}}\left(\mathbf{x}\right)=f_{\text{HF}}(0.7\mathbf{x})+x_{i}x_{2}-65\) & \(i=1,2\) \\ \hline \end{tabular} \end{table} Table 1: Numerical test functions Figure 2: HK predictions over the Forrester function \[\begin{array}{ll}f_{\rm HF}({\bf x})=\Bigg{(}x_{2}-1.275\bigg{(}\frac{x_{1}}{\pi} \bigg{)}^{2}-5\frac{x_{1}}{\pi}-6\bigg{)}^{2}+10\bigg{(}1-\frac{0.125}{\pi} \bigg{)}{\rm cos}\big{(}x_{1}\big{)}&x_{1}\in[-5,10]\\ &f_{\rm LF}({\bf x})=0.8f_{\rm HF}({\bf x})-2.5x_{2}-30\\ &f_{\rm HF}({\bf x})=100\Big{(}x_{1}^{2}-x_{2}\Big{)}^{2}+a_{1}^{2}+a_{3}^{2}+9 0\Big{(}x_{3}^{2}-x_{4}\Big{)}+10.1\Big{(}a_{2}^{2}+a_{4}^{2}\Big{)}+19.8a_{2}a _{4}&\\ a_{i}=x_{i}-1,i=1,2,3,4&x_{i}\in[-4,4]\\ &f_{\rm LF}({\bf x})=90\Big{(}x_{1}^{2}-x_{2}\Big{)}^{2}+a_{1}^{2}+a_{3}^{2}+5 0\Big{(}x_{3}^{2}-x_{4}\Big{)}+5\Big{(}a_{2}^{2}+a_{4}^{2}\Big{)}+10a_{2}a_{4}& i=1,...,4\\ a_{i}=0.9x_{i}-1,i=1,3;\quad a_{i}=0.5x_{i}-1,i=2,4&\\ f_{\rm HF}\left({\bf x}\right)=\sum_{i=1}^{10}{\rm exp}\big{(}x_{i}\big{)} \Bigg{(}A(i)+x_{i}-{\rm ln}\bigg{(}\sum_{k=1}^{10}{\rm exp}\big{(}x_{k}\big{)} \bigg{)}\Bigg{)}&\\ A=[-6.089,-17.164,-34.054,-5.914,-24.721,-14.986,-24.100,&x_{i}\in[-5,5]\\ &-10.708,-26.662,-22.179]&i=1,...,10\\ f_{\rm LF}\left({\bf x}\right)=\sum_{i=1}^{10}{\rm exp}\big{(}x_{i}\big{)} \Bigg{(}B(i)+x_{i}-{\rm ln}\bigg{(}\sum_{k=1}^{10}{\rm exp}\big{(}x_{k}\big{)} \bigg{)}\Bigg{)}&\\ B=[-5,-10,-30,-5,-25,-15,-20,-10,-25,-20]&\\ f_{\rm HF}\left({\bf x}\right)=\sum_{i=1}^{9}\Big{(}\Big{(}x_{i+1}^{2}-x_{i} \Big{)}^{2}+\big{(}x_{i}-1\big{)}^{2}\Big{)}&x_{i}\in[-3,3]\\ f_{\rm LF}\left({\bf x}\right)=\sum_{i=1}^{9}\Big{(}0.9x_{i+1}^{4}+2.2x_{i}^{2 }-1.8x_{i}x_{i+1}^{2}+0.5\Big{)}&i=1,...,10\\ f_{\rm HF}\left({\bf x}\right)=x_{1}^{2}+x_{2}^{2}+x_{1}x_{2}-14x_{1}-16x_{2}+ \big{(}x_{3}-10\big{)}^{2}+4\big{(}x_{4}-5\big{)}^{2}+\big{(}x_{5}-3\big{)}^{2 }&\\ 6&+2\big{(}x_{6}-1\big{)}^{2}+5x_{7}^{2}+7\big{(}x_{8}-11\big{)}^{2}+2\big{(} x_{9}-10\big{)}^{2}+\big{(}x_{10}-7\big{)}^{2}+45&x_{i}\in[-10,11]\\ f_{\rm LF}\left({\bf x}\right)=0.8f_{\rm HF}-\sum_{i=1}^{10}x_{i}+100&i=1,...,1 0\end{array}\] \[\begin{array}{ll}f_{\rm HF}\left({\bf x}\right)=\big{(}x_{1}-1\big{)}^{2}+ \sum_{i=2}^{16}i\Big{(}2x_{i}^{2}-x_{i-1}\Big{)}^{2}&x_{i}\in[-5,5]\\ f_{\rm LF}\left({\bf x}\right)=0.9f_{\rm HF}\left({\bf x}\right)+10&i=1,...,16\\ f_{\rm HF}\left({\bf x}\right)=\big{(}x_{1}-1\big{)}^{2}+\sum_{i=2}^{30}i\Big{(}2 x_{i}^{2}-x_{i-1}\Big{)}^{2}&x_{i}\in[-3,3]\\ f_{\rm LF}\left({\bf x}\right)=0.8f_{\rm HF}\left({\bf x}\right)-\sum_{i=1}^{20}0.4x_{i}x_{i+1}-50&i=1,...,30\\ f_{\rm HF}\left({\bf x}\right)=\sum_{i=1}^{50}i\Big{(}x_{i}^{2}+x_{i}^{4}\Big{)}& x_{i}\in[-2,4]\\ f_{\rm LF}\left({\bf x}\right)=0.8f_{\rm HF}\left({\bf x}\right)-\sum_{i=1}^{50} \big{(}ix_{i}^{2}/10+x_{i}\big{)}-25&i=1,...,50\end{array}\] For HKC, the likelihood maximization problems are solved by Genetic Algorithm. The population size is set as \(4d\), and the maximum generation is set as \(125\). Therefore, the maximum number of likelihood function evaluation is \(500d\). The fractions of crossover and migration are set as \(0.8\) and \(0.2\), respectively. The search space of the hyperparameter \({\bf 0}\) is \([10^{-4},10^{2}]^{d}\). The maximum number of function evaluation for the fminbnd optimizer to solve the one-dimension problem in (16) and (17) is set as \(500\). Search interval for the scale factor \(\lambda\) and \(\chi\) is set as [10\({}^{-4}\), 10\({}^{2}\)] based on preliminary test. For the fmincon optimizer utilized to do the local search, function evaluations as many as 500 times are allowed. The number of low- and high-fidelity samples is set as 10\(d\) and 5\(d\), respectively. To test the accuracy of the models, 200\(d\) (maximum 5000) validation points are generated by Latin hypercube sampling. Two global accuracy metrics, the coefficient of determination R\({}^{2}\) and root mean square error (RMSE), and a local accuracy metric, the maximum absolute error (MAE), are utilized to evaluate the model accuracy: \[\text{R}^{2}=1-\frac{\sum_{i=1}^{N}\left(y_{i}-\hat{y}_{i}\right)^{2}}{\sum_{i= 1}^{N}\left(y_{i}-\overline{y}\right)^{2}} \tag{19}\] \[\text{MSE}=\sqrt{\sum_{i=1}^{N}\frac{\left(y_{i}-\hat{y}_{i}\right)^{2}}{N}} \tag{20}\] \[\text{MAE}=\max\left(\left|y_{i}-\hat{y}_{i}\right|\right) \tag{21}\] where \(N\) denotes the number of validation points; \(y_{i}\) and \(\hat{y}_{i}\) are the true and predicted high-fidelity response of the \(i\)th validation point, respectively; \(\overline{y}\) is the mean of the true response of validation points. Notably, only the high-fidelity prediction is involved in the accuracy comparison, as the high-fidelity response is usually the quantity of interest in practical applications. A closer value to 1 of R\({}^{2}\), indicates the better global accuracy of the model. Smaller values of the RMSE and MAE means better accuracy. Moreover, the training time are recorded to measure the modeling efficiency of the two strategies. Each analytic problem is modelled 10 times to obtain the mean and standard deviation (STD) results of those metrics. The experiment is conducted as a PC with Intel Xeon CPU E5-2666 v3 (r) 2.90GHz and 64GB RAM. ### Results and discussion Table 2 summarizes the statistic results of the modeling time and accuracy metrics. Boxplots are utilized to better visualize the test results over representative problems in Fig. 3-7. It can be noted that the modeling time of HKHD is much shorter than that of the HKC in all the test functions. Generally, the modeling time of HKHD method is 1/7\(\sim\)1/10 to that of the HKC method. For 10-D No. 4 function, the mean modeling time of HKHD is 0.589s, while it is 5.474s averagely for HKC method. This indicates that the proposed method can save more than 95% time for constructing the HK model. For the 50-D No.9 test function, the modeling time of HKHD is 304.0s in average. It saved 85.2% time compared with the conventional tuning strategy, which need 2055.2s to construct the model. The time-saving would be much meaningful for applications, e.g., surrogate-based optimization, that the surrogate model needs to be tuned frequently. In terms of the model accuracy, the HKHD is more accurate that the HKC over all numerical test functions. For the 4-D No.3 function, the R\({}^{2}\) of the HKHD and HKC are 0.990 and 0.659, respectively. The RMSE of the HKHD and HKC are 630.190 and 3591.430, respectively. This means that the global accuracy of HKHD is better that that of the HKC. As for the local accuracy, the HKHD outperforms the HKC as the MAE are 3909.955 and 19877.880, respectively. The R\({}^{2}\) of the HKHD over the 10-D No. 6 function is as high as 0.995, while it is 0.642 averagely of the HKC method. The RMSE and MAE values indicate that the global and local accuracy by HKHD method is 85.7% and 80.1% higher than that of the HKC. In the 50-D No.9 function, the HKHD achieved the performance of R\({}^{2}\) being 0.745, which is significantly higher than that of the HKC (R\({}^{2}\) being 0.305 averagely). The global and local accuracy indicated by the RMSE and MAE increased 42.9% and 41.7%, respectively. It should be mentioned that the performance of the model built following the conventional strategy might be improved by allowing more likelihood function evaluations. While, this might increase the modeling time significantly but the gain of accuracy might be unworthy, especially for applications which needs adaptive update of model. As a conclusion of the empirical experiments, the proposed HKHD method can build more accurate model than the conventional strategy within significant shorter time. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{No.} & \multicolumn{3}{c}{Time(s)} & \multicolumn{2}{c}{R\({}^{2}\)} & \multicolumn{2}{c}{RMSE} & \multicolumn{2}{c}{MAE} \\ & & HKC & HKHD & HKC & HKHD & HKC & HKHD & HKC & HKHD \\ \hline 1 & Mean & 0.275 & 0.039 & 0.244 & 0.653 & 10.1 & 6.1 & 40.0 & 27.0 \\ & STD & 0.033 & 0.005 & 0.581 & 0.530 & 4.4 & 4.3 & 11.2 & 11.8 \\ 2 & Mean & 0.247 & 0.035 & 0.730 & 0.985 & 102.7 & 23.5 & 530.7 & 133.7 \\ & STD & 0.031 & 0.004 & 0.146 & 0.011 & 33.2 & 9.3 & 184.7 & 75.6 \\ 3 & Mean & 0.844 & 0.089 & 0.659 & 0.990 & 3591.4 & 630.2 & 19877.9 & 3910.0 \\ & STD & 0.107 & 0.018 & 0.288 & 0.008 & 1739.6 & 246.7 & 7594.0 & 1774.9 \\ 4 & Mean & 5.474 & 0.589 & 0.472 & 0.617 & 1328.9 & 1151.1 & 6471.3 & 5073.5 \\ & STD & 0.253 & 0.088 & 0.202 & 0.033 & 252.2 & 49.5 & 1317.1 & 841.4 \\ 5 & Mean & 5.564 & 0.401 & 0.288 & 0.456 & 77.7 & 66.6 & 360.7 & 300.5 \\ & STD & 0.051 & 0.179 & 0.239 & 0.304 & 14.3 & 18.2 & 58.8 & 79.5 \\ 6 & Mean & 5.553 & 0.520 & 0.642 & 0.995 & 477.1 & 68.1 & 1983.9 & 394.6 \\ & STD & 0.054 & 0.098 & 0.410 & 0.002 & 377.8 & 12.2 & 1107.7 & 91.2 \\ 7 & Mean & 28.593 & 4.309 & 0.459 & 0.705 & 19353.5 & 14497.4 & 100586.7 & 68426.1 \\ & STD & 6.281 & 0.604 & 0.199 & 0.041 & 3468.8 & 991.0 & 18000.1 & 8388.5 \\ 8 & Mean & 273.679 & 43.437 & 0.436 & 0.704 & 6639.1 & 4825.3 & 33752.2 & 22112.9 \\ & STD & 39.280 & 4.272 & 0.093 & 0.023 & 549.0 & 184.7 & 2797.1 & 3298.7 \\ 9 & Mean & 2055.243 & 304.094 & 0.305 & 0.745 & 11065.9 & 6309.7 & 44894.0 & 26156.2 \\ & STD & 12.531 & 102.526 & 0.118 & 0.249 & 963.1 & 2352.9 & 2352.8 & 8681.8 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of the metric statistic results Figure 3: Boxplots for the 4-D No. 3 function To investigate the effect the sample size on the modeling performance, additional two groups of experiment are conducted. One group of experiment start with the number of low- and high-fidelity samples being \(8d\) and \(4d\), respectively. The number of low- and high-fidelity samples is set as \(12d\) and \(6d\), respectively, in the other group of experiment. Table 3 presents the statistic results of the performance metrics by HKHD with different sample sizes. The metric values of the sample size being \(10d\)+\(5d\) taken from Table 2 are also included for better comparison. It can be noted that the modeling time increases with the increase of the sample size. For example, it needs 1.295s, 4.309s and 9.830s to finish the model construction of the No. 7 function with the sample size being \(8d\)+\(4d\), \(10d\)+\(5d\), and \(12d\)+\(6d\), respectively. For the 50-D No. 9 test problem, the modeling time increased from 304.0s to 3386.9s averagely, a 1003% increase, as the sample size expanded from \(10d\)+\(5d\) to \(12d\)+\(6d\). In terms of the model accuracy, it improves as more samples are adopted. For instance, the RMSE decreased from 6309.7 to 6165.3, 2.3% Figure 4: Boxplots for the 10-D No. 6 function Figure 5: Boxplots for the 16-D No. 7 function Figure 6: Boxplots for the 30-D No. 8 function Figure 7: Boxplots for the 50-D No. 9 function improvement as the sample size expanded from \(10d+5d\) to \(12d+6d\). While, for the sample size increased from \(10d+5d\) to \(12d+6d\), the global accuracy indicated by RMSE values (11031.2 and 6309.7) improved 42.8%. Those observations indicate that a moderate sample size would achieve a balance between the modeling efficiency and accuracy. **Table 3.** Statistic results of the performance metrics by HKHD with different sample sizes \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Metric} & Sample size & \multicolumn{2}{c}{No. 3} & \multicolumn{2}{c}{No. 6} & \multicolumn{2}{c}{No. 7} & \multicolumn{2}{c}{No. 9} \\ & & Mean & STD & Mean & STD & Mean & STD & Mean & STD \\ \hline \multirow{4}{*}{Time/s} & \(8d\)+\(4d\) & 0.758 & 0.048 & 4.341 & 0.080 & 11.784 & 0.707 & 1283.576 & 39.593 \\ & \(10d\)+\(5d\) & 0.844 & 0.107 & 5.553 & 0.054 & 28.593 & 6.281 & 2055.243 & 12.531 \\ & \(12d\)+\(6d\) & 1.143 & 0.137 & 10.040 & 0.508 & 108.974 & 22.316 & 24942.824 & 241.766 \\ & \(8d\)+\(4d\) & 3289.1 & 1985.6 & 356.1 & 238.1 & 18721.6 & 3153.8 & 11768.9 & 500.7 \\ \multirow{4}{*}{RMSE} & \(10d\)+\(5d\) & 3591.4 & 1739.6 & 477.1 & 377.8 & 19353.5 & 3468.8 & 11065.9 & 963.1 \\ & \(12d\)+\(6d\) & 2089.8 & 1734.9 & 260.8 & 253.3 & 19427.9 & 3585.2 & 11115.2 & 567.8 \\ \multirow{4}{*}{Time/s} & \(8d\)+\(4d\) & 16290.2 & 6844.3 & 1533.7 & 630.6 & 87603.9 & 13576.6 & 50396.9 & 1819.9 \\ & \(10d\)+\(5d\) & 19877.9 & 7594.0 & 1983.9 & 1107.7 & 100586.7 & 18000.1 & 44894.0 & 2352.8 \\ \multirow{4}{*}{MAE} & \(12d\)+\(6d\) & 13361.1 & 8547.7 & 1225.0 & 744.1 & 100452.0 & 9498.5 & 50534.2 & 2838.8 \\ \hline \hline \end{tabular} Table 4 presents the statistic results of the performance metrics by HKC with different sample sizes. The modeling time increases with sample size. For example, it needs 108.9s to build the HK model with the sample size being \(12d\)+\(5d\) on the 16-D No.7 test problem. Meanwhile, the construction time is 28.5s and 11.7s averagely by using a sample with size of \(10d\)+\(5d\) and \(8d\)+\(4d\), respectively. While, with the increase of the sample size, the model accuracy is not always improved. The mean RMSE values of HKC on No. 9 problem are 11768.9, 11065.9, and 11115.2, respectively. The comparison of the performance metrics with the change of sample size between the HKHD and HKC are presented in Fig. 8-11. It can be noted that the HKHD outperforms HKC over those test problems with various sample size in terms of both modeling efficiency and accuracy. **Table 5.** Statistic results of the performance metrics by HKC with different sample sizes \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Metric} & Sample size & \multicolumn{2}{c}{No. 3} & \multicolumn{2}{c}{No. 6} & \multicolumn{2}{c}{No. 7} & \multicolumn{2}{c}{No. 9} \\ & size & Mean & STD & Mean & STD & Mean & STD & Mean & STD \\ \hline \multirow{4}{*}{Time/s} & \(8d\)+\(4d\) & 0.758 & 0.048 & 4.341 & 0.080 & 11.784 & 0.707 & 1283.576 & 39.593 \\ & \(10d\)+\(5d\) & 0.844 & 0.107 & 5.553 & 0.054 & 28.593 & 6.281 & 2055.243 & 12.531 \\ & \(12d\)+\(6d\) & 1.143 & 0.137 & 10.040 & 0.508 & 108.974 & 22.316 & 24942.824 & 241.766 \\ \multirow{4}{*}{RMSE} & \(8d\)+\(4d\) & 3289.1 & 1985.6 & 356.1 & 238.1 & 18721.6 & 3153.8 & 11768.9 & 500.7 \\ & \(10d\)+\(5d\) & 3591.4 & 1739.6 & 477.1 & 377.8 & 19353.5 & 3468.8 & 11065.9 & 963.1 \\ \multirow{4}{*}{RMSE} & \(12d\)+\(6d\) & 2089.8 & 1734.9 & 260.8 & 253.3 & 19427.9 & 3585.2 & 11115.2 & 567.8 \\ & \(8d\)+\(4d\) & 16290.2 & 6844.3 & 1533.7 & 630.6 & 87603.9 & 13576.6 & 50396.9 & 1819.9 \\ \multirow{4}{*}{MAE} & \(10d\)+\(5d\) & 19877.9 & 7594.0 & 1983.9 & 1107.7 & 100586.7 & 18000.1 & 44894.0 & 2352.8 \\ & \(12d\)+\(6d\) & 13361.1 & 8547.7 & 1225.0 & 744.1 & 100452.0 & 9498.5 & 50534.2 & 2838.8 \\ \hline \hline \end{tabular} Table 6.** Statistic results of the performance metrics by HKC with different sample sizes \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Metric} & Sample size & \multicolumn{2}{c}{No. 3} & \multicolumn{2}{c}{No. 6} & \multicolumn{2}{c}{No. 7} & \multicolumn{2}{c}{No. 9} \\ & size & Mean & STD & Mean & STD & Mean & STD & Mean & STD \\ \hline \multirow{4}{*}{Time/s} & \(8d\)+\(4d\) & 0.758 & 0.048 & 4.341 & 0.080 & 11.784 & 0.707 & 1283.576 & 39.593 \\ & \(10d\)+\(5d\) & 0.844 & 0.107 & 5.553 & 0.054 & 28.593 & 6.281 & 2055.243 & 12.531 \\ \multirow{4}{*}{Time/s} & \(12d\)+\(6d\) & 1.143 & 0.137 & 10.040 & 0.508 & 108.974 & 22.316 & 24942.824 & 241.766 \\ \multirow{4}{*}{RMSE} & \(8d\)+\(4d\) & 3289.1 & 1985.6 & 356.1 & 238.1 & 18721.6 & 3153.8 & 11768.9 & 500.7 \\ & \(10d\)+\(5d\) & 3591.4 & 1739.6 & 477.1 & 377.8 & 19353.5 & 3468.8 & 11065.9 & 963.1 \\ \multirow{4}{*}{RMSE} & \(12d\)+\(6d\) & 2089.8 & 1734.9 & 260.8 & 253.3 & 19427.9 & 3585.2 & 11115.2 & 567.8 \\ \multirow{4}{*}{MAE} & \(8d\)+\(4d\) & 16290.2 & 6844.3 & 1533.7 & 630.6 & 87603.9 & 13576.6 & 50396.9 & 1819.9 \\ & \(10d\)+\(5d\) & 19877.9 & 7594.0 & 1983.9 & 1107.7 & 100586.7 & 18000.1 & 44894.0 & 2352.8 \\ \multirow{4}{*}{MAE} & \(12d\)+\(6d\) & 13361.1 & 8547.7 & 1225.0 & 744.1 & 100452.0 & 9498.5 & ### Engineering example In addition to those analytic problems, an engineering problem of modeling the isentropic efficiency of the axial compressor rotor Rotor37 is covered to further demonstrate the effectiveness of the developed efficient modeling method. Rotor37 is an isolated axial-flow compressor wheel designed and experimentally analyzed at the NASA (Reid & Moore, 1978). The main geometric and design specifications of Rotor 37 are summarized in Table 5. Fig. 12 illustrates is 3-D view. **Table 5.** Main geometric parameters and design specifications of Rotor37 [MISSING_PAGE_POST] The problem is to build a model to predict the isentropic efficiency of Rotor 37 with variation of the blade geometry: \[f=\eta_{c}\left(\mathbf{x}\right) \tag{22}\] with \[\eta_{c}=\frac{h_{2_{c}}-h_{1}}{h_{2_{c}}-h_{1}} \tag{23}\] where \(h_{1}\) denotes the specific enthalpy of the air at the rotor inlet, \(h_{2_{c}}\) and \(h_{2_{c}}\) represent the specific enthalpy of the gas at the outlet of the rotor for isentropic and real compression process, respectively; \(\mathbf{x}\) is the parameters determining the blade shape. \(h_{1}\), \(h_{2_{c}}\), \(\mathrm{and}\)\(h_{2_{c}}\) are obtained from the result of the computational fluid dynamic (CFD) simulation. In this problem, the Rotor37 blade is constructed with three blade sections and a stacking law. Each section is composed by adding the thickness of the suction and pressure side to the camber line. Camber line and thickness of pressure and suction side are parameterized by Bezier curve. For each section, nine parameters are used to determine the profile shape as illustrated in Fig. 13(a). In detail, \(\beta_{1}\) and \(\beta_{2}\) are the inlet and out blade angle, respectively; \(\alpha\) and \(\gamma\) denote the trailing wedge angle and camber angle. \(t_{\mathrm{p1}}\), \(t_{\mathrm{p2}}\), \(t_{\mathrm{a1}}\), \(t_{\mathrm{s2}}\), and \(t_{\mathrm{s3}}\) are the control point of the Figure 12: 3-D view of the Rotor37 thickness distribution of the pressure and suction side. The line goes through the center of gravity of each section is the stacking line. As shown in Fig. 13(b), the profile at the blade mid and tip are allowed to in the axial and circumferential direction, usually known as sweep and lean of the stacking line. More details about the shape parameterization can be found in (NUMECA, 2021). As a result, the blade shape is governed by 31 parameters. The NUMECA/AutoBlade is utilized to generate the file describing the blade shape for the grid generation. Fine and coarse grids are used in the high- and low-fidelity simulation, respectively. Multi-block structured mesh is generated where the O4H topology is used around the blade and additional H-blocks are placed in the upstream/downstream of the blade. Refinements are conducted in the near walls to capture the boundary layer flow characteristics. Fig. 14 presents the grids for the low- and high-fidelity simulations for the baseline geometry, with the number of cells being 312077 and 799185, respectively. The CFD simulation solving the Reynolds time-averaged Navier-Stokes equations enclosed with the Spalart-Allmaras turbulence model are conducted in NUMECA to determine the isentropic efficiency. At the inlet, the total temperature and total pressure of the axial inlet flow are applied. While the static pressure is specified at outlet boundary. No-slip and adiabatic conditions at solid surfaces are applied and periodicity condition is applied at lateral sides of the computational domain to facilitate single blade passage simulation. Those boundary conditions are kept unchanged among all the simulations over different blades. Simulations stop if the global residual decreased to \(10^{-5}\). The low- and high-fidelity simulation finished within about 5min and 12min, respectively. The isentropic efficiency of the Rotor37 obtained from the low- and high-fidelity simulations is 85.41% and 85.09%, respectively. 200 high-fidelity samples and 400 low-fidelity samples are generated by the Latin hypercube Figure 14: Grid for low- and high-fidelity simulation Figure 13: Geometric parameters for the parametric representation of the blade sampling procedure. Corresponding simulations are conducted to obtain the responses. 164 out of the 200 high-fidelity simulations ended smoothly. Meanwhile, for the 400 low-fidelity simulations, 296 simulations obtained the responses successfully. Rest of the low- and high-fidelity simulations failed. From the log of the computation, it is found that the simulation failures are resulted from bad geometry, ill mesh, and weak convergence of the CFD solver. The HKC and HKHD models are built based on this set of low- and high-fidelity data. To measure the performance of the models, 350 samples are generated by Latin hypercube sampling and simulated with the high-fidelity simulation. In turn, 285 computations are successful. The performance metrics of those model predictions are listed in Table 6. The HKHD method spent 16.8s to build the HK model. This is a 90% saving of the modeling time compared with HKC, which spent 227.4s to tune the model. As for the accuracy, HKHD outperformed the HKC in terms of the R\({}^{2}\), RMSE and MAE metric. The R\({}^{2}\) is 0.975 and 0.907 for the HKHD and HKC, respectively, indicating the HKHD model is more accurate in the global view. For the local accuracy, HKHD is also superior to the HKC as the MAE values for those two models are 0.0295 and 0.0463. respectively. Moreover, comparison results of the simulation validation data and the predictions are illustrated in Fig. 15. It can be intuitively found that the HKHD method performs better than the HKC strategy. Overall, the proposed modeling strategy can build more accurate model with in significantly shorter time. This demonstrates the effectiveness of the proposed method on practical engineering problem. ## 5 Conclusions In this paper, an efficient HK modeling method is developed for improving the modeling efficiency over high-dimension multi-fidelity problems. The relative magnitudes of hyperparameters are estimated by maximal information coefficients or the hyperparameters of \begin{table} \begin{tabular}{c c c c c} \hline \hline Method & Time/s & R\({}^{2}\) & RMSE & MAE \\ \hline HKC & 227.4 & 0.907 & 0.0064 & 0.0463 \\ HKHD & 16.8 & 0.975 & 0.0033 & 0.0295 \\ \hline \hline \end{tabular} \end{table} Table 6: Metric values on the engineering problem Figure 15: Comparisons of the simulated values and the predictions lower fidelity model. Then the high-dimension maximum likelihood estimation problem is reformulated into a one-dimension problem to improve the modeling efficiency. Local correction search is added to further exploit the search space of the hyperparameters. To demonstrate the effectiveness and efficiency, ten numerical cases and one engineering modeling problem are tested. For the numerical examples, the proposed method only needs 1/7-1/10 time of the compared conventional strategy and can achieve higher accuracy. With the increase of the sample size, the modeling efficiency of the proposed method decreases and the model accuracy improves. For the conventional tuning strategy, the modeling efficiency decreases with the expansion of the sample set but the model accuracy is not always improved. As for the prediction of the isentropic efficiency of Rotor37, the cost saving associated with the proposed approach is about 90% compared with the conventional tuning strategy, and the proposed approach even achieves higher accuracy. Currently, the proposed method is illustrated for two-fidelity problems. We believe that extending the efficient modeling method to multi-fidelity problems would be straightforward. Moreover, optimization method based on the proposed efficient HK method will be pursued in the near future.
2309.14441
Revisiting Tree Isomorphism: An Algorithmic Bric-à-Brac
The Aho, Hopcroft and Ullman (AHU) algorithm has been the state of the art since the 1970s for determining in linear time whether two unordered rooted trees are isomorphic or not. However, it has been criticized (by Campbell and Radford) for the way it is written, which requires several (re)readings to be understood, and does not facilitate its analysis. In this article, we propose a different, more intuitive formulation of the algorithm, as well as three propositions of implementation, two using sorting algorithms and one using prime multiplication. Although none of these three variants admits linear complexity, we show that in practice two of them are competitive with the original algorithm, while being straightforward to implement. Surprisingly, the algorithm that uses multiplications of prime numbers (which are also be generated during the execution) is competitive with the fastest variants using sorts, despite having a worst theoretical complexity. We also adapt our formulation of AHU to tackle to compression of trees in directed acyclic graphs (DAGs). This algorithm is also available in three versions, two with sorting and one with prime number multiplication. Our experiments are carried out on trees of size at most $10^6$, consistent with the actual datasets we are aware of, and done in Python with the library treex, dedicated to tree algorithms.
Florian Ingels
2023-09-25T18:02:03Z
http://arxiv.org/abs/2309.14441v2
# Revisiting Tree Isomorphism: ###### Abstract The AHU algorithm has been the state of the art since the 1970s for determining in linear time whether two unordered rooted trees are isomorphic or not. However, it has been criticized (by Campbell and Radford) for the way it is written, which requires several (re)readings to be understood, and does not facilitate its analysis. In this paper, we propose an alternative version of the AHU algorithm, which addresses this issue by being designed to be clearer to understand and implement, with the same theoretical complexity and equally fast in practice.. Whereas the key to the linearity of the original algorithm lay on the careful sorting of lists of integers, we replace this step by the multiplication of lists of prime numbers, and prove that this substitution causes no loss in the final complexity of the new algorithm. **Keywords:** tree isomorphism, AHU algorithm, prime numbers multiplication ## 1 Introduction ### Context The Aho, Hopcroft and Ullman (AHU) algorithm, introduced in the 1970s [1, Example 3.2], establishes that the tree isomorphism problem can be solved in linear time, whereas the more general graph isomorphism problem is still an open problem today, where no proof of NP-completeness nor polynomial algorithm is known [34], even though very efficient algorithms exist [27, 4]. As far as we know, AHU remains the only state-of-the-art algorithm for practically determining whether two trees are isomorphic. Recently, Liu [25] proposed to represent a tree by a polynomial of two variables, computable in linear time, and where two trees have the same polynomial if and only if they are isomorphic. Unfortunately, the existence of an algorithm to determine the equality of two polynomials in polynomial time is still an open question [32]. We should also mention [10], which proposes an alternating logarithmic time algorithm for tree isomorphism - under NC complexity class framework, that is, problems efficiently solvable on a parallel computer [5]. One criticism made of the AHU algorithm concerns the way the algorithm is presented in the original article, which is claimed to be _utterly opaque. Even on second or third reading. When an algorithm is written it should be clear, it should persuade, and it should lend itself to analysis._ -- Douglas M. Campbell and David Radford [11] To the best of our knowledge, this remark seems to have remained a dead letter in the community, and no alternative, clearer version of the algorithm seems ever to have been published - with the exception of Campbell and Radford themselves, but with quasilinear complexity instead of linear. In this article, we propose to revisit the AHU algorithm by giving an alternative version that is intended to be easier to understand and implement, with the same theoretical complexity and equally fast in practice. This variant is based on replacing multiset hashing, originally carried out in the form of sorting lists of integers, by the elementary principle of multiplying lists of prime numbers. The rest of this section is devoted to introducing the notations and definitions useful for the rest of the paper. Section 2 presents the AHU algorithm, while Section 3 presents our variant of the same algorithm and its complexity analysis; both versions are then compared numerically in Section 4. ### Tree isomorphisms A rooted tree \(T\) is a connected directed graph without any undirected cycle such that (i) there exists a special node called the root and (ii) any node but the root has exactly one parent. The parent of a node \(u\) is denoted by \(\mathcal{P}(u)\), whereas its children are denoted by \(\mathcal{C}(u)\). The leaves \(\mathcal{L}(T)\) of \(T\) are the nodes without any children. Rooted trees are said to be unordered if the order among siblings is not significant. In this paper, we use _trees_ to refer to unordered rooted trees. The degree of a node \(u\) is defined as \(\deg(u)=\#\mathcal{C}(u)\) and the degree of a tree \(T\) as \(\deg(T)=\max_{u\in T}\deg(u)\). The depth \(\mathcal{D}(u)\) of a node \(u\) is the length of the path between \(u\) and the root. The depth \(\mathcal{D}(T)\) of \(T\) is the maximal depth among all nodes. The level of a node \(u\) is defined as \(\mathcal{D}(T)-\mathcal{D}(u)\). The sets of nodes of level \(d\) in a tree \(T\) is denoted by \(T^{d}\), and the mapping \(d\mapsto T^{d}\) can be constructed in linear time by a simple traversal of \(T\). **Definition 1**.: _Two trees \(T_{1}\) and \(T_{2}\) are said to be isomorphic if there exists a bijective mapping \(\varphi:T_{1}\to T_{2}\) so that (i) the roots are mapped together and (ii) for any \(u,v\in T_{1}\), \(v\in\mathcal{C}(u)\iff\varphi(v)\in\mathcal{C}(\varphi(u))\)._ Such a mapping \(\varphi\) is called a _tree isomorphism_. In other words, two trees are isomorphic if one can be obtained from the other by simply swapping the children of each node. An example of isomorphic trees is provided in Figure 1. Whenever two trees \(T_{1}\) and \(T_{2}\) are isomorphic, we note \(T_{1}\simeq T_{2}\). It is well known that \(\simeq\) is an equivalence relation on the set of trees [37]. The _tree isomorphism problem_ consists in deciding whether two trees are isomorphic or not. For the broader graph isomorphism problem, it is not usual to explicitly construct the isomorphism \(\varphi\) - let us mention nonetheless [14, Section 3.3] and [19] - but rather to compute a certificate of non-isomorphism. For instance, Weisfeiler-Lehman algorithms, also known as colour refinement algorithms [18, 21], colour the nodes of each graph according to certain rules, and the histograms of the colour distributions are then compared: if they diverge, the graphs are not isomorphic. This test is not complete in the sense that there are non-isomorphic graphs with the same colour histogram - even though the distinguishing power of these algorithms is constantly being improved [16]. While the graph isomorphism problem is not solved in the general case, it is solved for trees by virtue of the AHU algorithm, which is built on a colouring principle similar to that of Weisfeiler-Lehman. Figure 1: Two isomorphic trees. The Aho, Hopcroft and Ullman algorithm In this section we introduce AHU algorithm, that solves the tree isomorphism problem. First, we present the general principle of the algorithm in Section 2.1, before reproducing and commenting in Section 2.2 the original presentation of the algorithm as it can be found in [1]. ### Principle In [11], Campbell and Radford provide a very clear, step-by-step exposition of the intuitions that lead to the AHU algorithm, and we invite the interested reader to consult it. For the sake of self-containment, we offer here another intuition of how the AHU algorithm works, presented as a colouring process, thus making the connection with Weisfeiler-Lehman algorithms for graph isomorphism. The core idea behind AHU algorithm is to provide each node in trees \(T_{1}\) and \(T_{2}\) a canonical representative of its equivalence class for \(\simeq\), thus containing all the information about its descendants. The trees are isomorphic if and only if the canonical representatives of the roots are identical. The nodes of both trees are simultaneously browsed in ascending levels. Suppose that each node \(u\) of level \(d-1\) has been assigned a colour \(c(u)\), supposed to represent its equivalence class for the relation \(\simeq\). Each node \(u\) of level \(d\) is associated with a multiset \(\mathcal{C}_{c}(u)=\{c(v):v\in\mathcal{C}(u)\}\) - if \(u\) is a leaf, this multiset is denoted \(\emptyset\). Each distinct multiset is given a colour, which is assigned to the corresponding nodes. An illustration is provided in Figure 2. In the end, the trees are isomorphic if and only if their roots receive the same colour. Moreover, after processing level \(d\), if the multiset of colours assigned to the nodes of level \(d\) differs from one tree to the other, we can immediately conclude that the trees are not isomorphic. In practice, colours are represented by integers. The pseudocode for this ideal version of the AHU algorithm is given in Algorithm 1. We say ideal because it ignores an important implementation problem. Indeed, multisets must be treated carefully: in line 9 to find out whether a colour has already been assigned to multiset \(\mathcal{C}_{c}(u)\), and in line 13 to check whether the colours assigned to level \(d\) coincide between the trees. The latter can be addressed using pigeonhole sort [8] in linear time; whereas for the former, there are two main options: either by using hash functions specifically designed for multisets [12, 26], or by treating these Figure 2: Assigning colours to nodes in AHU algorithm. multisets as lists that are sorted before being hashed or compared. The second approach is commonly used, both by AHU and by Weisfeiler-Lehman algorithms. Before examining the actual AHU algorithm in detail, we investigate the complexity of Algorithm 1. If we assume that determining whether \(f(\mathcal{C}_{\mathrm{c}}(u))\) is defined or not, at line 9, can be accomplished in constant time (e.g. assuming a perfect hash function [23] working with multisets), then we have the following result. **Proposition 1**.: _Algorithm 1 runs in \(\mathbf{O}(\mathfrak{n})\), where \(\mathfrak{n}=\#\mathsf{T}_{1}=\#\mathsf{T}_{2}\)._ **Proof.** Fix a level \(d\) and a node \(u\in\mathsf{T}_{i}^{d}\). Building \(\mathcal{C}_{\mathrm{c}}(u)\) requires \(O(\deg(u))\); noticing that \(\sum_{u\in\mathsf{T}_{i}^{d}}\deg(u)=\#\mathsf{T}_{i}^{d-1}\) and that the comparison in line 13 can be done in \(O(\#\mathsf{T}_{i}^{d})\) - e.g. with pigeonhole sort [8]; summing over \(d\) leads to the result. \(\mathcal{P}\) Notably, this establishes that the tree isomorphism problem is solvable in linear time, provided that the assumption made above is valid. Note that there also exists a naive algorithm for tree isomorphism in \(O(n^{2})\)[11] which makes extensive use of Knuth tuples [22]. ### Original algorithm The description of the AHU algorithm in the original article [1, Example 3.2] is quite different from what has been presented previously. For the sake of self-containedness, we reproduce it here, where only minor changes have been made to fit the notations used in this paper: 1. First, assign to all leaves in \(\mathsf{T}_{1}\) and \(\mathsf{T}_{2}\) the integer \(0\). 2. Assume by induction that all nodes at level \(d-1\) of \(\mathsf{T}_{1}\) and \(\mathsf{T}_{2}\) have been assigned an integer. Let \(\mathsf{L}_{1}\) (respectively \(\mathsf{L}_{2}\)) be the list of nodes in \(\mathsf{T}_{1}\) (respectively \(\mathsf{T}_{2}\)) at level \(d-1\) sorted by non-decreasing value of the assigned integers. 3. Assign to the nonleaves of \(\mathsf{T}_{1}\) at level \(d\) a tuple of integers by scanning the list \(\mathsf{L}_{1}\) from left to right and performing the following actions: * For each vertex on list \(\mathsf{L}_{1}\) take the integer assigned to \(u\) to be the next component of the tuple associated with \(\mathcal{P}(u)\). * On completion of this step, each nonleaf \(w\) of \(\mathsf{T}_{1}\) at level \(d\) will have a tuple \((i_{1},i_{2},\ldots,i_{k})\) associated with it, where \(i_{1},\ldots,i_{k}\) are the integers, in non-decreasing order, associated with the children of \(w\). * Let \(\mathsf{S}_{1}\) be the sequence of tuples created for the vertices of \(\mathsf{T}_{1}\) on level \(d\). 4. Repeat Step 3 for \(\mathsf{T}_{2}\) and let \(\mathsf{S}_{2}\) be the sequence of tuples created for the vertices of \(\mathsf{T}_{2}\) on level \(d\). 5. Sort \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\) lexicographically. Let \(\mathsf{S}_{1}^{\prime}\) and \(\mathsf{S}_{2}^{\prime}\), respectively, be the sorted sequence of tuples. 6. If \(\mathsf{S}_{1}^{\prime}\) and \(\mathsf{S}_{2}^{\prime}\) are not identical, then halt: the trees are not isomorphic. Otherwise, assign the integer \(1\) to those vertices of \(\mathsf{T}_{1}\) on level \(d\) represented by the first distinct tuple on \(\mathsf{S}_{1}^{\prime}\), assign the integer \(2\) to the vertices represented by the second distinct tuple, and so on. As these integers are assigned to the vertices of \(\mathsf{T}_{1}\) on level \(d\), replace \(\mathsf{L}_{1}\) by the list of the vertices so assigned. Append the leaves of \(\mathsf{T}_{1}\) on level \(d\) to the front of \(\mathsf{L}_{1}\). Do the same for \(\mathsf{L}_{2}\). \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) can now be used for the assignment of tuples to nodes at level \(d+1\) by returning to Step 3. 7. If the roots of \(\mathsf{T}_{1}\) and \(\mathsf{T}_{2}\) are assigned the same integer, \(\mathsf{T}_{1}\) and \(\mathsf{T}_{2}\) are isomorphic. Note that, in Step 5, the authors resort to a variant of radix sort [1, Algorithm 3.2]. Actually, the tree isomorphism problem and AHU algorithm are only introduced in the book as an application example of this sorting algorithm. To analyse the complexity of AHU algorithm, the authors make the assumption that trees are sufficiently small so that they can be described by a \(k\) bit word (i.e. with a 64-bit machine, \(\#T<2^{64}\)). For the purpose of this paper, we reframe this assumption as follows. **Assumption 1**.: _For any considered tree \(T,\log\#T=O(1)\)._ With this assumption, they show that tree isomorphism can be solved in linear time. **Theorem 1**.: _AHU algorithm runs in \(O(n)\) where \(n=\#T_{1}=\#T_{2}\)._ **Proof.** See the proofs in [1, Example 3.2] for the whole algorithm and especially [1, Algorithm 3.2] for sorting lists \(S_{1}\) and \(S_{2}\) in Step 5. Assumption 1 is made to ensure that the largest integer manipulated in the various lists is not too large, and therefore that the (linear) sorting algorithm for these lists can effectively consider these numbers as integers and not as sequences of \(0\)s and \(1\)s. \(\mathcal{P}\) **Remark 1**.: _If Assumption 1 is relaxed, there are (large) trees for which the algorithm runs in \(O(n\log n)\); see [11]._ As already mentioned in the introduction, Campbell and Radford describe this formulation of the algorithm as "utterly opaque. Even on second or third reading." (sic) [11]. In their view, this is detrimental to understanding the algorithm and being able to analyse and implement it. Based on this observation, the natural question that arises is whether it is possible to find a version of AHU algorithm that is easier to understand and analyze, while remaining linear under Assumption 1, and manipulates only elementary concepts, just as original AHU does by sorting lists of integers. Hash functions designed for multisets have already been mentioned [12, 26], but they involve advanced concepts, which would make implementation difficult for non-specialists. For this reason, they are beyond the scope of this paper. Instead, we propose an algorithm that uses only elementary concepts, replacing hash of multisets by multiplications of primes numbers. ## 3 Revisiting AHU algorithm In Algorithm 1, we need to associate a unique integer \(f(\mathcal{C}_{c}(u))\) to each distinct multiset \(\mathcal{C}_{c}(u)\) of integers encountered. There is a particularly simple and fundamental example where integers are associated with multisets: prime factorization. Indeed, through the fundamental theorem of arithmetic, there is a bijection between integers and multisets of primes. For example, \(360=2^{3}\cdot 3^{2}\cdot 5\) is associated to the multiset \(\{2,2,2,3,3,5\}\). Note that this bijection is well known [9], and has already been successfully exploited in the literature for prime decomposition, but also usual operations such as product, division, gcd and lcd of numbers [36]. To the best of our knowledge, this link has never been exploited to replace multiset hashing, a fortiori in the context of graph isomorphism algorithms - such as Weisfeiler-Lehman, or AHU for trees. Note, however, that this approach has been used in the context of evaluating poker hands [35], where prime multiplication has been preferred to sorting cards by value in order to get a unique identifier for each distinct possible hand. Since the original AHU sorts lists of integers, the main difficulty in making this substitution is to ensure that the complexity of multiplying lists of primes does not exceed that of sorting lists of integers. In Section 3.1, we present our version of AHU algorithm which uses multiplication of primes; while Section 3.2 presents its complexity analysis and shows that, under the same Assumption 1 as original AHU, our algorithm is still linear. ### AHU algorithm with primes Suppose that each node \(u\) at level \(d\) has received a prime number \(c(u)\), assuming that all nodes at that level and of the same class of equivalence have received the same number. Then, to a node \(u\) at level \(d\), instead of associating the multiset \(\mathcal{C}_{c}(u)=\{c(v):v\in\mathcal{C}(u)\}\), we associate the number \(N(u)=\prod_{v\in\mathcal{C}(u)}c(v)\). The nodes of level \(d\) are then renumbered with prime numbers - where each distinct number \(N(u)\) gets a distinct prime. The fundamental theorem of arithmetic ensures that two identical multisets \(\mathcal{C}_{c}(\cdot)\) receive the same number \(N(\cdot)\). The pseudocode for this new version of AHU is presented in Algorithm 2. This version of AHU differs from the ideal version presented in Algorithm 1 by only few lines: line 4, which defines additional variables required for this version; line 9 (respectively line 8 in Algorithm 1), which replaces the multiset \(\mathcal{C}_{c}(u)\) with the number \(N(u)\); and line 11 (respectively line 10), that replaces the increment of \(k\) by providing a new prime number \(p\). The subroutine NextPrime, introduced in Algorithm 3, returns the next prime not already used at the current level; if there is no unassigned prime in the current list \(P\), then new primes are generated using a segmented version of the sieve of Eratosthenes. ``` Input:\(T_{1},T_{2}\) Output:\(\top\) if and only if \(T_{1}\simeq T_{2}\) 1if\(\mathcal{D}(T_{1})\neq\mathcal{D}(T_{2})\)then 2return\(\bot\) 3else 4\(P\leftarrow[2,3,5,7,11,13]\) and \(N_{\text{sieve}}\gets 16\) 5for\(d\) from\(0\) to\(\mathcal{D}(T_{1})\)do 6 Let \(f:1\mapsto 2\) 7\(p\gets 2\) 8for\(i\in\{1,2\}\) and \(u\in T_{i}^{d}\)do 9\(N(u)\leftarrow\prod_{v\in\mathcal{C}(u)}c(v)\) 10if\(f(N(u))\) is not definedthen 11\(N_{\text{sieve}},P,p\leftarrow\)NextPrime\((N_{\text{sieve}},P,p)\) 12 Define \(f(N(u))=p\) 13\(c(u)\gets f(N(u))\) 14if\(\{c(u):u\in T_{1}^{d}\}\neq\{c(u):u\in T_{2}^{d}\}\)then 15return\(\bot\) 16 return\(\top\) ``` **Algorithm 2**PrimeAHU Let us denote \(p_{n}\) the \(n\)-th prime number. There are well known bounds on the value of \(p_{n}\)[13, 31] - with In denoting the natural logarithm and \(n\geq 6\): \[n(\ln n+\ln\ln n-1)<p_{n}<n(\ln n+\ln\ln n). \tag{1}\] Suppose we have the list of all primes \(P\leq N_{\text{sieve}}\), where \(N_{\text{sieve}}\) is the largest integer sieved so far. With \(\#P=n-1\), to generate \(p_{n}\), we simply resume the sieve up to the integer \(\lceil n(\ln n+\ln\ln n)\rceil\), starting from \(\lfloor n(\ln n+\ln\ln n-1)\rfloor\) or \(N_{\text{sieve}}+1\), whichever is greater - to make sure there is no overlap between two consecutive segments of the sieve. With this precaution in mind, the total complexity of the segmented sieve is the same as if we had directly performed the sieve in one go [6]; i.e., \(O(N\log\log N)\) for a sieve performed up to integer \(N\). Therefore, to generate the first \(n\) prime numbers, according to (1), the complexity of the sieve can be evaluated as \(O\left(n\cdot(\log n+\log\log n)\cdot\log\log\left(n\cdot(\log n+\log\log n) \right)\right).\) We refer the reader to [29] for practical considerations on the implementation of the segmented sieve of Eratosthenes. **Remark 2**.: _Note that other sieve algorithms exist, with better complexities - such as Atkin sieve [2] or the wheel sieve [30]; the sieve of Eratosthenes has the merit of being the simplest to implement and sufficient for our needs. Also, a better asymptotic complexity but with a worse constant can be counterproductive for producing small primes - which is rather our case since we generate the primes in order._ ### Complexity analysis There are several points to adress when analysing Algorithm 2: (i) the complexity for testing whether or not \(f(N)\) is defined in line 10; (ii) the number of primes required by the algorithm, and the complexity for generating them; and (iii) the complexity of multiplication in line 9. To simplify the notations, let us assume that we run the algorithm with \(T_{1}=T_{2}=T\) - this is the worst case, since, if \(T_{1}\not\simeq T_{2}\), we do not visit all the levels. (i)The question of determining in \(O(1)\) whether \(f(N)\) is defined or not is not trivial in Algorithm 1 where \(f(\cdot)\) hashes multisets; however, for integers it is possible by virtue of [15]. Note also that [15] also provides a way to create a table, which associates with integer \(i\) the \(i\)-th prime number, searchable in \(O(1)\). An application of this table is found in line 14: one can use pigeonhole sort to compare the two lists, with complexity \(O(\#T^{d}+p_{n})\), where \(p_{n}\) is the biggest prime in the list; but many holes will be unnecessary (as \(c(u)\) is necessarily prime). Using the table, one can use only \(n\) holes, one for each prime number, which reduces the complexity to \(O(\#T^{d}+n)\). Since the primes are reallocated at each level, at level \(d\) we need as many primes as there are different equivalence classes at that level - i.e. \(\#c(u):u\in T^{d}\). This number is \(\leq\#T^{d}\), therefore the complexity of the sort collapses to \(O(\#T^{d})\). (ii)As already discussed, to generate the first \(n\) primes, the sieve must be carried out up to the integer \(N=n\cdot(\ln n+\ln\ln n)\), for total complexity \(O(N\log\log N)\). The number of primes required by Algorithm 2 at level \(d\) is \(\#c(u):u\in T^{d}\). Thus, in total, the number of primes needed is exactly \(\max\limits_{d\in[0,D(T)]_{1}}\#c(u):u\in T^{d}\)). We call this number the _width_ of \(T\) and denote it by \(\mathcal{W}(T)\). We have the following key result. **Proposition 2**.: _For any tree \(T,\mathcal{W}(T)\cdot(\ln\mathcal{W}(T)+\ln\ln\mathcal{W}(T))=O(\#T)\)._ **Proof.** The proof can be found in Appendix A. \(\diameter\)It follows that generating the primes required for Algorithm 2 is done in \(O(\#T\log\log\#T)\). **(iii)** Let us denote \(M(n)\) the complexity for multiplying two \(n\)-bits numbers, where \(M(n)\) depends on the algorithm used: \(O(n^{2})\) for usual schoolbook algorithm, \(O(n^{1.585})\) with Karatsuba algorithm [20], and \(O(n\cdot\log n\cdot\log\log n)\) with Schonhage-Strassen algorithm [33]. The fastest known algorithm was recently introduced [17], with complexity \(O(n\cdot\log n)\) - even if this result is, by the authors' own admission, primarily theoretical. For our complexity proof, we assume that the algorithm used is Schonhage-Strassen; note that, in practice, for small integers, algorithms with a worse complexity but a better constant are used. Nonetheless, we assume in the sequel that \(M(n)=n\cdot\log n\cdot\log\log n\), for the purpose of the proof. Multiplying two \(n\)-bits numbers together yields a \(2n\)-bits number. Therefore, if we sequentially multiply \(m\) numbers of \(n\) bits together, the total complexity can be evaluated as \(M(n)+M(2n)+\cdots+M((m-1)n)\), which is \(O(m\cdot M(mn))\). A better way is to adopt a divide and conquer approach and multiply two numbers which themselves are the recursive product of \(m/2\) numbers. This strategy leads to a complexity of \(O(M(mn))\) by virtue of the Master Theorem [7]. ConclusionCombining all the above discussions, we get the following result. **Proposition 3**.: _Algorithm 2 runs in_ \[O\left(\#T\cdot\log p_{\mathcal{W}(T)}\cdot\log\left(\deg(T)\cdot\log p_{ \mathcal{W}(T)}\right)\cdot\log\log\left(\deg(T)\cdot\log p_{\mathcal{W}(T)} \right)+\#T\log\log\#T\right)\] _where \(p_{\mathcal{W}(T)}\) is the largest prime needed by the algorithm._ **Proof.** By the previous discussion in (ii), we consider separately the generation of primes, whose total complexity is \(O(\#T\log\log\#T)\). Now, fix \(d\in[0,\mathcal{D}(T)]\) and \(u\in T^{d}\). Computing \(N(u)\) implies multiplying \(\deg(u)\) primes with at most \(\log p_{\mathcal{W}(T)}\) bits, which is, following (iii), \(O\left(M\left(\deg(u)\cdot\log p_{\mathcal{W}(T)}\right)\right)\) - with \(M(n)=n\cdot\log n\cdot\log\log n\) as stated earlier. Lines 10 to 13 are \(O(1)\) from the discussion in (i). Sorting the lists in line 14 is \(O(\#T^{d})\) - also from (i). Processing level \(d\) thus requires \[O\left(\#T^{d-1}\cdot\log p_{\mathcal{W}(T)}\cdot\log\left(\deg(T)\cdot\log p _{\mathcal{W}(T)}\right)\cdot\log\log\left(\deg(T)\cdot\log p_{\mathcal{W}(T) }\right)+\#T^{d}\right),\] noticing that \(\sum_{u\in T^{d}}\deg(u)=\#T^{d-1}\) and bounding other occurrences of \(\deg(u)\) by \(\deg(T)\). Summing over \(d\) leads to the claim. \(\diameter\) As already stated, the original AHU algorithm is linear only under the assumption that trees are not too large - recall Assumption 1. The term \(\log\log\#T\) coming from the generation of primes immediately vanishes. We now analyse the term \[\log p_{\mathcal{W}(T)}\cdot\log\left(\deg(T)\cdot\log p_{\mathcal{W}(T)} \right)\cdot\log\log\left(\deg(T)\cdot\log p_{\mathcal{W}(T)}\right).\] First, since \(\deg(T)<\#T\), we have \(\log\deg(T)=O(1)\). Using (1) we have \[p_{\mathcal{W}(T)}<\mathcal{W}(T)\left(\ln\mathcal{W}(T)+\ln\ln\mathcal{W}(T) \right),\] where \(\ln\) is the natural logarithm. Using Proposition 2, we have \(p_{\mathcal{W}(T)}=O(\#T)\). It follows immediately that \(\log p_{\mathcal{W}(T)}=O(1)\). The nested logarithms follows without difficulty. Finally, we have proven the following result. **Theorem 2**.: _Algorithm 2 runs in \(O(n)\) with \(n=\#T_{1}=\#T_{2}\)._ Numerical experiments We established that PrimesAHU is equivalent in complexity (under the same assumption) to the original version. However, this theoretical result would be of little interest, especially with regard to the "ease of implementation" argument, if the constant were much larger, resulting in disproportionately long calculation times compared with the original algorithm. We show here that this is not the case, by comparing on random trees two Python implementations of the algorithm, where (i) PrimesAHU turns out to be faster in the case when the trees are isomorphic, and (ii) of comparable time in the case when they are not. Section 4.1 provides some insights about our implementation of the two algorithms, while Section 4.2 presents the results obtained. ### Comments on the implementation We implemented in Python the two algorithms discussed in this paper, the original AHU (oAHU for short) and PrimesAHU (pAHU for short), making extensive use of the treex library [3], designed to manipulate trees. We tried, as far as possible, to re-implement all the auxiliary functions used by the algorithms, to avoid an unfair advantage, linked, for example, to the use of highly optimised functions in Python. Typically, to multiply lists of prime numbers, we have implemented and used the recursive procedure described in Section 3.2-(ii), even though the numpy.prod function is faster when lists become sufficiently large. Note, however, that for multiplication operations \(*\), we let Python choose the appropriate algorithm (schoolbook for small numbers and Karatsuba for large numbers), whereas we chose Schonhage-Strassen for the purposes of our proof of complexity in Section 3.2 - despite being slower in practice for small numbers. Concerning perfect hash function for integers, we used Python dictionaries. Note also our implementation of the segmented sieve of Eratosthenes ignores multiples of 2 and 3, thus making the sieve 6 times faster. Experiments have been conducted on a HP Elite Notebook with 32 Go of RAM and Intel Core i7-1365U processor. ### Results Provided an integer \(n\), to treat the case \(T_{1}\simeq T_{2}\), we generate a random recursive tree \(T\) of size \(n\)[38], generate a copy \(T^{\prime}\) of \(T\), and measure the computation time taken for both algorithms - oAHU and pAHU - on the couple \((T,T^{\prime})\). To treat the case \(T_{1}\not\simeq T_{2}\), we directly generate a pair \((T_{1},T_{2})\) of random recursive trees of size \(n\) and measure the computation time for both algorithm. We generated 100 couples for both cases and with \(n=10^{i}\) for each \(i\in[\![1,6]\!]\). The results are depicted in Figure 3. As expected, both algorithms behave linearly; pAHU is faster in the case \(T_{1}\simeq T_{2}\), as can be seen in Figure 3, whereas it achieves comparable time to oAHU in the case \(T_{1}\not\simeq T_{2}\), as soon as the trees are not too small - see Figure 3. Note also that it is about 10 times faster to conclude \(\not\simeq\) than \(\simeq\) on the examples considered. It is not our intention here to explain the differences in performance between the two algorithms - that is a topic requiring more detailed analysis - but suffice it to say that our algorithm, pAHU, is capable of performing just as well as oAHU in practice, which was our objective in this section. Figure 3: Computation time (in seconds, log scale) for testing isomorphism of random recursive trees of sizes \(10^{i}\), \(i\in[1,6]\), using either oAHU or pAHU algorithm, with 100 couples of trees tested for each size and each case. ## Conclusion and perspectives Following a remark by Campbell & Radford, who deplored the lack of clarity of the AHU algorithm (fundamental to understanding the tree isomorphism problem), we proposed a variant of this same algorithm: (i) with the same theoretical complexity; (ii) just as fast in practice, and (iii) intended to be simple to understand and implement. AHU works by sorting lists of integers, with the aim of computing a unique hash of multisets. We proposed instead to use an equally elementary concept, the multiplication of prime numbers, to also compute an invariant - thanks to the fundamental theorem of arithmetic. We mentioned previously that the Weisfeiler-Lehman algorithms, used for graph isomorphism, use integer list sorts just like AHU to determine the next colour to assign to each node. This raises the question of whether these sorts can also be replaced by our idea of multiplying lists of prime numbers. While this issue is outside the scope of this paper, and remains to be investigated, let us nonetheless mention two points that may prove challenging. First, the way Weisfeiler-Lehman operates can lead to processing as many colours as there are nodes in the graph, and therefore having to generate as many prime numbers - requiring the sieve of Eratosthenes to be run up to an integer supra-linear in the size of the graph. Next, we would multiply lists whose size depends on the degree of the current node; in a dense or complete graph, this means lists whose size is comparable to the number of nodes in the graph. The complexity of performing these multiplications could prove far more expensive than for trees. Since (1-dimensional) Weisfeiler-Lehman can be implemented in \(\mathrm{O}((\#\mathrm{V}+\#\mathrm{E})\log\#\mathrm{V})\) for a graph \(\mathrm{G}=(\mathrm{V},\mathrm{E})\), it remains to be investigated whether or not the additional complexities mentioned above exceeds that of the original algorithm. See [21, Section 3.1] and references therein for a more precise description of the Weisfeiler-Lehman algorithms. ## Acknowledgements The author would like to thank Dr. Romain Azais and Dr. Jean Dupuy for their helpful suggestions on the first draft of the article.
2309.12030
Striking Gold in Advertising: Standardization and Exploration of Ad Text Generation
In response to the limitations of manual ad creation, significant research has been conducted in the field of automatic ad text generation (ATG). However, the lack of comprehensive benchmarks and well-defined problem sets has made comparing different methods challenging. To tackle these challenges, we standardize the task of ATG and propose a first benchmark dataset, CAMERA, carefully designed and enabling the utilization of multi-modal information and facilitating industry-wise evaluations. Our extensive experiments with a variety of nine baselines, from classical methods to state-of-the-art models including large language models (LLMs), show the current state and the remaining challenges. We also explore how existing metrics in ATG and an LLM-based evaluator align with human evaluations.
Masato Mita, Soichiro Murakami, Akihiko Kato, Peinan Zhang
2023-09-21T12:51:24Z
http://arxiv.org/abs/2309.12030v2
# CAMERA: A Multimodal Dataset and Benchmark for Ad Text Generation ###### Abstract In response to the limitations of manual online ad production, significant research has been conducted in the field of automatic ad text generation (ATG). However, comparing different methods has been challenging because of the lack of benchmarks encompassing the entire field and the absence of well-defined problem sets with clear model inputs and outputs. To address these challenges, this paper aims to advance the field of ATG by introducing a redesigned task and constructing a benchmark. Specifically, we defined ATG as a cross-application task encompassing various aspects of the Internet advertising. As part of our contribution, we propose a first benchmark dataset, **CA**Multimodal **E**valuation for Ad Text **G**ene**R**A**tion(CAMERA), carefully designed for ATG to be able to leverage multi-modal information and conduct an industry-wise evaluation. Furthermore, we demonstrate the usefulness of our proposed benchmark through evaluation experiments using multiple baseline models, which vary in terms of the type of pre-trained language model used and the incorporation of multi-modal information. We also discuss the current state of the task and the future challenges. ## 1 Introduction Over the past few decades, online advertising has emerged as one of the most successful business models and has become a significant source of income for the Internet industry (Meeker and Wu, 2018). The global online advertising market has witnessed significant growth and quadrupled over the last decade, particularly in the domain of search ads or search engine advertising (Figure 1). Search ads are designed to accompany search engine results and are tailored to be relevant to users' queries (search queries). They are typically sold based on pre-selected keywords, also known as _bid words_, chosen by advertisers. These ads are displayed alongside a landing page (LP), providing further details about the advertised product or service. Therefore, ad creators must create compelling ad texts that captivate users and encourage them to visit the LP. However, the increasing volume of search queries, which is growing at a rate of approximately 8% annually (Djuraskovic, 2022), poses challenges for manual ad production. The growing demand in the industry has fueled extensive research on the automatic generation of ad texts. Researchers have explored various approaches, starting with _template-based_ methods that generate ad text by inserting relevant keywords into predefined templates (Bartz et al., 2008; Fujita et al., 2010; Thomaidou et al., 2013). Recently, neural language generation (NLG) techniques based on encoder-decoder models, which are widely employed in machine translation and automatic summarization, have been applied to ad text generation (ATG) (Hughes et al., 2019; Mishra et al., 2020; Kamigaito et al., 2021). However, the automated evaluation of ATG models presents significant challenges that need to be addressed. Previous research has been constrained to conducting individual experiments using proprietary datasets that are not publicly available (Murakami et al., 2023). This limitation arises from the absence of a shared dataset (i.e., a benchmark) that can be universally applied across the field. Moreover, the absence of benchmarks has resulted in a lack of consensus regarding the models' in Figure 1: An example of search ads. put/output formats. While some studies use keywords as input (Bartz et al., 2008; Fukuda, 2019), others employ existing advertisements (Mishra et al., 2020) or LPs that users click on after viewing an advertisement (Hughes et al., 2019; Kanungo et al., 2022; Golobokov et al., 2022). This variation in input sources indicates that the field as a whole has yet to establish a standardized problem setting, which hinders the generalization and comparability of ATG techniques. This study aims to significantly contribute to advancing ATG as an academic field by comprehensively redesigning this task. We define the problem setting of ATG as a versatile task that can be applied to various online advertising domains (SS3). We also highlight the differences between this task and existing tasks (e.g., summarization), the unique technical challenges, and its academic significance as a research subject. In order to engage a broader community of researchers beyond those who possess ad data, we construct the first publicly available benchmark, **CAMERAE**, which is meticulously developed a comprehensive dataset that serves as a reliable resource for training, validating, and evaluating models for this task (SS4). Our dataset comprises actual data sourced from Japanese search ads and incorporates extensive annotations encompassing multi-modal information such as the LP images. To demonstrate the usefulness of the proposed benchmark dataset and provide insights into the current state and future challenges of this task, we conducted evaluation experiments by building several baseline models with variations in terms of pre-trained models (BART (Lewis et al., 2020) and T5 (Raffel et al., 2022)) and the use of multi-modal information (e.g., layout information and visual features in LPs) (SS5). The results demonstrate that the performance of ATG models varied significantly across different industries and also effectively leveraging multi-modal information poses a challenge for future research. Finally, we discuss the remaining challenges of ad text generation and directions for resolving them for future development (SS6). Our major contributions are: * We redesigned ATG as a cross-application task, and clarified its academic significance and differences from existing tasks and then build the first publicly available benchmark, CAMERAE, which is meticulously designed for ATG task. * We demonstrated the usefulness of the proposed dataset and the current status and future challenges of ATG task through evaluation experiments using current mainstream ATG models. * We make our dataset available to academic researchers to facilitate their investigation, while taking care not to cause any disadvantage to advertisers.1 Footnote 1: [https://github.com/CyberAgentAllLab/camera](https://github.com/CyberAgentAllLab/camera) ## 2 Background Various types of online advertising exist, including search ads, display ads 2, and slogans 3. However, since most existing studies are related to search ads (Murakami et al., 2023), this study also focuses on search ads and provides an overview of ATG research and its current limitations. Footnote 2: Display ads typically take the form of banner ads strategically placed within designated advertising spaces on websites or applications. Footnote 3: Slogans are catchy phrases designed to captivate the attention of internet users and generate interest in products, services, or campaigns. ### A Quick Retrospective Early ATG systems predominantly relied on template-based approaches (Bartz et al., 2008; Fujita et al., 2010; Thomaidou et al., 2013). These approaches involved filling appropriate words (i.e., keywords) into predefined templates, resulting in the generation of advertising texts. Although this method ensured grammatically correct ad texts, it has limitations in diversity and scalability because it could only accommodate variations determined by the number of templates, which are expensive to create. To address these constraints, alternative approaches have been explored, including reusing existing promotional text (Fujita et al., 2010) and extracting keywords from LPs to populate template slots (Thomaidou et al., 2013). Encoder-decoder models, which have demonstrated their utility in NLG tasks such as machine translation and summarization (Sutskever et al., 2014), have been applied to ATG research (Hughes et al., 2019; Youngmann et al., 2020; Kamigaito et al., 2021; Golobokov et al., 2022). These models have been employed in various approaches, including _translating_ low click-through-rate (CTR) sentences into high CTR sentences (Mishra et al., 2020), _summarizing_ crucial information extracted from the LPs (Hughes et al., 2019; Kamigaito et al., 2021), and combining these techniques by first summarizing the LPs and subsequently translating them into more effective ad texts based on CTR (Youngmann et al., 2020).4 Recently, transfer learning approaches using pre-learned language models have become mainstream, allowing for more fluent and diverse ad text generation (Wang et al., 2021; Zhang et al., 2021; Golobokov et al., 2022; Kanungo et al., 2022; Wei et al., 2022; Li et al., 2022; Murakami et al., 2022). Footnote 4: CTR is a widely-used indicator of advertising effectiveness in the online advertising domain. ### Current Limitations ATG has experienced remarkable growth in recent years, garnering significant attention as a valuable application of natural language processing (NLP). However, the automated evaluation of models presents substantial challenges. These challenges are primarily due to the absence of a shared benchmark dataset that can benefit the entire research community, resulting in individual validation using non-public data and impeding comprehensive comparisons among different methods (Murakami et al., 2023). Table 1 summarizes existing studies in ATG. This table demonstrates that ATG field has primarily been driven by advertising-related companies. It is worth noting that there is no consensus on the input-output format as each company has independently verified their ATG systems using its datasets, which is a significant impediment to the generalization of technology. In addition, while ATG research was initially dominated by data mining-related research areas such as KDD and CIKM, it has recently begun to attract attention in the ACL community. As a confluence of these trends, this study aims to establish ATG as an NLP task by defining the task and building a benchmark. ## 3 Design of Ad Text Generation We define the ATG task as follows: **Task definition** Let \(\mathbf{x}\) be a source document that describes advertised products or services, \(\mathbf{a}\) a user signal reflecting the user's latent needs or interests, and \(\mathbf{y}\) an ad text. ATG aims is to model \(\mathbf{p(y|a,x)}\). In search ads, there are numerous variations in input/output settings based on individual company specifications, with the potential for further changes in the future (see Table 1). To foster the generalization of ATG technology in an academic research context, we aimed to develop a task not tied to a specific application (e.g., Google, Microsoft Bing, and Yahoo search engines) but focused on universal core problems shared across these applications. Although this study's primary focus is search ads, our redesigned task setting is adaptable to tackling specific challenges in other advertising domains, including display ads and various merchandise advertisements. For example, in the case of display ads, the user signal \(\mathbf{a}\) could be the user's purchase history. **The requirements of ad text** The purpose of advertising is to influence consumers' (users) atti \begin{table} \begin{tabular}{l l l l l l l} \hline \hline **Work** & **Approach** & **Input** & **Output** & **Affiliation** & **Lang.** & **xACL** \\ \hline Bartz et al. (2008) & Template & Keyword & Ad text & Yahoo & En & \\ Fujita et al. (2010) & Template & Promotional text & Ad text, Keyword & Recruit & Ja & \\ Thomaidou et al. (2013) & Template & LP & Ad text & Athens Univ. & En & \\ Hughes et al. (2019) & Seq2Seq & LP & Ad text & Microsoft & En & \\ Fukuda (2019) & Seq2Seq & Keyword & Ad text & DENTSU & Ja & \\ Mishra et al. (2020) & Seq2Seq & Ad text & Ad text & Yahoo & En & \\ Youngmann et al. (2020) & Seq2Seq & LP, Ad text & Ad text & Microsoft & En & \\ Duan et al. (2021) & Seq2Seq & Query, KB & Ad text & Tencent & Zh & \\ Kamigaito et al. (2021) & Seq2Seq & LP, Query, Keyword & Ad text & CyberAgent & Ja & ✓ \\ Wang et al. (2021) & Seq2Seq & LP, Ad text & Ad text & Microsoft & En & \\ Zhang et al. (2021) & Seq2Seq & Ad text, Keyword, KB & Ad text & Baidu & Zh & \\ Golobokov et al. (2022) & Seq2Seq & LP & Ad text & Microsoft & En & ✓ \\ Kanungo et al. (2022) & Seq2Seq & Multiple ad texts & Ad text & Amazon & En & \\ Wei et al. (2022) & Seq2Seq & User review, Control code & Ad text & Alibaba & Zh & ✓ \\ Li et al. (2022) & Seq2Seq & Query & Ad text, Keyword & Microsoft & En & ✓ \\ Murakami et al. (2022) & Seq2Seq & Keyword, LP & Ad text & CyberAgent & Ja & \\ \hline \hline \end{tabular} \end{table} Table 1: A summary of existing research on ad text generation. _xACL_ (✓) presents whether the paper belongs to the ACL community, or some other research community (no ✓). tudes and behaviors towards a particular product or service. Therefore, the goal of ATG is to create text that encourages users' purchasing behaviors. In this study, we have identified the following two fundamental requirements of ad text: (1) The information provided by the ad text is consistent with the content of the source document; and (2) the information is carefully curated and filtered based on the users' potential needs, considering the specific details of the merchandise. Requirement 1 relates to _hallucinations_, which is currently a highly prominent topic in the field of NLG (Wiseman et al., 2017; Parikh et al., 2020; Maynez et al., 2020). This requirement can be considered crucial for practical implementation since the inclusion of _non-factual hallucination_ in ad texts can cause business damage to advertisers. Regarding Requirement 2, it is necessary to successfully convey the features and attractiveness of a product within a limited space and immediately capture the user's interests. Therefore, ad text must selectively include information from inputs that can appeal to users. Differences from existing tasksThe ATG task is closely related to the conventional document summarization task in that it performs information compression while maintaining consistency with the input document's content. Particularly, _query-focused summarization (QFS)_(Dang, 2005), a type of document summarization, is the closest in problem setting because it takes the user's query as the input; however, there are some differences. The task of QFS aims to create a summary from one or multiple document(s) that answers a specific query (_explicit needs_), which is exactly the same behavior as a search engine. In contrast, ATG (especially for search ads) must extract _latent needs_ from user signals (search queries) and then return a summary. Another notable difference is that while summarization aims to deliver accurate text that fulfills task-specific requirements, ATG surpasses mere accuracy and aims to influence user attitudes and behavior. Consequently, unconventional and/or ungrammatical text may be intentionally used in ad-specific expressions to achieve this objective (refer to details in SS4.2). Therefore, QFS is a subset of ATG (QFS \(\subset\) ATG). One of the technical challenges unique to ATG is capturing users' latent needs based on such user signals \(\mathbf{a}\) and generating appealing sentences that lead to advertising effectiveness, which depends significantly on the psychological characteristics of the recipient users. Therefore, realizing more advanced ATG will also require a connection with advertising psychology (Scott, 1903) based on cognitive and social psychology. The ATG task is an excellent research topic for advancing user-centered NLP technologies. ## 4 Construction of CAMERA ### Dataset Design In this study, the following two benchmark design policies were first established: the benchmark should be able to (1) utilize multimodal information and (2) evaluate by industry domain. In terms of **Design Policy 1**, various advertising formats use textual and visual elements to communicate product features and appeal to users effectively. It is well-recognized that aligning content with visual information is crucial in capturing user attention and driving CTR. Exploring the effective utilization of such multi-modal information is crucial for the ATG task and is a key design policy. **Design Policy 2** highlights the significance of incorporating specific _advertising appeals_ to create impactful ad texts. In general, ad creators must consider various aspects of advertising appeals such as the _price_, _product features_, and _quality_. For instance, advertising appeals in terms of _price_ such as _"free shipping"_ and _"get an extra 10% off"_ captivate users by emphasizing cost savings through discounts and competitive prices. Previous studies revealed that the effectiveness of these advertising appeals vary depending on the target product and industry type (Murakami et al., 2022). To foster the development of robust models, it is crucial to conduct an industry-wise evaluations. ### Construction Procedure We utilized Japanese search ads from our company involved in the online advertising business.5 In \begin{table} \begin{tabular}{l r r r} \hline \hline & \# instance & \# ad text & Industry-wise \\ \hline Train & 12,395 & 1 & \\ Dev & 3,098 & 1 & \\ Test & 872 & 4 & ✓ \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics of our dataset. _Industry-wise_ (✓) indicates whether the data is separable by industry. these source data, the components of user queries, ad texts, and LPs (URLs) are allocated accordingly. Search ads comprise a _title_ and _description_, as shown in Figure 1. Description in search ads has a larger display area compared to titles. It is typically written in natural sentences but may also include advertising appeals. In contrast, titles in search ads often include unique wording specific to the advertisements. They may deliberately break or compress grammar to the extent acceptable to humans because their primary role is immediately capturing a user's attention. For instance, when promoting free consultation for a specific service, an ad-specific expression, such as _"Easy 30 seconds! Free consultations at \(xx\)"_ may be used. Studies in advertising psychology have reported that these seemingly ungrammatical expressions, unique to advertisements, not only do not hinder human comprehension but also capture their attention Wang et al. (2013). In this study, we extracted only titles as ad texts \(\mathbf{y}\) to create a benchmark focusing on ad-specific linguistic phenomena. In our dataset, we extracted meta description from the HTML-associated LPs, which served as a description document (_LP description_) \(\mathbf{x}\) for each product. Furthermore, in line with **Design Policy 1**, we processed a screenshot of the entire LP to obtain an LP image, allowing us to leverage multi-modal information. Through this process, we obtained images \(\mathbf{I}\), layout information \(\mathbf{C}\), and text \(\{x_{i}^{ocr}\}_{i=1}^{|\mathbf{R}|}\) for the rectangular region set \(\mathbf{R}\) using the OCR function of the Cloud Vision API.6 Footnote 6: [https://cloud.google.com/vision/docs/ocr](https://cloud.google.com/vision/docs/ocr) ### Annotation The source data is assigned a delivered gold reference ad text, but because of the variety of appeals in the ads, there is a wide range of valid references for the same product or service. Therefore, three additional gold reference ad texts were created for the test set by three expert annotators who are native Japanese speakers with expertise in ad annotation. The annotation guidelines used are presented in Appendix A with detailed information. During the data collection process for evaluation annotations, data were randomly selected based on keywords manually mapped to industry labels, such as _"designer jobs"_ mapped to the human resource industry, following **Design Policy 2**. Here, we used the following four industry domain labels: human resources (HR), e-commerce (EC), finance (Fin), and education (Edu). Table 2 provides the statistics of our dataset. The dataset was partitioned into training, development, and test sets to prevent data duplication between the training (development) and test sets, which was achieved through filtering processes. Table 3 presents examples from the test set of this dataset.7 Although the annotators were not provided with explicit instructions regarding advertising appeals, they created ad texts (#2-4) that featured diverse advertising appeals distinct from the original ad text (#1). This suggests that our test set captures a certain level of diversity in expressing advertisements. Footnote 7: Although not included due to space limitations, the actual dataset also includes LP images (screenshots), their OCR results, and industrial labels. ### Understanding of Ad Text Generation As discussed in SS3, the ATG task is closely related to summarization. There are two primary methodologies within summarization: _extractive_ and _abstractive_. The extractive approach constructs a summary by directly selecting and utilizing meaningful sentences from the input. In contrast, the abstractive approach generates novel sentences that capture the essence of the input more creatively. To gain more insight into the dynamics of ATG, we investigated the extent to which ad creators are making their ads extractive (or abstractive). This exploration would be also useful as a guideline for future model development. Figure 2 illustrates the percentage of _novel_ en \begin{table} \begin{tabular}{l l l} \hline \hline **LP description** & **User query** & **Ad text** \\ \hline CardLoanDiagnosis.com & & 1. [Official] _Top3 Popular_: Card Loans \\ Diagnosis of _(instant)_ loan cards! & & 2. [Easily] diagnose _recommended_: card loans \\ _(\$recommended)_ companies & & 3. Diagnose Cards Available for _(Same-Day)_: Borrowing! \\ to borrow money _(now)_! & & 4. Get Financing in _(as Fast as 30 Minutes)_: _(Online!_ \\ \hline \hline \end{tabular} \end{table} Table 3: Examples of our dataset, translated into English for visibility. The highlighted areas in each color indicate the aspects of advertising appeals: (_Speed_), (_Trend_), and (_User-friendliness_), based on Murakami et al. (2022b)’scheme. tities in the target ad texts not found in their respective source documents. Here, we focused on five distinct entity types as outlined in Table 4 to conduct a more comprehensive analysis.8 By incorporating additional input information such as the LP description and OCR-processed text of the LP full view, the percentage of novel entities in the target ad text was effectively reduced. The concern of having a high percentage of new entities in the ad text, which could make the task overly difficult and lead to a problematic setting, was also dispelled based on the analyses of this benchmark. Furthermore, the analysis based on entity type reveals a wide range of variations in _Time Expressions_ and _Numerical Expressions_. In the example of _Numerical Expressions_ as shown in Table 4, the source document \(x\) mentioned the price range as _6,800 yen - 8,000 yen_, while the target ad text \(y\) only included the lower limit of the range as _6,800 yen_. This rewording may be intended to make the price more appealing to users by presenting the lowest price, or to make it more straightforward to fit into a limited display area. Footnote 8: The procedure for calculating the ratio of novel entities is described in Appendix B. ## 5 Experiments We conducted evaluation experiments using various baseline models to assess the proposed benchmark's usefulness and discuss the task's current status and future challenges. As outlined in SS2.2, previous ATG studies have utilized non-public data like in-house datasets and exhibited varying input-output configurations. This variability has resulted in challenges related to reproducibility and equitable comparisons. Therefore, we attempted to benchmark ATG model performance by implementing the dominant approach in existing work (outlined in SS2.1), rather than replicating specific existing models. Specifically, our investigation primarily focused on two aspects: (1) exploring the impact of different pre-trained models on performance (SS5.1), and (2) examining the effectiveness of incorporating multi-modal features, such as images and layouts from the LPs, and their overall influence on the results (SS5.2). These experiments allowed us to gain insight into the current status and clarify potential avenues for improvement and further research. ### Exp 1: Selection of Pre-trained Models To investigate the impact of different pre-trained models on ATG, we constructed multiple baseline models using the encoder-decoder framework, currently the predominant model in ATG (SS2). Specifically, we built baseline models based on two types of commonly used encoder-decoder: BART Lewis et al. (2020) and T5 Raffel et al. (2022). These pre-trained models served as the foundation for our baseline models, which allowed us to compare their performance and understand the effects of the selected pre-trained model on ATG. We fine-tuned each pre-trained model on the training dataset to create our baseline models. Specifically, we used a pre-trained model \begin{table} \begin{tabular}{l l l} \hline \hline **Entity type** & **Input** & **Output** \\ \hline _Time Expressions_ & 2022\(\sharp\):\(\mathcal{G}\)) (_September 2022_) & 2022\(\sharp\): (_2022_) \\ _Katakana_ & \(\sharp\)\(\uparrow\)\(\downarrow\) (site) & \(\sharp\)\(\leftarrow\)\(\downarrow\)\(\times\)\(\leftarrow\)\(\downarrow\) (homepage) \\ _Numerical Expressions_ & 6,800\(\sharp\) - 8,000\(\sharp\) (_6,800 yen - 8,000 yen_) & 6,800\(\sharp\) (_6,800 yen_) \\ _Named Entity_ & \(\uparrow\)\(\not\)\(\times\) (_Ishida_) & \(\sharp\)\(\not\)\(\times\)\(\not\)\(\times\) (_Ishida Corporation_) \\ _Terms_ & \(\sharp\)\(\land\)\(\#\)\(\#\)\(\#\)\(\#\) (_Job Openings_) & \(\sharp\)\(\land\)\(\#\)\(\#\)\(\cap\) (_Job Introductions_) \\ \hline \hline \end{tabular} \end{table} Table 4: The novel entity types used in our analysis and their corresponding examples. Katakana is a Japanese syllabary. Figure 2: Percentages of novel entities included in our dataset when input information is increased. japanese_bart_base_2.0 from Kyoto University's Japanese version of BART 9 as the basis for our BART-based baseline model (referred to as BART). For the T5-based baseline model (referred to as T5), we used a pre-trained model sonoisa/t5-base-japanese 10. The specific hyperparameters and other experimental details are reflected in Appendix C. Footnote 9: [https://github.com/utanaka2008/fairseq/tree/japanese_bart_pretrained_model](https://github.com/utanaka2008/fairseq/tree/japanese_bart_pretrained_model) Footnote 10: [https://huggingface.co/sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) ### Exp 2: Utilization of Multi-modal Features To investigate the effectiveness of incorporating multi-modal features such as images and layout in the LPs and their impact on the overall performance, we built various settings for the T5-based model that considered LP image information, following previous studies on NLG from images (Tanaka et al., 2021; Murakami et al., 2022). Figure 3 presents an overview of incorporating the LP information into the T5-based model. 11. As an input, we used three sets of token sequences, the LP descriptions \(x^{des}\), user queries \(x^{qry}\), and each OCR token sequence \(x^{ocr}_{i}\) of the rectangular region set \(R=\{r_{i}\}_{i=1}^{|R|}\) obtained by OCR from the LPs, where each token sequence \(x^{*}\) is \(x^{*}=(x^{*}_{i})_{t=i}^{|R|}\). Furthermore, the layout \(C=c_{i=1}^{|R|}\) and image information \(I=I_{i=1}^{|R|}\) for the rectangular region set \(R\) was used. Here, \(c_{i}\) denotes \((x^{\text{min}}_{i},x^{\text{max}}_{i},y^{\text{min}}_{i},y^{\text{max}}_{i}) \in\mathbb{R}^{4}\) as shown in Figure 3. Footnote 11: Note that the model constructed for this experiment, shown in Figure 3, is not the proposed model, but a baseline model created according to Murakami et al. (2022) Next, we explicitly describe each embedding (Figure 3) as follows: Token embeddingEach token sequence \(x^{*}\) was transformed into an embedding sequence \(t^{*}\) before being fed into the encoder. Here, \(D\) denotes the embedding dimension. Segment embeddingThe encoder distinguishes the region of each token sequence \(x^{*}\). For example, for a token sequence \(x^{des}\), we introduced \(s^{des}\in\mathbb{R}^{D}\). Visual embeddingWe introduced an image \(I_{i}\) for each rectangular region \(r_{i}\) to incorporate visual information from the LP, such as text color and font. More specifically, the obtained image \(I_{i}\) was resized to 128 \(\times\) 32 (width \(\times\) height). The CNN-based feature extraction was employed to create visual features \(v_{i}\in\mathbb{R}^{D}\). Layout embeddingIn the LP, the position and size of the letters played crucial roles. We input the layout \(c_{i}\) of a rectangular region \(r_{i}\) into the MLP to obtain \(l_{i}\in\mathbb{R}^{D}\). Using the above embeddings, we generated the encoder inputs, as shown in Figure 3. This study investigated the contribution of each type of multi-modal information to the overall performance. We incorporated the following three types of multi-modal information into the model architecture in Figure 3: LP OCR text (lp_ocr;o), LP layout information (lp_layout;l), and LP BBox image features (lp_visual;v). Figure 3: An overview of the model incorporating LP information, following Murakami et al. (2022). ### Evaluation Metrics To evaluate the generated texts quality, we employed two widely used metrics in ATG [14]: BLEU-4 (B-4)12[11] and ROUGE-1 (R-1) [12]. These metrics assess the similarity between the generated text and reference text based on \(n\)-gram overlap. Furthermore, to evaluate the relevance of the LP and the ad text, we used the keyword insertion rate (Kwd) [13], which represents the percentage of cases where the specified keyword is included in the generated text.13 Footnote 12: [https://github.com/mjpost/sacrebleu](https://github.com/mjpost/sacrebleu) Footnote 13: Actually, Google Ads recommends to include at least one of advertisement keywords: [https://support.google.com/google-ads/answer/1704392?hl=en](https://support.google.com/google-ads/answer/1704392?hl=en) ### Result The experimental results are listed in Table 5.14 First, we examined the performance of BART and T5 to analyze the impact of the pre-trained model type. Overall, we observed a trend in which BART achieved higher scores for B-4, whereas T5 performed better for R-1 and Kwd. However, the industry-wise evaluation results show variation when considering different industries. For example, in the human resources domain, BART outperformed T5 in B-4 and R-1. Hence, it is not possible to determine a universally appropriate pre-trained model, and the selection should be based on a specific evaluation purpose. Footnote 14: For the evaluation, BLEU was assessed using all four references, whereas R-1 and Kwd were evaluated using one original reference each, as multi-reference evaluation is commonly done for BLEU. We focused on the utility of multi-modal information. Overall, we observed that incorporating additional features such as OCR-processed text (+ {o}) and the LP layout information (+ {o,l}) improved the quality of generated sentences in terms of B-4 and R-1 scores. However, when the LP image features were added (+ {o,l,v}), we observed a decline in the R-1 scores, specifically for EC and Fin domains. One possible explanation for this performance degradation is that some of the image information may have functioned as noise because the LP Full View was used as is in this experiment. Therefore, it is necessary to develop a model that dynamically selects only information important as an advertisement from LP images and effectively improves the generation quality. In summary, the evaluation experiments conducted with this benchmark have demonstrated significant performance gaps among ATG models in a variety of industry domains. In addition, the challenge of effectively leveraging multimodal information to improve performance emerged as a focus of future research efforts (in accordance with Design Policy 2). Given that these insights are derived from our dataset (in accordance with **Design Policy 1&2** in SS4.1), this experiment demonstrates the utility of the proposed dataset. ## 6 Looking into the Future ### Data In this study, we developed a benchmark dataset in Japanese. However, as advertising is relevant worldwide, it is crucial to construct benchmark datasets in other languages. Similar to this study, the most straightforward approach to constructing such datasets is for advertising-related companies to provide advertising data to the research community. However, companies often find it challenging to share such data because of their sensitive nature. Therefore, an important future direction is to explore methodologies for creating advertising data instead of relying solely on sampling real-world delivery data. The challenge in this case would be generating or sampling _user signals_, considering that the ad text can be created by annotators to a certain extent, as demonstrated in our study. Fortunately, when it comes to user signals (i.e., search queries) in search ads, attempts to generate them automatically are relatively feasible because any query is acceptable, even in practical scenarios. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Overall} & \multicolumn{2}{c}{HR} & \multicolumn{2}{c}{EC} & \multicolumn{2}{c}{Fin} & \multicolumn{2}{c}{Eda} \\ \cline{2-13} Model & B-4 & R-1 & Kwd & B-4 & R-1 & Kwd & B-4 & R-1 & Kwd & B-4 & R-1 & Kwd & B-4 & R-1 & Kwd \\ \hline BART & 14.6 & 22.8 & 76.0 & 21.7 & 25.3 & 70.9 & 12.5 & 20.1 & 81.5 & 12.3 & 29.2 & 80.4 & 9.0 & 17.1 & 73.3 \\ T5 & 13.6 & 25.6 & **90.0** & 18.9 & 23.9 & **84.8** & 12.9 & 28.6 & **93.6** & 12.2 & 33.5 & **94.7** & 6.4 & 18.5 & **88.4** \\ +{o} & 17.8 & **27.5** & 85.6 & 23.4 & **26.4** & 82.3 & 15.9 & **29.6** & 87.3 & 18.6 & **33.9** & 88.5 & 10.2 & **21.8** & 85.3 \\ +{o,l} & 18.4 & 25.7 & 84.4 & **24.0** & 25.9 & 81.4 & **18.3** & 26.5 & 87.3 & 17.5 & 31.9 & 86.1 & **10.8** & 19.9 & 83.7 \\ +{o,l,v} & 16.3 & 26.0 & 84.5 & 19.0 & 25.2 & 82.7 & 17.9 & 27.5 & 86.7 & **19.2** & 33.0 & 91.4 & 8.0 & 19.7 & 78.9 \\ \hline \hline \end{tabular} \end{table} Table 5: Results: a **bold** value indicates the best result in each column. ### Evaluation Task-agnostic metrics such as BLEU and ROUGE have traditionally been used as automatic evaluation metrics for ATG models Murakami et al. (2023), including those used in our study. However, these conventional metrics have limited correlation with human judgments in various NLG tasks, such as machine translation Mathur et al. (2020) and summarization Deutsch et al. (2021). Considering the uncertainty of the reliability of these metrics in the context of ATG, it is crucial to investigate their validity thoroughly. Previously, conducting such meta-evaluations for these metrics was challenging because of the requirements for multiple and diverse system outputs on a shared dataset. However, with the establishment of task specifications and benchmarks in this study, conducting a meta-evaluation in the future would be more feasible. In addition, the quality of an advertisement is determined by a combination of various perspectives (e.g., fluency, diversity, faithfulness, relevance), according to the reports Murakami et al. (2023). Hence, in the future, it will be essential to incorporate multi-dimensional evaluations Zhong et al. (2022) to ensure more transparent and interpretable evaluations. ## 7 Conclusion In this study, we redefined ATG as a cross-application task and developed the first benchmark dataset. Additionally, through evaluation experiments using this benchmark, we demonstrated that the performance of ATG models varied significantly across different industries. Effectively leveraging multi-modal information poses a challenge for future research. ATG is a promising application of NLP and a critical and complex research area for advancing user-centric language technology. We anticipate that the research infrastructure established in this study will drive the progress and development of ATG technology. ## Limitations As noted in SS6.1, one of the limitations of this study is that the dataset is only available in Japanese. In particular, the community should also enjoy benchmark datasets in English that are more accessible to researchers and developers around the world. We hope that advertising-related companies who share our vision of building on common datasets to build on the technologies in the field of ATG will follow this research and provide public datasets to the community in the future.
2309.12756
Towards an MLOps Architecture for XAI in Industrial Applications
Machine learning (ML) has become a popular tool in the industrial sector as it helps to improve operations, increase efficiency, and reduce costs. However, deploying and managing ML models in production environments can be complex. This is where Machine Learning Operations (MLOps) comes in. MLOps aims to streamline this deployment and management process. One of the remaining MLOps challenges is the need for explanations. These explanations are essential for understanding how ML models reason, which is key to trust and acceptance. Better identification of errors and improved model accuracy are only two resulting advantages. An often neglected fact is that deployed models are bypassed in practice when accuracy and especially explainability do not meet user expectations. We developed a novel MLOps software architecture to address the challenge of integrating explanations and feedback capabilities into the ML development and deployment processes. In the project EXPLAIN, our architecture is implemented in a series of industrial use cases. The proposed MLOps software architecture has several advantages. It provides an efficient way to manage ML models in production environments. Further, it allows for integrating explanations into the development and deployment processes.
Leonhard Faubel, Thomas Woudsma, Leila Methnani, Amir Ghorbani Ghezeljhemeidan, Fabian Buelow, Klaus Schmid, Willem D. van Driel, Benjamin Kloepper, Andreas Theodorou, Mohsen Nosratinia, Magnus Bång
2023-09-22T09:56:25Z
http://arxiv.org/abs/2309.12756v2
# Towards an MLOps Architecture for XAI in Industrial Applications ###### Abstract Machine learning (ML) has become a popular tool in the industrial sector as it helps to improve operations, increase efficiency, and reduce costs. However, deploying and managing ML models in production environments can be complex. This is where Machine Learning Operations (MLOps) comes in. MLOps aims to streamline this deployment and management process. One of the remaining MLOps challenges is the need for explanations. These explanations are essential for understanding how ML models reason, which is key to trust and acceptance. Better identification of errors and improved model accuracy are only two resulting advantages. An often neglected fact is that deployed models are bypassed in practice when accuracy and especially explainability do not meet user expectations. We developed a novel MLOps software architecture to address the challenge of integrating explanations and feedback capabilities into the ML development and deployment processes. In the project _EXPLAIN_, our architecture is implemented in a series of industrial use cases. The proposed MLOps software architecture has several advantages. It provides an efficient way to manage ML models in production environments. Further, it allows for integrating explanations into the development and deployment processes. Keywords:MLOps XAI Software Architecture ## 1 Introduction The application of ML in the industrial sector promises significant improvements, such as increased effectiveness, energy efficiency, and yield. However, despite many pilot applications, the practitioners among the authors observe that only a few ML projects have moved into actual and continuous production use. One of the barriers to the successful and sustainable use of ML is the difficulty in communicating the inferences, predictions, and decisions these algorithms make to domain experts who may not have a technical background in ML. Such communication cannot be limited to the output of the ML model but must also include insights into how and why the model produced that output. This _explanation_ is necessary to create trust and enable domain experts to exercise oversight over both the ML development process and the ML models in use. The ITEA project EXPLAIN [17] aims to develop an end-to-end ML life cycle and an MLOps software architecture that inherently provides explainability and interactivity for industrial domain experts. This means that individuals with little to no technical background in ML can participate and contribute during the entire process, during activities like data preparation, modeling, model deployment, and inference. The process becomes accessible and transparent so that everyone involved can understand how and why a model generates its output and even interact with it to contribute to human domain knowledge. To date, there has been limited use and discussion of MLOps for XAI in industrial applications. If at all, it has been used only for specific elements or individual steps in the ML life cycle. As we learned from an internal study [41], major cloud providers now offer such integrated MLOps solutions, but only some currently include specific XAI-based functionalities and the specific needs of industrial applications are not considered. In this paper, we propose an MLOps architecture with the above-mentioned capabilities. This architecture is based on our companies' experiences and project requirements. We believe this architecture will help bridge the gap between technical and non-technical experts and pave the way for more transparent and accessible ML processes. At first, the problem is described in more detail using the project life cycle in Chapter 2. Then, the related work in MLOps, MLOps workflow, XAI, and interactive ML is described in Chapter 3. Chapter 4 summarizes our architecture's MLOps and XAI requirements. Based on these requirements, Chapter 5 briefly describes the novel software architecture. The Implementation is discussed in Chapter 6. Finally, Section 7 concludes. ## 2 Explain Life Cycle We aim to enhance the traditional ML life cycle by adding steps that empower stakeholders and elevate their influence, participation, and ownership. This vision is driven by a desire to create a more practice-integrated ML approach involving practitioners in the process and outcome. By engaging stakeholders at every stage, EXPLAIN seeks to create a more transparent and accountable ML process that delivers better results and benefits for all. The extended life cycle is shown in Figure 1. As part of the ML life cycle and MLOps, addressing the detection and update of ML models whose inference quality is deterior reason for this quality degradation is concept drift. Concept drift can occur when the modeled relation evolves in such a way that the data model (a concept a model has learned during training) deteriorates with progressing time. For example, the gradual degradation of physical process equipment, such as fouling in a heat exchanger or wear of a pump, can cause concept drift for models used in a production environment. Closely related to concept drift is data drift, which is one of the main reasons for deteriorating model quality. Data drift can be described as a change in the distribution of model input data during production with respect to the distribution of input data during model training. Predictions can worsen with progressing data drift if the model does not account for it. An example of data drift is a change of input material quality in a process, e.g., due to a change of material supplier or sensor drift influencing a model feature. Various techniques detect data drift and potential concept drift, e.g., [6, 46, 50, 49]. These techniques usually rely on statistical tests or additional data models. Some techniques provide a certain degree of explainability to the drift detection process [1] and others rely on explainability methods to detect drift [12]. Explanatory training of models based on feedback from domain experts on the explanations generated by the model itself can complement current approaches [29]. The explanations can also be used as plausibility checks for the models and their underlying data sets, helping to avoid technical debt in the ML lifecycle and in the ongoing development and deployment of ML/AI applications [36, 5, 47]. Appropriate stakeholders are described in Section 2.1 while the life-cycle steps are described in Section 2.2. ### Stakeholders For explanations to be useful, the receiver of any particular explanation should be carefully and purposefully considered [5]. For instance, a direct user of an Figure 1: Adopted explain life cycle from [14]. AI system may ask a "why not" question, seeking clarification for why their expectations were not met, while a system engineer may ask a "how" question, intending to understand where to debug and improve the overall system performance. Therefore, it is crucial first to identify _who_ our various stakeholders are and further pinpoint their corresponding requirements for explainability at any given time in the life cycle [26]. In the project EXPLAIN, we involve industry stakeholders throughout the design and development of our architecture to meet their needs. In addition to the ML engineers and data scientists who work on training, deploying, and maintaining the ML models [18], we identify two main stakeholder groups: end users and domain experts. Our ML life cycle, as illustrated in Figure 1, heavily relies on these stakeholders. End users like machinery operators, site supervisors, and machine maintenance personnel are considered, as well as domain experts such as process engineers and reliability experts. The specific stakeholders involved in each stage of the ML life cycle depend on the use case domain, which in our case, includes industries like _manufacturing_, _electronics_, _mining_, and _pulp and paper_. Consider the pulp and paper domain, where manufacturing involves continuous maintenance and quality control of large machinery, which can be optimized using smart sensors. These sensors are installed on rotational parts of large manufacturing machines to detect and collect vibration data for further predictive analytics. Vibration analysts are one of the stakeholders requiring explainability--these are the aforementioned domain experts who support predictive model development. Stakeholders with a deep understanding of the critical variables may want to examine a specific prediction and inquire about the factors that led to that outcome. The XAI technique to look towards an appropriate answer may be _feature-importance_, where the most influential features over that particular prediction are offered in order of importance. If a feature contributes too heavily--or not at all--towards the prediction, then the domain expert can identify the issue and work with the ML expert to address it during model improvement. After ML model deployment, the pulp mill operators interact with the model; they receive recommended actions, which they may accept or reject. These operators are the end users, who also play a central role in our overall XAI process. As operators are individuals in specific roles, they make decisions on a day-to-day basis and their experience in that role heavily influences their choices, regardless of any AI-driven tools available to them. It can be challenging to capture and represent this experience in ML-driven decision-making, which is why explainability is of interest to many. In particular, users may want to understand why a specific action was recommended over what they would have preferred. In these situations, counterfactual explanations may be helpful, as they can answer why a certain outcome was predicted instead of the one the user expected [45]. By presenting a minimal change in the input that would result in an alternative target outcome, counterfactual explanations can help users understand the reasoning behind the AI-powered recommendation. ### Life-Cycle Steps The **blue circle** in Figure 1 describes the initial development of an ML model for industrial applications. The industrial process is also the primary data source for historical data for ML training and new data for ML predictions, which are either processed directly by the AI systems or by the end user based on the recommendations of an AI system. In an industrial context, this initial process cannot be carried out by ML experts alone, but it is essential to involve professionals with a deep understanding of the industrial process. Ideally, these stakeholders will provide input on today's model requirements and support data collection and processing [14]. The orange boxes indicate process steps that allow for better involvement of industry experts and end-users. _Explanatory modeling_ combines interactive ML and explainable AI (XAI). Similar to active learning, [37] ML models are incrementally trained as domain or ML experts improve the training data. Such improvements can take the form of sample labeling, training set curation (removing harmful samples or up-sampling beneficial samples), or data sample cleaning. As part of the process, the experts receive model outputs together with explanations of the model and provide feedback - which can be used to improve the training data set [44] or to fine-tune the loss function [35]. In the _explanation-based review_ phase, ML models are validated by providing domain experts with insight into the internal reasoning of the trained model. This ensures that the models learn only relevant concepts from the data provided and that only robust and reliable ML models are released for use in production. After deployment, the ML applications enter the production phase, indicated with the **green circle**, acting on live production data. In this phase, _output explanations_ provide the end user with insights into why and how the model produced a particular output. This enables the end user to monitor the ML model and analyze the output. The explanation can also help to understand problems in production processing and to derive the right corrective actions more quickly by pointing the end user to relevant data points. End users can provide feedback or trigger _incremental explanatory training_. ## 3 Related Work Numerous publications cover MLOps, Explainable AI (XAI), and the MLOps workflow. However, no publications specifically focus on MLOps software architecture supporting explanations. Section 3.1 defines MLOps and MLOps architecture in the industrial context, and Section 3.2 summarizes essential sources on the MLOps workflow. Section 3.3 deals with XAI, while Section 3.4 deals with interactive ML. ### MLOps The concept of DevOps [38], which pertains to the development and operation of expansive software systems, has gained significant popularity intending to accelerate the deployment and ensure reliable releases. MLOps is an evolving discipline focused on efficiently deploying and managing ML models in production environments. It combines the principles of DevOps with the specific challenges and requirements of ML systems, allowing organizations to operationalize their ML models at scale [48]. MLOps streamlines the end-to-end lifecycle of ML models by addressing aspects such as reliability, scalability, sustainability, and performance [5]. The core components of MLOps encompass a variety of techniques, tools, and best practices that optimize the entire ML model lifecycle [42]. The main principles of MLOps can be organized into four categories [19]: 1) Data Engineering (Data Collection, Data Analysis, Data Preparation), 2) Model Engineering (Model Building, Model Training, Model Evaluation, Model Selection, Model Packaging), 3) Operations (CI/CD-testing, Model Deployment, Monitoring), 4) Supporting Activities (Infrastructure, Versioning, Automation, Tools). MLOps inherited automation as a fundamental principle from DevOps. Employing continuous integration and continuous delivery (CI/CD) pipelines automates the various software development and deployment stages. This automation ensures that changes to code or data automatically trigger the deployment, leading to faster iterations and reliable releases. Furthermore, continuous training (CT) - an additional practice in MLOps - enables automatic model retraining, allowing models to remain up-to-date and adaptable to real-time data changes [42]. By embracing MLOps, organizations can effectively tackle challenges related to version control, reproducibility, model drift, data drift, and model performance degradation. Further, they can develop an end-to-end MLOps infrastructure considering the need for seamless explanation methods and leveraging explanations for the model tests, monitoring, improvement, and auditing. ### MLOps Workflow Amershi, Saleema et al.'s paper [3] discusses several crucial steps in the software engineering workflow for ML. 1. First, the appropriate features are selected for a product in the requirements section. 2. Then, the search for existing data sets and the acquisition of new data occurs, with incorrect data being cleaned from the data sets. For many ML methods, additional labeling is necessary. 3. In the feature engineering stage, all activities for extracting and selecting features are carried out, and models are chosen and trained. If the features are not good enough, a new look is taken at them. 4. In the model evaluation stage, metrics test the model with additional data sets. 5. Finally, after the deployment of the model on the target platform, it is monitored. MLOps, as described by Symeonidis et al. [42], goes beyond this process by incorporating additional testing and continuous integration/continuous delivery (CI/CD) to ensure that ML is brought into operation smoothly and efficiently. Monitoring, sustainability, robustness, fairness, and explainability are core competencies for building mature, automated, and efficient MLOps systems. In their paper, Symeonidis et al. provide an overview of MLOps, defining the operation and the components of such systems while highlighting the current problems and trends. They also present different tools and their usefulness in providing the corresponding guidelines. Furthermore, they propose a connection between MLOps and AutoML (Automated Machine Learning), suggesting how this combination could work. Personal data and the GDPR play a role in a few industrial applications. As per GDPR [15], providing individuals with meaningful explanations is crucial when automated decisions are made [43]. To achieve this, MLOps and AI software sustainability are essential. However, the more these platforms are integrated into day-to-day software operations, the more the risk of AI software sustainability becoming unsustainable from a social, technical, or organizational perspective. The challenges of operationalizing ML models in the manufacturing domain are significant, given the probabilistic nature of ML algorithms, reliance on large data sets, and the need for constant retraining [32]. Raffin et al. [32] have proposed a domain model which divides the landscape into five contexts reflecting the differences between edge systems, monitoring and dashboarding on the cloud instance, and the features of the MLOps domain. In their work, Raffin et al. [32] refer to the white paper from Salama et al. [34], which explains the overall MLOps process and the workflow needed per step in the MLOps process. The end-to-end workflow proposed in our paper has similar components to those presented in Raffin et al.'s work. However, it also shows the relationship between the components and intermediate artifacts, such as data sets, models, and serving packages. These critical process components need to incorporate explainable AI. Moreover, in industrial applications, some unique challenges need to be addressed. These involve particular challenges relevant to the process industry [20] and Industry 4.0 [19] context. ### Explainable AI XAI describes that AI, especially ML solutions, should be understandable and explainable to stakeholders such as modelers and end users. At present, it describes a collection of different techniques and methods that attempt to achieve this goal [14]. This is a response to the "black box" phenomena in ML, where even the designers cannot explain the reasoning behind a specific inference [5]. XAI promises to help users perform more effectively by refining their understanding of AI-powered systems and dispelling misconceptions. XAI may also allow for the social right to explanation [15], although it is relevant even with no regulatory requirement. By improving the user experience of a product or service, XAI can help end users trust that the AI system is making good decisions. During the modeling process, stakeholders and users can assess the quality of the model and make necessary improvements to both the model and data, ultimately leading to better performance. Further, during model deployment, the explanation enables human oversight by helping the user judge whether a prediction is reasonable. XAI aims to make AI more transparent and understandable to humans, explain an AI inference in the past, present, and future, and reveal information based on actions. These characteristics make it possible to confirm existing knowledge, challenge it, and generate new assumptions [14]. Brennen [9] delves into the various interpretations of "Explainable AI". During his research, respondents had differing views on what they wanted to understand about AI and what they already understood. The need for explainability can arise from debugging, bias identification, and building trust in new technology. While decision trees are transparent by design, more opaque models such as deep neural networks (DNNs) require more complex monitoring and explanation [43]. Borg et al. [8] suggest using heatmaps for visual explanations in computer vision models, and Dhanorkar et al. [11] explore xai design space, including model selection and tracking adversarial model behaviors. Galhotra and Pradhan [31] categorize XAI methods as intrinsic or post hoc, while Cheng et al. [10] discuss white-box vs. black-box and interactive vs. static explanations. In industrial IoT systems, time-series data is prevalent [21], and LIME [33] and SHAP [27] are used to explain univariate time-series classification algorithms [30]. Further, transparency is important to ensure calibration of a user's mental model to a system's performance [13, 39] and overall ensure adequate human control [28, 39]. Hence, transparency is crucial to XAI [5] since it ensures human control over the system's performance and good engineering practices [25, 16]. ### Interactive Machine Learning Explainable Artificial Intelligence (XAI) aims to make ML results more interpretable, while Interactive Machine Learning (IML) involves integrating humans into the insight discovery process. Addressing the common obstacle of insufficiently labeled data in developing classification models for process monitoring and optimization in chemical batch production, particularly focusing on multivariate signal data, [2] propose an active learning web-application that assists human experts in labeling batch recipe steps using process data. To tackle the crucial task of dataset labeling in supervised and semi-supervised machine learning, [23] combines model-based active learning with user-based interactive labelling, to employ visual cues to guide users in selecting and labeling instances, leading to positive effects on user confidence, difficulty, orientation, and perception of model performance. [44, 35] highlights the increasing importance of IML and proposes Explanatory Interactive Learning (XIL) as a way to bridge the gap between XAI and IML. XIL combines algorithmic explanations with user interaction during iterative training loops, enabling users to adjust labels during the training process based on explanations and provided feedback which leads to enhancing the connection between XAI and IML. Assaf and Schumann [4] investigated CNN models for forecasting and explanations. These models were used to provide visual explanations, as [7] introduced visual interactive labeling (VIAL), which combines active learning and interactive visualizations to leverage their respective strengths. In domains such as manufacturing, where high-dimensional data with limited labels and spurious correlations are prevalent, manual feature engineering can be expensive. To overcome this challenge, [22] propose a method called interactive visual feature engineering. By utilizing dimensionality reduction techniques and interactive visualizations. Application of XIL enhances the predictive capabilities and interpretability of models, empowering human experts in the process. The learning algorithm queries the user, predicts labels, and provides explanations, enabling iterative feedback and improvement of the model. ## 4 Requirements for an Explainable MLOps Architecture We collected requirements for a novel MLOps software architecture that can support the EXPLAIN ML life cycle, reflecting the needs of various industrial domains like mining, paper pulp and metals production, power generation, and electronics manufacturing. These requirements are collected in multiple ways. For instance, interviews were conducted as part of a case study. Furthermore, in a brainstorming session with industrial experts, software engineers, and XAI researchers, we identified requirements regarding data collection and management, models, explainers, training, deployment and serving, monitoring and feedback, general architecture, infrastructure, and performance. The results of this case study are published in [18]. This study aimed to investigate the extent to which MLOps is implemented by four project partners and describes their ML use cases, MLOps software architecture, tools, and requirements from their unique perspectives. Our interviews revealed that each industry partner uses MLOps differently, depending on their use case. There were variations in tools and architectural patterns used across the board. Overall, our findings were heavily focused on the architecture decisions involved in the MLOps tool landscape that the interviewed companies utilized. Furthermore, a brainstorming workshop was held with the partners to gather more details about specific requirements for different domains and components of the architecture. As mentioned, the EXPLAIN project covers a wide variety of use cases that can result in divergent requirements. Nonetheless, the goal is to identify overlapping requirements to find generic architecture components that can be reused for XAI applications in general. The requirements from this session have been split into MLOps requirements, e.g., for the infrastructure and storage, and XAI requirements, containing requirements, for instance, for the visualization of explanations and the connection between existing MLOps components and explainers. The requirements are also linked to both stakeholder groups, which are the domain experts during training, and end users during production. The requirements substructures are shown below: **MLOps**: Infrastructure, Data & Storage, Data Traceability, Models, Model Traceability, Model Deployment & Serving, Feedback, Monitoring, Other Non-functional * [XAI] Explainer Support, Explainer Traceability, Explanation-based Review, Explainer Feedback, Explainer Monitoring The MLOps requirements are listed in Section 4.1 and the XAI requirements in Section 4.2. In a requirement, _shall_ means that the application should fulfill this requirement and _must_ means that it must necessarily be fulfilled. As in the MLOps life cycle, for the stakeholder groups, a distinction is made between development and production requirements (D and P in Table 1). The development steps are: * [noitemsep,topsep=0pt] * **D1:** Requirement Identification * **D2:** Data Collection * **D3:** Explanatory Modeling * **D4:** Statistical Validation * **D5:** Explanation-based Review. The production steps are: * [noitemsep,topsep=0pt] * **P1:** Deploy * **P2:** Load Live Data * **P3:** Prepare Live Data * **P4:** Output Explanations * **P5:** Record model output, user and system response * **P6:** Incremental Explanatory Training * **P7:** Update Improved Model ### MLOps Requirements Implementing MLOps requires careful consideration of divergent requirements, particularly those related to architecture. In this section, we will delve into the crucial MLOps requirements that are taken into account for our architecture and should form the basis of industrial ML applications. They are split into different categories and mapped to the life cycle phases (from Figure 1) in Table 1. The next section extends these MLOps requirements with additional XAI requirements. #### 4.1.1 Infrastructure (IN) In general, all components in the life cycle must adhere to the infrastructure requirements to guarantee that the platform is scalable [MLOPS-IN-{05,07}], modular [MLOPS-IN-{01-04}], and maintainable [MLOPS-IN-{06}]. [MLOPS-IN-01] The system must run on a cloud environment. [MLOPS-IN-02] The system must run on an on-premises, self-hosted environment. [MLOPS-IN-03] The system must support Windows and Linux applications. [MLOPS-IN-04] The system must be composable to serve different UCs. [MLOPS-IN-05] The system must support individual horizontal scaling for different components in the architecture. [MLOPS-IN-06] The system should be described in a modeling language. [MLOPS-IN-06.1] The system should be deployable with this modeling language. [MLOPS-IN-06.2] The description must be version controlled. [MLOPS-IN-07] The system should support hardware acceleration for model training and inferences. #### 4.1.1 Data & Storage (DS) The system needs to support various use cases that deal with different data types [MLOPS-DS-{01}], storage [MLOPS-DS-{02,05}], and interfaces [MLOPS-DS-{03-04}]. [MLOPS-DS-01] The system must flexibly support different data formats, such as image, time series, and text data, e.g. _CSV_, _TXT_, and _JSON_. [MLOPS-DS-02] The system must handle data storage sizes of Terabytes or more. [MLOPS-DS-03] The system must be able to support data ingress via REST APIs and OPC UA. [MLOPS-DS-04] The system must use open data standards and interfaces. [MLOPS-DS-05] The system should support different backends such as SQL, NoSQL, InfluxDB, S3-compatible storage, and GCP buckets. #### 4.1.2 Data Traceability (DT) The traceability of data is a critical functionality of any MLOPs system which deals lineage of data and metadata, including labels [MLOPS-DT{01}], the definition of data sets [MLOPS-DT-02], keeping track of applied preprocessing and generation [MLOPS-DT-{03}], and ingress of annotations [MLOPSDT-{04}]. [MLOPS-DT-01] The system must track the origin of the data. [MLOPS-DT-01.1] This must track the equipment/hardware used, such as the sensors, including their location. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline **Major phases** & \multicolumn{4}{|c|}{**Development**} & \multicolumn{4}{|c|}{**Production**} \\ \hline _Minor phases_ & D1 & D2 & D3 & D4 & D5 & P1 & P2 & P3 & P4 & P5 & P6 & P7 \\ \hline Infrastructure & X & X & X & X & X & X & X & X & X & X & X & X & X \\ \hline Data \& Storage & & X & & & & & X & X & & & & & \\ \hline Data Traceability & & X & X & & & & & X & X & & X & X & \\ \hline Models & & & & X & & & & & & & & & & \\ \hline Model Traceability & & & & X & & & & & & & & & & \\ \hline Model Deployment \& Serving & & & & & X & X & & & & X & X & X \\ \hline Feedback & & & X & X & & X & & & & & X & X & \\ \hline Monitoring & & & & & & & & & & & & & & & \\ \hline Other Non-functional & X & X & X & X & X & X & X & X & X & X & X & X & X \\ \hline \end{tabular} \end{table} Table 1: Mapping of the **MLOps** requirements to the different phases in the life cycle in Figure 1. The minor phases D1-D5 map to the five phases of the development cycle, and the minor phases P1-P7 map to the seven phases of the production cycle. [MLOPS-DT-01.2] This must track when the data was captured with global timestamps (date and time with time zone). [MLOPS-DT-01.3] This should track specific configurations, settings, and parameters of the sensor, e.g., the settings of a camera or the sampling frequency of a vibration sensor. [MLOPS-DT-02] The system must track the version of the data and data set, which are an immutable representation of the data at a certain point in time. [MLOPS-DT-03] The system must track the preprocessing steps applied to the raw data before model training or inference. [MLOPS-DT-03.1] The system should support saving transformed and artificially generated data that has been generated during training. [MLOPS-DT-04] The system must have an ingress for annotations and labels on the data. [MLOPS-DT-04.1] The annotations must be linked to the samples. #### 4.1.1 Models (MO) Like with the data and storage requirements, due to the required support for different use cases, support different models and evaluations [MLOPS-MD[01-02]] and interfaces [MLOPS-MD[03-04]] is needed. [MLOPS-MD-01] The system must support different ML frameworks such as TensorFlow, PyTorch, and scikit-learn models. [MLOPS-MD-02] The system must support the following metrics: MAE, MSE, RMSE, F1 and \(R^{2}\) score, Recall, Precision, and Specificity, Quantile Loss, and Variance Ratio Criterion. [MLOps-MD-03] The system should support evaluation functions and aggregated metrics flexibly. [MLOPS-MD-04] The system must support open model standards and interfaces. #### 4.1.2 Model Traceability (MT) As with data traceability, model traceability is key for maintainable ML applications, with a link to the input of the model training [MLOPS-MT-{01,04}], the model architecture [MLOPS-MT-{02}], the software used for training [MLOPS-MT-03], and the final performance metrics [MLOPS-MT-04]. [MLOPS-MT-01] The system must track which data sets have been used for model training. [MLOPS-MT-01.1] The system must track the training, validation, and test splits. [MLOPS-MT-02] The system must track the version of the model architecture used. [MLOPS-MT-02.1] The system must track the initial state of the models before training. [MLOPS-MT-03] The system must track the software version used for training, including libraries, compilers, and interpreters, where applicable. [MLOPS-MT-04] The system must track the performance metrics when the model is evaluated on a test data set. [MLOPS-MT-05] The system must track the relationship between [MLOPS-MO-{01-03}]. #### 4.2.2 Model Deployment & Serving (MD) The next step is to deploy the models and serve incoming requests, where it is important that the models are versioned [MLOPS-MD-01], there are interfaces to retrain models and deploy models [MLOPS-MD-{02-05}], and that all inference results are stored [MLOPS-MD-{06}]. [MLOPS-MD-01] The system must version the model artifacts after training (model registration). [MLOPS-MD-02] The system should support automatically retraining models on triggers like new data sets or new annotations/labels. [MLOPS-MD-03] The system should have an interface to start and define new model training runs manually. [MLOPS-MD-04] The system should support different deployment schemes like shadow, canary, and A/B. [MLOPS-MD-05] The system should have a GUI to deploy versioned model artifacts manually. [MLOPS-MD-05.1] The GUI should show the deployed models, explainers, and deployment schemes. [MLOPS-MD-06] The system must support storing all inference results. [MLOPS-MD-06.1] These predictions should be linked to the data the models used to generate them. #### 4.2.3 Feedback (FB) It is highly unlikely that the data on which the ML models make predictions are always of high quality or will not drift over time. Thus, it is important to have a proper data quality feedback interface [MLOPS-FB-{01}] and ML prediction feedback and annotation interfaces [MLOPS-FB-{02-03}]. [MLOPS-FB-01] The system must have an interface to view and provide feedback on the data quality. [MLOPS-FB-01.1] This interface should allow marking data samples as 'bad', excluding them from being used by other components. [MLOPS-FB-01.2] This interface must have the functionality to compare new data samples to similar data samples. [MLOPS-FB-01.3] This interface should have the functionality to drill down on a sample and view its details and other related samples. [MLOPS-FB-02] The system must have a graphical user interface (GUI) to provide feedback on the predictions made by the models during training and in production. [MLOPS-FB-02.1] This interface should have a way to guide users to the hard samples first, e.g., a sample that has an uncertain prediction or an incorrect prediction. [MLOPS-FB-02.2] The system should have an interface to compare different training runs. [MLOPS-FB-03] The system should have a GUI for creating annotations and labels on the data. [MLOPS-FB-03.1] This interface should support different data types, such as image and time series data. [MLOPS-FB-03.2] This interface should support the different ML application types, such as classification and regression. #### 4.1.1 Monitoring (MT) Without monitoring the ML models' performance, it is unclear when feedback is required and when the performance of the data or models degrades. Requirements for ML model performance monitoring [MLOPS-MT-{01,02}], data quality monitoring [MLOPS-MT-{03}], and system performance monitoring [MLOPS-MT-03.3] are defined. [MLOPS-MT-01] The system must have an interface that can show the actual performance metrics of the models in production. [MLOPS-MT-02] The system should have an alerting system to indicate that feedback or retraining is required. [MLOPS-MT-03] The system should have a component that measures the data quality, such as the drift over time or the changes in data with respect to a baseline. [MLOPS-MT-03.1] The drift monitoring should have limits on these metrics to detect deterioration and send alerts. [MLOPS-MT-03.2] This monitoring should work without providing any manual feedback on the predictions. [MLOPS-MT-03.3] The system should monitor the latency and throughput of the components. #### 4.1.2 Other Non-functional (NF) For some use cases, an additional requirement is that the system comply with critical infrastructure regulations. Furthermore, open-source software is preferred to prevent vendor lock-in and create an architecture that is not dependent on closed-source, as long as it is maintained properly. [MLOPS-NF-01] The system should comply if the applications need Safety and compliance regulations, such as the BSI KRITIS regulations [24]. [MLOPS-NF-02] The system should use open-source packages that are maintained regularly and supported by a large community. ### XAI Requirements In this section, we will explore the key requirements for XAI. These requirements extend the existing MLOps practices and tools with additional explainer components. This means that most MLOps requirements must be implemented before any XAI requirements can be met. Table 2 maps the XAI requirements to the life cycle phases of Figure 1. #### 4.2.1 Explainer Support (ES) These initial XAI requirements define the support for different explainer types and explanations [XAI-ES-{01,02}] and optional data explainers [XAI-ES-03]. [XAI-ES-01] The system must support post-hoc explanations including feature attribution methods [41]. [XAI-ES-02] The system must support interpretable explanations, where the models either are interpretable themselves or provide explanations for their predictions [41]. [XAI-ES-03] The system may support data explanation methods, which provide insight into the underlying data structures without considering the predictions from the application models [41]. #### 4.2.1 Explainer Traceability (ET) Like traceability for the data and models, explainers also require traceability [XAI-ET-01] to be able to reproduce results and improve them over time. They should be linked to the models and data as well [XAI-ET-{02-03}]. [XAI-ET-01] The system must track explainers used with each model. [XAI-ET-02] The system must track generated explanations for different data and model combinations for use of feedback and review later. [XAI-ET-03] The system should track which data is used to generate the explanations. [XAI-ET-03.1] This should also track the domain knowledge used to generate explanations. #### 4.2.2 Explanation-based Review (ER) One of the major contributions of XAI to the life cycle is the explanation-based review phase, where explainers are integrated into existing review systems [XAI-ER-01] and allow incremental improvements of the model during development [XAI-ER-02]. [XAI-ER-01] The system must integrate existing feedback interfaces to visualize explanations on the data used for training, validation, testing, and new data from production. [XAI-ER-02] The system should support explanation-based reviewing, including eXplanatory Interactive (Machine) Learning (XIL) [44]. #### 4.2.3 Explainer Feedback (EF) Explainers are very useful in providing insights into the data and predictions from the models but also require feedback themselves [XAI-EF-{01,02}] and visualization for different data and model combinations [XAI-EF-{03-05}]. [XAI-EF-01] This system must have a GUI to provide feedback to the explainers. \begin{table} \begin{tabular}{|l|r|r|r|r|r|r|r|r|} \hline **Major phases** & \multicolumn{3}{c|}{**Development**} & \multicolumn{3}{c|}{**Production**} \\ \hline _Minor phases_ & D1 & D2 & D3 & D4 & D5 & P1 & P2 & P3 & P4 & P5 & P6 & P7 \\ \hline Explainer Support & & X & X & & & X & & X & X \\ \hline Explainer Traceability & & & X & X & & & X & X & \\ \hline Explanation-based Review & & X & & X & & & & & \\ \hline Explainer Feedback & & & & X & & & & X & \\ \hline Explainer Monitoring & & & & X & & & & X & \\ \hline \end{tabular} \end{table} Table 2: Mapping of the **XAI** requirements to the different phases in the life cycle in Figure 1. The minor phases refer to the life cycle in the same way as in Table 1. [XAI-EF-01.1] This system should be integrated with existing feedback interfaces. [XAI-EF-03] The system must have a GUI to compare multiple explainers and models with the same data. [XAI-EF-04] The system must have a GUI to compare different data with the same model and explainer. [XAI-EF-04.1] This interface should be composable to add new explanations or visualizations for different applications. #### 4.0.1 Explainer Monitoring (EM) Finally, like with ML models, the performance of explainers should also be monitored to verify that they are still providing the right explanations for the right reasons. [XAI-EM-01] The system should have a component that tracks the performance of the explainers. [XAI-EM-01.1] This should include explanation completeness, e.g., how much features contribute to the final model output. [XAI-EM-01.2] This should include explanation stability, e.g., how much slight changes influence the model in the input data. [XAI-EM-01.3] This should include explanation fidelity, e.g., how much the explanation approximates the prediction of the original complex model. [XAI-EM-01.4] This should include explanation relevance, e.g., how much irrelevant information is not shown. ## 5 Architecture In this chapter, we propose our MLOps software architecture that integrates explanation methods flexibly. As shown in the MLOps life cycle in Figure 1, the explainable life cycle involves tasks from different domains that need to be carried out. It is essential to follow the life cycle to ensure the process is transparent and understandable to the stakeholders involved. For this reason, the architectural components are categorized according to the corresponding domains: Data Administration, Model Training, Model Management, Feedback, and Model Observation. The architecture is presented in Figure 2. The components partially use object stores and databases; arrows show the flow of data, e.g., data sets are used by the ML IDE and the model training component, or triggers, e.g., model monitoring triggers retraining of models. The architecture domains are Described in Section 5.1 while the components are described in Section 5.2. ### Domains The steps of our MLOps life-cycle are covered in the architecture simplified by the generic terms of our architecture domains: _1. data administration_, _2. model training_, _3. model management_, _4. user feedback_, and _model observation_. 1. _Data administration (blue)_ includes the steps _requirements identification_, _data collection_, _load live data_ and _prepare live data_ from Figure 1. 2. _Model training (white)_ consists of _interactive training_ and _update improved model_. 3. _Model management (orange)_ is reflected by _deploy_. 4. Finally, the _user feedback_ and _model observation (green)_ include _record model output, user, and system response_. The components that reflect the explanatory functions of the MLOps life-cycle are shown together with their relation to _explanatory modeling_, _explanatory review_, _output explanations_, and _incremental explanatory training_ in Table 3. ### Components 1. **Data Administration** The data management component encapsulates functionality to collect, annotate, and version data. All other relevant metadata is also collected to trace each data sample to its source fully. Next, it Figure 2: EXPLAIN MLOps Software Architecture in five domains and eight components. Major XAI functionality in **bold**, but it is not limited to only those components. defines data sets, which are immutable representations of data and metadata to be used by other components in the architecture. They form the basis of reproducibility. The data can be stored in a database that suits the managed data samples, for example, S3-compatible image storage. The data monitoring component accesses data using data management to check whether the data distributions match the expectations. It is also used by the user feedback component, which can access the data for visualization, after which user feedback can be written back into the data management component. **Data Monitoring**: The model monitoring component is dedicated to tracking the performance of the models and explainers. It monitors data and concept drifts and has the functionality to detect this kind of distribution shift in the data automatically. The model monitoring component also uses these data metrics, which can give a holistic view of the performance of the ML applications this architecture serves. Our architecture provides dashboards for end users, including alarms and notifications when models or explainers are not functioning correctly. User feedback can be important to detect loss in performance - especially for tasks like anomaly detection, where no label is readily available to check for correctness. In cases of underperformance, our architecture can trigger model retraining automatically if the performance is below a certain threshold. 2. **Model Training ML IDE**: The ML IDE component enables data scientists and ML engineers to conduct initial experiments within the model training domain. This component can also retrieve data sets from the data management \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **Domain/Component** & **Explanatory** & **Explanatory** & **Output** & **Incremental** \\ & **Modeling** & **Review** & **Explanations** & **Training** \\ \hline **Data Administration** & & & & \\ Data Management & X & X & & \\ Data Monitoring & & & (X) & \\ \hline **Model Training** & & & & \\ ML IDE Component & X & X & & X \\ Training Component & X & X & & X \\ \hline **Model Management** & & & & \\ Model Registry & & X & & \\ Model Serving & X & & X & \\ \hline **Feedback** & & & & \\ User Feedback & X & & X & X \\ \hline **Model Observation** & & & & \\ Model Monitoring & & & (X) & X \\ \hline \end{tabular} \end{table} Table 3: Overview of the components and their relation to the explainability steps. An X means that an explainability step is partially or completely covered by a component, an (X) means that the component depends on the result of a step. component so it is clear with which immutable data the experiments were run. Together with the data set identifier, all other training parameters, such as the software, model architecture, and hyperparameters, are logged to the model training component. The ML IDE component does not standardize a specific way of working but can be the basis for new ML models and pipelines (data processing steps and model executions). This component serves as a platform to run training jobs on a cluster, which can speed up training. Further, the ML IDE is where hyperparameter tuning can be initialized. **Model Training**: The model training component enables the execution of existing training pipelines and hyperparameter tuning automatically. The data sets required for training the models and, potentially, explainers are streamed from the data management component. Lastly, the model training component must track all experiments. This includes the used data sets, training architecture and pipelines, hyperparameters, and software used. This component also has a view that allows for comparing and selecting the best-trained models using the hyperparameters and metrics as selectors. 3. **Model Management** **Model Registry**: The model registry component can register pipelines and take advantage of versioning capabilities, including explainers. This will easily integrate predictions into a production environment, making the process more efficient and effective. **Model Serving**: The platform's model serving component is designed to be scalable and can accommodate multiple registered and deployed models. Once deployed, the models can serve requests from other production services. The platform also keeps track of the inputs and outputs of the models and explainers it serves. This helps seamlessly update models to new versions e.g., using shadow or canary serving schemes, ensuring a smooth user experience. 4. **Feedback** **User Feedback**: The user feedback component is a valuable tool for operators to have control over the ML process. It allows them to provide feedback on the functionality and quality of a model and also enables them to label data. This feature is crucial to ensure the model performs accurately and effectively. As an operator, having the ability to intervene in the ML process and provide feedback is essential to achieving the desired outcomes. It is also the component that visualizes the explainers in application-specific dashboards, ensuring operators can correctly interpret the predictions made by the models. 5. **Model Observation** **Model Monitoring**: The model monitoring component keeps track of a ML model's performance over time. It constantly evaluates the model's ac curacy and identifies any declines in performance that may occur. Monitoring the model can quickly detect any issues with the data or the model itself and send alerts to the users. This allows for prompt action to rectify the situation. This helps to maintain the model's accuracy and usefulness over time. Next, it also allows for comparing different models on the data to get an idea of which one performs better. Finally, it can also trigger automatic retraining of existing models when there are new data and annotations for the model to use and then deploy the new models after they have been registered. ## 6 Current State We still need to implement the architecture described fully, but we have already implemented some components for testing and improving the architecture. In the majority of cases, partners use their existing components and extend them to implement the architecture. However, these are intended for internal use only and not for publication. The University of Hildesheim started by implementing a solution that even external parties can access [40]. Since not all the explanation methods are available in current libraries and tools, this solution includes manuals and several example XAI implementations. Here, _MLflow_ can store and track explanations. The explanation methods can be stored and versioned using common code repositories. However, the feedback component still is in a prototypical stage and does not yet allow for feedback on explanations or interactions with explanation methods. The components used for our general solution are summarized in Table 4. Next, we describe an example implementation of a software system for managing image data in the electronics domain. The system includes a self-developed \begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Component** & **Implementation** \\ \hline Data Management & Apache NiFi, MLflow \\ Data Monitoring & Grafana, Evidently \\ Model Training & Kubernetes, MLflow \\ ML IDE & Jupyter, ML workspace \\ Model Management & MLflow \\ Explainability Tools & H20, MLflow, IBM Watson AI explainability 360 \\ Model Serving & Seldon.core, MLflow.deployment, Kubeflow \\ Model Monitoring & Evidently, River, Grafana \\ Feedback Component & Not yet implemented / Prototypical state \\ \hline \end{tabular} \end{table} Table 4: Current components of the general solution [40]. feedback _Django_ GUI for labeling and correcting predictions, a scalable training platform, a model registry, and model deployment capabilities. For model training, a _Ray_ cluster is being set up, to allow for a scalable training platform. The model training experiments are tracked with _MLflow_, which is also used as a model registry. The components are deployed on a self-hosted, on-premises _Kubernetes_ cluster, except for the monitoring component, which uses a _Splunk_ dashboard (performance monitoring). Several image classification models are deployed with _Seldon Core_ which allows for horizontal scalability of model deployments in _Kubernetes_. The electronics use case team is actively improving the feedback GUI for XAI with new explainer visualization capabilities. They plan to incorporate a state-of-the-art library of explainers designed explicitly for image data. The explainers will be registered in the model registry and served through the model serving component, allowing other components to utilize and benefit from them as well. This also requires the explainers to be registered (in the model registry) and served (in the model serving component), so other components can use them. Yet, the software currently lacks automated retraining, alerting, and data monitoring capabilities. ## 7 Conclusion The increasing popularity of ML in industrial operations has opened up a range of opportunities for businesses to optimize their processes and improve productivity. However, deploying and managing ML models in production environments can be complex and challenging, especially in industries requiring high levels of explainability and transparency. To address these challenges, we developed a novel MLOps software architecture that provides an integrated approach to MLOps, allowing for the integration of explanations into the development and deployment processes. Several industrial companies and universities have adopted this architecture, which will be validated and improved in the future within the project EXPLAIN. One of the key benefits of this architecture is its ability to support the explainability of ML models in industrial operations. This is particularly important in industries where regulatory requirements or ethical considerations demand high transparency and accountability. Integrating explanations into the development and deployment processes also enables businesses to understand the behavior of their ML models better and identify potential issues or biases. This can improve the accuracy and reliability of the models and increase trust and confidence in their outputs. Overall, developing this novel MLOps software architecture is a significant step forward in integrating XAI into industrial operations. In the future, we plan to evaluate our implementation and gain experience with the explainability components. We will work towards making it more user-friendly and easier to understand. For this reason, we will develop instructions, solutions, and example implementations, if necessary, to facilitate their use. This way, users will better understand the system and be able to use it more efficiently. As technology evolves, it will be interesting to see how businesses leverage these new capabilities to optimize their processes, improve efficiency, and drive innovation. ## Acknowledgment This work is supported by the project EXPLAIN, funded by the Federal Ministry of Education under grant 01--S22030E, funded by the Netherlands Enterprise Agency RVO under grant AI212001, and funded by Sweden's Innovation Agency (Vinnova) under grant 2021-04336. Any opinions expressed herein are solely by the authors and not the funding agencies.
2309.15039
Can-SAVE: Mass Cancer Risk Prediction via Survival Analysis Variables and EHR
Specific medical cancer screening methods are often costly, time-consuming, and weakly applicable on a large scale. Advanced Artificial Intelligence (AI) methods greatly help cancer detection but require specific or deep medical data. These aspects prevent the mass implementation of cancer screening methods. For this reason, it is a disruptive change for healthcare to apply AI methods for mass personalized assessment of the cancer risk among patients based on the existing Electronic Health Records (EHR) volume. This paper presents a novel Can-SAVE cancer risk assessment method combining a survival analysis approach with a gradient-boosting algorithm. It is highly accessible and resource-efficient, utilizing only a sequence of high-level medical events. We tested the proposed method in a long-term retrospective experiment covering more than 1.1 million people and four regions of Russia. The Can-SAVE method significantly exceeds the baselines by the Average Precision metric of 22.8%$\pm$2.7% vs 15.1%$\pm$2.6%. The extensive ablation study also confirmed the proposed method's dominant performance. The experiment supervised by oncologists shows a reliable cancer patient detection rate of up to 84 out of 1000 selected. Such results surpass the medical screening strategies estimates; the typical age-specific Number Needed to Screen is only 9 out of 1000 (for colorectal cancer). Overall, our experiments show a 4.7-6.4 times improvement in cancer detection rate (TOP@1k) compared to the traditional healthcare risk estimation approach.
Petr Philonenko, Vladimir Kokh, Pavel Blinov
2023-09-26T16:15:54Z
http://arxiv.org/abs/2309.15039v2
# Combining Survival Analysis and Machine Learning ###### Abstract _Introduction:_ Purely medical cancer screening methods are often costly, time-consuming, and weakly applicable on a large scale. Advanced Artificial Intelligence (AI) methods greatly help cancer detection but require specific or complex medical data. These aspects affect the mass implementation of cancer screening methods. For these reasons, it is a disruptive change for healthcare to apply AI methods for mass personalized assessment of the cancer risk among patients based on the existing Electronic Health Records (EHR) volume. This paper proposes a novel method for personalized cancer risk prediction. _Methods & Data:_ We formulate the problem as a binary classification task with diseased and healthy patients as classes. We have data from one of the largest regional-level clinics at our disposal. This dataset contains 175 441 de-identified patient EHRs, of which 2 861 were diagnosed with cancer. As a baseline, we implement a solution based on a recurrent neural network (RNN). This RNN processes the sequence of embeddings constructed for each medical event by a BERT-based language model pre-trained on medical texts. _Proposed Method based on ML and Survival Analysis:_ We propose a method that combines machine learning and survival analysis since these approaches are less computationally heavy, can be combined into an ensemble (the Survival Ensemble), and can be reproduced in most medical institutions. Initially, we train survival models (the Kaplan-Meier estimators and the Accelerated Failure Time model). And then, we carry out feature engineering using the fitted survival models. As a result, the proposed Survival Ensemble is an ML-based method containing both classical ML features (extracted from data manually) and the fitted survival models as features. _Experiments:_ We test the Survival Ensemble in some numeric studies. Firstly, we obtain a significant difference between values of the primary metric (Average Precision) with 22.8% \(\pm\) 2.7% (ROC AUC 83.7% \(\pm\) 1.7%, F1 17.8% \(\pm\) 2.8%) for the Survival Ensemble versus 15.1% \(\pm\) 2.6% (ROC AUC 84.9% \(\pm\) 0.8%, F1 21.4% \(\pm\) 3.1%) for the Baseline method. These confidence intervals were computed at the 95%-th level. Secondly, the performance of the Survival Ensemble is also confirmed during the ablation study. Thirdly, our method exceeds age baselines by a significant margin. Moreover, in the blind retrospective out-of-time experiment, we have clearly shown that the proposed method is reliable in cancer patient detection (9 out of 100 selected). Such results exceed the estimates of medical screening, e.g., the best Number Needed to Screen (9 out of 1000 screenings). _Conclusion:_ This paper presents a novel method for mass cancer risk prediction using EHR data. Among other methods, our one stands out by the minimum data greedy policy, requiring only a history of medical service codes and diagnoses from EHR. Such a feature greatly expands the method's applicability among clinics with varying data completeness. Comparative experiments demonstrate that the proposed method outperforms traditional baseline methods significantly, achieving higher cancer patient detection. This method can help sort the patients' list for scheduled medical examinations, inviting high-risk patients first. Further improvements, such as end-to-end training, enhance the method's performance. keywords: Cancer, EHR, ICD-10, Machine Learning, Survival Analysis, Experiments. ## 1 Introduction Cancer is one of the leading causes of death worldwide. Over the past decades, the world has seen a dramatic increase in yearly cases of oncology detection [1; 2; 3]. On the one hand, this is due to factors of worsening environment [4; 5]. Paradoxically, on the other hand, this is due to the general tendency of an increase in life expectancy and substantial improvement in medical diagnostic methods [6; 7]. While oncology remains a severe disease with no definite cure, there is significant progress in treatment methods, especially effective in the early stages. Successful cure and sustained remission are much more probable for the first stages than similar success in the late ones. However, early detection of cancer development remains an acute problem because the disease can stay asymptomatic for a long time. There are special cancer screening methods (e.g., biological material tests and medical imaging), but they are often costly, time-consuming, and weakly applicable on a large scale. Many artificial intelligence (AI) methods exist in the literature to assess personalized cancer risk. However, these methods re
2309.06723
PIAVE: A Pose-Invariant Audio-Visual Speaker Extraction Network
It is common in everyday spoken communication that we look at the turning head of a talker to listen to his/her voice. Humans see the talker to listen better, so do machines. However, previous studies on audio-visual speaker extraction have not effectively handled the varying talking face. This paper studies how to take full advantage of the varying talking face. We propose a Pose-Invariant Audio-Visual Speaker Extraction Network (PIAVE) that incorporates an additional pose-invariant view to improve audio-visual speaker extraction. Specifically, we generate the pose-invariant view from each original pose orientation, which enables the model to receive a consistent frontal view of the talker regardless of his/her head pose, therefore, forming a multi-view visual input for the speaker. Experiments on the multi-view MEAD and in-the-wild LRS3 dataset demonstrate that PIAVE outperforms the state-of-the-art and is more robust to pose variations.
Qinghua Liu, Meng Ge, Zhizheng Wu, Haizhou Li
2023-09-13T04:54:44Z
http://arxiv.org/abs/2309.06723v1
# PIAVE: A Pose-Invariant Audio-Visual Speaker Extraction Network ###### Abstract It is common in everyday spoken communication that we look at the turning head of a talker to listen to his/her voice. Humans see the talker to listen better, so do machines. However, previous studies on audio-visual speaker extraction have not effectively handled the varying talking face. This paper studies how to take full advantage of the varying talking face. We propose a Pose-Invariant Audio-Visual Speaker Extraction Network (PIAVE) that incorporates an additional pose-invariant view to improve audio-visual speaker extraction. Specifically, we generate the pose-invariant view from each original pose orientation, which enables the model to receive a consistent frontal view of the talker regardless of his/her head pose, therefore, forming a multi-view visual input for the speaker. Experiments on the multi-view MEAD and in-the-wild LRS3 dataset demonstrate that PIAVE outperforms the state-of-the-art and is more robust to pose variations. Qinghua Liu\({}^{1,2}\), Meng Ge\({}^{2,3}\)1, Zhizheng Wu\({}^{2}\), Haizhou Li\({}^{1,2,3}\)\({}^{1}\)Shenzhen Research Institute of Big Data, Shenzhen, China \({}^{2}\)School of Data Science, The Chinese University of Hong Kong, Shenzhen, China \({}^{3}\) Department of Electrical and Computer Engineering, National University of Singapore, Singapore {liuqinghua,gemeng,wuzhizheng,haizhouli}@cuhk.edu.cn Footnote 1: Corresponding author **Index Terms**: speaker extraction, multi-modality, pose variation problem, pose-invariant view ## 1 Introduction The human brain has a remarkable ability to focus auditory attention on a particular voice by masking out the acoustic background in the presence of multiple speakers and background noises [1], that is called cocktail party effect [2]. With the advent of deep learning [3], neural approaches become increasingly popular in solving the cocktail party problem. The speaker extraction is one of such solutions that extracts the target speaker of interest from a multi-talk environment. It relies on an auxiliary cue to direct the attention towards the target speaker. The auxiliary cue may take different forms, such as pre-recorded reference speech [4, 5, 6], speech-synchronized video clips [7, 8, 9, 10], and the target speaker's spatial location [11, 12]. Notably, visual information is highlighted as a particularly useful cue for its immunity to acoustic noise and competing speakers [13] and is more informative than an audio cue [14]. Although recent audio-visual speaker extraction (AVSE) systems have demonstrated significant improvements in performance across a variety of standard datasets, the impact of head pose variations has not been studied. Head pose variations [15] as shown in Figure 1-(a) will result in changes in the orientation of the speaker's face in the camera view. Such variations adversely impact visually-assisted speech processing systems such as lipreading [16] and audio-visual speaker extraction. We argue that an invariant pose could help audio-visual speaker extraction. Hence, we propose a Pose-Invariant Audio-Visual Speaker Extraction Network, namely PIAVE. This approach involves generating pose-invariant view (front view, in this paper) faces from the original face track, enabling that the model receives a consistent frontal view of the talker regardless of head pose variations. Furthermore, the model benefits from multi-view observation of the talking face, namely the generated pose-invariant view and the original pose orientation, thereby aiding in the more accurate identification of the target speaker and obtaining better performance in speaker extraction. The contributions of this paper can be summarized as follows. Firstly, PIAVE represents the first step towards addressing the pose variation problem in AVSE. Secondly, we effectively generate the corresponding pose-invariant face for any given face image input, as shown in Figure 1-(b), which ensures stable visual input for the model. Lastly, when only one camera is used, PIAVE benefits from the multi-view observation of the target speaker, outperforming the state-of-the-art. ## 2 Speaker Extraction with Visual Cues Figure 2 is an illustration of the workflow of the proposed PIAVE network during run-time inference. The network takes an audio mixture \(\mathrm{x_{t}}\) and a video sequence of the target speaker's faces \(\mathrm{v_{r}}\) as inputs. It could be described as: \[\hat{\mathrm{y}}=\mathrm{Decoder}^{a}(\mathrm{Separator}(\mathrm{a_{e}}, \mathrm{v_{e}})\odot\mathrm{a_{e}}), \tag{1}\] where \(\hat{\mathrm{y}}\) is the predicted target audio, \(\odot\) is an operation for element-wise multiplication, \(\mathrm{a_{e}}\) and \(\mathrm{v_{e}}\) is encoded representations of the audio mixture and video sequence. The network consists of four parts: encoder, separator, decoder and pose normalizer. The pose normalizer will be described in Section 3. The audio encoder converts the audio mixture into a spectrum-like representation in the time domain, and the audio decoder is used to transform the masked encoder representation to the target audio following [17]. The visual encoder extracts visual embeddings, which model the temporal synchronization and interaction between visemes and speech. It has a 3D convolutional layer followed Figure 1: (a) An illustration of the pose variation in daily conversations. (b) Pose-invariant faces corresponding to the above row of faces. by an 18-layer ResNet [18], which is pretrained on the lipreading task following [19]. It takes a sequence of pose-invariant faces and the original face track as input and outputs the fixed dimensional feature vectors \(\mathrm{I}_{s}\in\mathbb{R}^{d\times n}\) and \(\mathrm{I}_{o}\in\mathbb{R}^{d\times n}\) for the two video streams, where \(d\) denotes the feature dimension of the visual embedding and \(n\) denotes the number of frames in each video stream. These feature vectors are fused together using addition and passed through a visual adapter that models the temporal dependencies across each frame to capture the temporal dynamics of the visual input. Moreover, since the time resolution of the video stream and the audio stream is different, we upsample the visual embeddings along the temporal dimension to synchronize the audio stream and the video stream by nearest neighbor interpolation. The separator is designed for estimating a mask to let pass the target speaker, and filter out others, conditioned on the encoded audio and visual features. To capture the long-term dependencies in the encoded audio features, the audio block employs a TCN network consisting of multiple TCN blocks, each with a depth-wise separable convolution, PReLU activation, and layer normalization operation. The output of the audio block is gathered with the encoded visual features \(\mathrm{v}_{e}\) through concatenation and then we use two repeats of the TCN network to process the fused audio-visual feature. A convolutional layer is followed by the fusion block to generate the mask of the target speaker. It is then multiplied by the encoded audio representations of the mixture \(\mathrm{a}_{e}\) to obtain the target audio \(\hat{y}\). During training, the Scale-Invariant Signal-to-Distortion ratio (SI-SDR) [20] between the predicted target audio \(\hat{y}\) and the ground-truth target audio \(y\) is used to optimize the network from end to end: \[\mathcal{L}_{\text{SI-SDR}}=-\rho(\hat{y},y), \tag{2}\] \[\rho(\hat{y},y)=20\log_{10}\frac{||(\hat{y}^{T}y/y^{T}y)\cdot y||}{||(\hat{y}^ {T}y/y^{T}y)\cdot y-\hat{y}||}. \tag{3}\] ## 3 Head Pose Normalization ### Problem formulation The variation of head poses has always been a challenge in both visual-only tasks and multi-modal processing. With large head pose variations, the intra-person variance of head representation may drastically increase, sometimes even exceeding inter-person variance. Recent research has shown that pose normalization consistently boosts the performance of face recognition [21, 22]. It should be noted that expression-free pose normalization is required for face recognition tasks. However, in AVSE, lip movements and facial expressions that contribute to speech production correspond with phonetic content and have a strong impact on the ability of humans to focus their auditory attention [23, 24]. For this reason, expression-preserving pose normalization is required in AVSE. In this paper, we propose pretraining the pose normalization module (i.e. pose normalizer in Figure 2-(a)) on the 3D face alignment and reconstruction task and keeping it frozen during the speaker extraction model training. The pose normalizer is expected to generate the expression-preserving pose-invariant face for any given face image input. The challenge is that large pose diversity makes it hard to distinguish facial expression variations and increases the modeling difficulty. To address this problem, we draw inspiration from [25] and propose to disentangle the head pose and facial expression to reduce the complexity of the problem and make it more tractable. In the following sections, we will introduce the pretraining setup for this dual face regression network and the approach we have adopted for head pose normalization. ### Facial geometry representation We separate the 3D face geometry into pose, mean shape, and deformation as: \[\mathrm{G}=f\times\mathrm{R}\times\mathrm{S}+\mathrm{t}, \tag{4}\] \[\mathrm{S}=(\overline{\mathrm{S}}+\mathrm{D}). \tag{5}\] where \(\mathrm{G}\in\mathbb{R}^{3\times m}\) is the 3D mesh of a specific face with \(m\) vertices. The pose parameters consist of the scale factor \(f\), the 3D rotation matrix \(\mathrm{R}\in\mathbb{R}^{3\times 3}\) and the 3D translation \(\mathrm{t}\in\mathbb{R}^{3}\). \(\mathrm{S}\in\mathbb{R}^{3\times m}\) represents the pose-invariant face shape, which is disentangled into the mean shape template \(\overline{\mathrm{S}}\in\mathbb{R}^{3\times m}\) from [26] and the deformation \(\mathrm{D}\in\mathbb{R}^{3\times m}\) between the actual shape \(\mathrm{S}\) and the mean shape \(\overline{\mathrm{S}}\). ### Dual face regression network Following the approach proposed in [25], we adopt a joint regression strategy to simultaneously estimate the pose-invariant face \(\mathrm{S}\) and the pose-dependent face \(\mathrm{P}\). Then, we employ a self-alignment module denoted by \(\phi\) to estimate face pose from the estimated faces: \[\phi(\mathrm{P},\mathrm{S})=f,\mathrm{R},\mathrm{t}. \tag{6}\] Based on the estimated face pose \(\phi(\mathrm{P},\mathrm{S})\) and pose-invariant face \(\mathrm{S}\), we could reconstruct the final shape \(\mathrm{G}\) via transformation defined in Eq. (4). Figure 2: (a) The overall architecture of the proposed PIAVE network. (b) An illustration of masking regions in Table 1 and 2. (c) Multi-view illustration of the MEAD dataset. The joint regression of the pose-invariant face \(\mathrm{S}\) and pose-dependent face \(\mathrm{P}\) is complimentary. For one thing, it helps to separate the effects of non-rigid facial changes due to expression from those resulting from rigid facial changes due to head pose. For another, the learning of pose-dependent face is easy to over-fit to pose and under-fit to facial expressions as the pose variations bring much greater point-to-point distances than the expression variations, resulting in poor reconstruction of expression in various views. By introducing the pose-invariant face \(\mathrm{S}\), which remains unchanged with the pose, the network can focus on modeling facial expressions. Meanwhile, as \(\mathrm{S}\) is disentangled into mean face template \(\overline{\mathrm{S}}\) and the deformation \(\mathrm{D}\), only the zero-centered \(\mathrm{D}\) is required to be predicted, thereby reducing the complexity of the fitting process. To facilitate the self-alignment process, the face geometry \(\mathrm{G}\), pose-invariant face \(\mathrm{S}\), mean face \(\overline{\mathrm{S}}\), deformation \(\mathrm{D}\), and pose-dependent face \(\mathrm{P}\) are transformed into UV space [27] as UV maps. Since pixels with the same coordinates in the UV maps correspond to the same semantic region, the self-alignment module estimates the similarity transformation matrices using two sets of landmarks extracted from \(\mathrm{P}\) and \(\mathrm{S}\), respectively, based on the same UV coordinate set. The dual face regression network is pretrained in a supervised manner on the 300W-LP dataset [28]. After the pretraining, we select the branch of the network that generates the pose-invariant face and employ it as our pose normalizer. This component takes a facial image as input and generates the corresponding pose-invariant face. ## 4 Experimental Setup ### Dataset The experiments are carried out on LRS3 [29] and MEAD [30] dataset. LRS3 is a large-scale in-the-wild dataset that includes videos obtained from the TED YouTube channel. MEAD contains talking-face videos which are simultaneously recorded at seven different views in a strictly-controlled environment. In our experiments, we simulate two-speaker mixtures from these two datasets, respectively. The target speech is mixed with a random interference speech at a random signal-to-noise ratio (SNR) in the range from -10 dB to 10 dB. The audio sampling rate is 16 kHz and the face track video of the target speaker is provided at 25 frames-per-second (FPS). For LRS3-2mix, there are 20,000 and 5,000 speech mixture utterances for training and validation simulated from the trainval set, and 3,000 speech mixtures for testing from 168 unseen and unheard speakers during training. For MEAD-2mix, we only select videos with neutral emotion intensity captured from the frontal view for training and validation. The test set contains videos from seven different views, as shown in Figure 2-(c). In total, it consists of 10,000, 1,000 speech mixture utterances for training and validation, and 1,000 for each individual view for testing. ### Implementation details On LRS3-2mix, we train the whole neural network for 50 epochs with the initial learning rate set to \(1e^{-3}\). On MEAD-2mix, we load the weights of the pretrained model on LRS3 to get a good starting point and fine-tune the neural network for 20 epochs with the initial learning rate set to \(1e^{-4}\). The learning rate is halved if the accuracy on the validation set does not improve for 3 consecutive epochs. The training process would stop if the accuracy does not improve for 6 consecutive epochs. Adam [31] is used as the optimizer. Gradient clipping with a maximum L2-norm of 5 is applied during training. ## 5 Results ### Pose-invariant evaluation on MEAD To analyze the effectiveness of pose-invariant faces in AVSE, we evaluate several variants of PIAVE as shown in Table1. One major impact of pose variations on visually-assisted speech processing systems is decreased performance in mismatched train/test conditions, i.e., the neural network is trained and tested on different poses [32]. Therefore, we conduct evaluations on the MEAD dataset, which includes videos captured from multiple views simultaneously, as illustrated in Figure 2-(c), to assess the performance of different systems under mismatched conditions. Specifically, we train the system using only front-view videos and test it on seven different views. In Table 1, we use the signal-to-distortion ratio (SDR) to measure the signal quality of the extracted speech under multiple views. For pose-invariant evaluation, we used the average SDR across all seven views (Avg(7)) to measure the overall signal quality and Avg(6) to measure the signal quality under mismatched train/test conditions. Our results indicate that the performance of the model degrades as the view distance from the front one increases, demonstrating the impact of pose variations. PIAVE (w/o PF) performs the worst among all variants, indicating the severe degradation of performance without the pose-invariant faces under pose variations. Comparison between PIAVE (Mask Upper) and PIAVE (Mask Lip) reveals that the movement of lip region have a more significant impact than the upper region of pose-invariant faces. The integration of both components, i.e., the entire face region, yields the best performance in PIAVE, with a \(45\%\) average improvement in performance over PIAVE (w/o PF). In conclusion, introducing pose-invariant faces improves the system's performance and makes it more robust against mismatched conditions under pose variations, bringing it one step closer to pose-invariant AVSE. ### Ablation study of PIAVE on LRS3 In Table 2, we present an ablation study on LRS3, a large-scale in-the-wild dataset, to elaborate on the significance of different video streams. Besides SI-SDR and SDR to measure the signal quality, we use the perceptual evaluation of speech quality (PESQ) [33] and the short term objective intelligibility (STOI) [34] to evaluate the perceptual quality and intelligibility of the extracted speech. The higher the better for these metrics. To illustrate the effect of incorporating pose-invariant faces, PIAVE brings about 0.52dB SI-SDR and 0.143 PESQ improvement when compared to PIAVE (w/o PF). When the lip region of pose-invariant faces is masked in PIAVE (Mask Lip), the resulting performance is even worse than that of PIAVE (w/o PF) in terms of SI-SDR and SDR. This result highlights that primarily the lip movements of pose-invariant faces contribute to the performance improvement. Additionally, PIAVE (Mask Upper) outperforms PIAVE (w/o PF), but there is still a gap between PIAVE (Mask Upper) and PIAVE, emphasizing the need for the utilization of the entire face region. Specifically, the upper region of the talking face offers a coarse-grained synchronization cue, while the lip region provides fine-grained viseme-phoneme mapping. We note that when the amount of training data is limited, the benefits of the upper region of the talking face can be larger, as depicted in Table 1. In summary, our results show that for the dataset with various pose variations, providing the model with stable pose-invariant visual input helps machine lis tening. Furthermore, integrating pose-invariant faces and original face tracks results in an improved observation of the talking face from multiple views, including lip movements and facial expressions from the upper region of the face, in which lip movements play a major role. ### PIAVE vs baseline on LRS3 In Table 3, a comparison of the results of our PIAVE model with two recent AVSE models, VisualVoice [8] and TDSE [7], is presented. VisualVoice is a complex spectral mapping approach that is used for speech separation, and TDSE is a time-domain speaker extraction system. We reproduce the aforementioned two baseline models and evaluated them on the same dataset used for training and testing PIAVE, to ensure a fair comparison. The results demonstrate that PIAVE outperforms the current SOTA models in terms of speech quality and intelligibility. ## 6 Discussion Head pose variations affect the identification of the auditory component of audio-visual speech stimuli [35], and how to reduce their impact has been the focus of previous research. This includes attempts to develop pose-invariant lipreading through the learning of multiple classifiers for each specific poses [36] or with a mapping function to transform features to a specific view, e.g. \(30^{\circ}\), \(45^{\circ}\) and \(60^{\circ}\) to the frontal view [37] or to the \(30^{\circ}\) view [38]. However, these approaches have their limitations in terms of being time-consuming with the increase of pose numbers and constrained by specific pose orientations. Recent research in lipreading has explored the efficacy of using multiple views of lips for improved performance [32]. While this approach has shown to be beneficial, it is limited by the need for multiple cameras. Moreover, recent research has investigated the impact of head movements on audio-visual speech enhancement and demonstrates the effectiveness of using face frontalization to remove head movements [39]. However, this approach is only validated on frontal views. We present the first attempt to address the pose variation problem in audio-visual speaker extraction with a pose-invariant view. Unlike previous approaches to this problem, we generate a corresponding pose-invariant view for any visual input with only one camera to enable stable visual input and multi-view observation. We believe this work can provide inspiration for other visually-assisted speech processing algorithms. There are also many open future directions. In the current stage, the pose-invariant face generated by the pose normalizer lacks facial texture, which typically contains identity information such as gender. This limitation may offset the advantages of pose normalization as a cue for the model to accurately identify the target speaker. Furthermore, potential areas for future research also include exploring more effective techniques for feature fusion between two video streams, as well as between audio and visual modality. ## 7 Conclusion In this paper, we have proposed a Pose-invariant Audio-visual Speaker Extraction Network (PIAVE) to address the pose variation problem, which is largely unexplored in AVSE. Specifically, we generate the pose-invariant view from each original pose orientation, which enables the model to receive a consistent frontal view of the talker regardless of his/her head pose. We validate the effectiveness of PIAVE on two datasets: the large-scale in-the-wild LRS3 dataset and the multi-view talking face MEAD dataset. Our experimental results demonstrate that the incorporation of pose-invariant faces results in a more robust model capable of handling variations in the head pose. It also improves the overall performance, by enabling stable input of the visual modality, as well as multi-view observation of talking faces. In summary, PIAVE provides a practical solution for the pose variation problem and is a step forward in modeling the cocktail party effect in uncontrollable circumstances. ## 8 Acknowledgments This work is supported by 1) Huawei Noah's Ark Lab; 2) National Natural Science Foundation of China (Grant No. 62271432); 3) Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong Kong, Shenzhen (Grant No. B10120210117-KP02); 4) German Research Foundation (DFG) under Germany's Excellence Strategy (University Allowance, EXC 2077, University of Bremen). \begin{table} \begin{tabular}{l c c c c c c c c} \hline **Model** & **Front** & **Top** & **Down** & **Left\_30** & **Left\_60** & **Right\_30** & **Right\_60** & **Avg(7)** & **Avg(6)** \\ \hline PIAVE (w/o PF) & 9.712 & 5.641 & 5.698 & 4.888 & 1.875 & 7.226 & 4.532 & 5.653 & 4.977 \\ PIAVE (Mask Lip) & 10.277 & 7.078 & 5.301 & 5.328 & 5.804 & 5.277 & 5.107 & 6.310 & 5.649 \\ PIAVE (Mask Upper) & 9.974 & 6.615 & 6.718 & 8.102 & 5.951 & 8.170 & **6.142** & 7.382 & 6.950 \\ PIAVE & **11.773** & **8.923** & **8.514** & **8.583** & **6.118** & **8.387** & 4.935 & **8.176** & **7.577** \\ \hline \end{tabular} \end{table} Table 1: Pose-invariant evaluation on MEAD dataset. Performance is reported with SDR(dB). Different views correspond to Figure 2-(c). PF refers to input with pose-invariant faces, and the masking regions in PIAVE (Mask Lip) and PIAVE (Mask Upper) is illustrated in Figure 2-(b). Avg(7) refers to the average SDR value of all 7 different views. Avg(6) refers to the average SDR value of 6 different views except for the front view. \begin{table} \begin{tabular}{l c c c} \hline **Model** & **SI-SDR** & **SDR** & **PESQ** & **STOI** \\ \hline Visualvoice [8] & 9.603 & 10.112 & 2.089 & 0.897 \\ TDSE [7] & 13.253 & 13.778 & 2.351 & 0.912 \\ PIAVE (Ours) & 15.255 & 15.569 & 2.585 & 0.944 \\ \hline \end{tabular} \end{table} Table 3: Comparison of our PIAVE model with the baseline models on LRS3 dataset.
2310.14913
Cluster soft sets and cluster soft topologies
The cluster soft point is an attempt to introduce a novel generalization of the soft closure point and the soft limit point. A cluster soft set is defined to be the system of all cluster soft points of a soft set. Then the fundamental properties of cluster soft sets are demonstrated. Moreover, the concept of a cluster soft topology on a universal set is introduced with regard to the cluster soft sets. The cluster soft topology is derived from a soft topology with an associated soft ideal, but it is finer than the original soft topology. On the other hand, if we start constructing the cluster soft topology from another cluster soft topology, we will end up with the first cluster soft topology we started with. The implication of cluster soft topologies is highlighted using some examples. Eventually, we represent the cluster soft closed sets in terms of several forms of soft sets.
Zanyar A. Ameen, Samer Al Ghour
2023-09-27T13:48:43Z
http://arxiv.org/abs/2310.14913v1
# Cluster soft sets and cluster soft topologies ###### Abstract The cluster soft point is an attempt to introduce a novel generalization of the soft closure point and the soft limit point. A cluster soft set is defined to be the system of all cluster soft points of a soft set. Then the fundamental properties of cluster soft sets are demonstrated. Moreover, the concept of a cluster soft topology on a universal set is introduced with regard to the cluster soft sets. The cluster soft topology is derived from a soft topology with an associated soft ideal, but it is finer than the original soft topology. On the other hand, if we start constructing the cluster soft topology from another cluster soft topology, we will end up with the first cluster soft topology we started with. The implication of cluster soft topologies is highlighted using some examples. Eventually, we represent the cluster soft closed sets in terms of several forms of soft sets. cluster soft set, soft closure point, soft limit point, cluster soft topology, soft ideal topology, soft derived set. 3 ## 1 Introduction For researchers in various disciplines, including medical science, economics, science, and engineering, uncertainty and insufficient information present numerous challenges. Soft set, defined by Molodtsov (1999), is a significant direction that has given rise to several extensions for overcoming these challenges among the different approaches intended to address them (see Molodtsov (2004)). Other theories, such as rough set theory of Pawlak (1982) and fuzzy set theory of Zadeh (1965), can be considered as mathematical methods for dealing with uncertainty, but each has its own set of difficulties. Ali et al. (2009) developed soft set theory by defining some new operations. The structure of parameter sets, especially those linked to soft sets, provides a consistent framework for modeling uncertain data. This results in the fast development of soft set theory in a short period of time, as well as a large variety of soft set real-world applications (see Alshami (2022); Dalkilic (2021, 2022b); Dalkilic and Demirtas (2022); Liu et al. (2021). For more details, we refer the readers to the survey given by Zhan and Alcantud (2019). It is known that rough set theory uses equivalence classes, while fuzzy set theory depends on the grades of memberships. In soft set theory, membership is determined by appropriate parameters. Despite being very different from one another, all three theories address uncertainty. An effective outcome from the mixing of these theories is possible. Such theories nowadays are called hybrid methods or models. Dubois and Prade (1990) were the first to combine fuzzy and rough set theories. In this fashion many generalization of soft sets appeared in the literature. For example, fuzzy soft set and rough soft set theories were respectively established by Maji et al. (2001) and Ali (2011). For more known and recent types of hybrid models, we encourage readers to check (Santos-Garcia and Alcantud (2023)). These models are not free from the real applications (see Dalkulity (2022a); Maji et al. (2001); Santos-Garcia and Alcantud (2023)). Multiple researchers have applied soft set theory to various mathematical structures like soft group theory Aktas and Cagman (2007), soft ring theory Acar et al. (2010), soft category theory Sardar and Gupta (2013), etc. Soft topology is one of the structures introduced by Shabir and Naz (2011) and Cagman et al. (2011) as a novel generalization of the classical topology. The work in the latter two manuscripts was crucial to the development of the field of soft topology. Many classical concepts in topology have been generalized and extended in soft set settings, for instance, soft separation axioms(Al-shami and El-Shafei (2020); Bayramov and Gunduz (2018)), soft separable spaces (Bayramov and Gunduz (2018)), soft connected spaces (Lin (2013)), soft compact spaces (Aygunoglu and Aygun, soft paracompact spaces (Lin (2013)), soft extremally disconnected spaces (Asaad (2017)), and soft submaximal spaces (Al Ghour and Ameen (2022)). The maximal or minimal elements concerning certain soft topological properties in the lattice of all soft topologies have been studied in (Al Ghour and Ameen (2022a); Ameen and Al Ghour (2022a)). Methods of generating soft topologies over a common universal set are another useful line of research. Terepeta (2019) gave two remarkable formulas for generating soft topologies from crisp topologies. Then, Al-shami and Kocinac (2019) showed that the soft topology generated via one of the formulas is equivalent to the enriched soft topology. Alcantud (2020) improved the formulas given by Terepeta in such a way that one can generate a soft topology from a system of crisp topologies. Ameen and Al Ghour (2022) introduced the so-called soft simple extension of a soft topology. The simple extended soft topology with respect to a soft topology and a soft set is generated by their (soft) union. Azzam et al. (2022) introduced methods of generating soft topologies by various soft set operators. Among them, the soft closure and soft derived set operators. The cluster soft set can be seen as a generalized version of the latter soft set operators. The newly produced topology, from cluster soft sets, is called the cluster soft topology. Cluster soft topological spaces are a mix of soft topological and soft algebraic structures. That is to say, the cluster soft topology is obtained from a soft topology along with a soft ideal. By virtue of some examples, it is shown that cluster soft topologies are the most natural class of soft topologies. Furthermore, the category of cluster soft topologies includes the earlier mentioned soft topologies. The rest of the paper is organized as follows. In Section 2, we provide a summary of the literature on soft set theory and soft topology. In Section 3, we introduce the notion of cluster soft sets and their fundamental properties to establish a new soft topology. In Section 4, the concept of cluster soft topology is introduced. Moreover, it is determined how original soft topology and cluster soft topology relate to one another. In Section 5, we propose characterizations of cluster soft closed sets in terms of some different types of soft sets. Finally, we conclude our paper with a short summary and possible lines of future work. ## 2 Preliminaries Let \(X\) be a universal set, \(\widetilde{\Omega}\) a set of parameters, and \(\mathcal{P}(X)\) the set of all subsets of \(X\). If \(F\colon\Omega\to\mathcal{P}(X)\) is a set-valued mapping and \(\Omega\subseteq\widetilde{\Omega}\), then the collection \((F,\Omega)=\{(\alpha,F(\alpha))\colon\alpha\in\Omega\}\) is said to be a soft set over \(X.\) By \(S_{\Omega}(X)\) we mean the set of soft sets over \(X\) parameterized by \(\Omega.\) The soft complement \((F,\Omega)^{c}(\)Shabir and Naz (2011)\()\) of a soft set \((F,\Omega)\) is a soft set \((F^{c},\Omega),\) where \(F^{c}:\Omega\to\mathcal{P}(X)\) is a mapping having the property that \(F^{c}(\alpha)=X-F(\alpha)\) for all \(\alpha\in\Omega.\) A soft set \((F,\Omega)\in S_{\Omega}(X)\) is called null (Maji et al. (2003)), denoted by \(\widetilde{\Phi},\) if \(F(\alpha)=\emptyset\) for all \(\alpha\in\Omega\) and called absolute (Maji et al. (2003)), denoted by \(\widetilde{X},\) if \(F(\alpha)=X\) for all \(\alpha\in\Omega.\) Evidently, \(\widetilde{X}^{c}=\widetilde{\Phi}\) and \(\widetilde{\Phi}^{c}=\widetilde{X}.\) A soft set \((F,\Omega)\) is called finite (Das and Samanta (2013)) if \(F(\alpha)\) is finite for each \(\alpha\in\Omega.\) Otherwise, it is called infinite. A soft element (Nazmul and Samanta (2013)), denoted by \(x_{\alpha},\) is a soft set \((F,\Omega)\) over \(X\) whenever \(F(\alpha)=\{x\}\) and \(F(\lambda)=\emptyset\) for all \(\lambda\in\Omega\) with \(\lambda\neq\alpha,\) where \(\alpha\in\Omega\) and \(x\in X.\) The soft element is called a soft point in (Xie (2015)). We prefer to use the concept of soft point in the sequel. By a statement \(x_{\alpha}\in(F,\Omega)\) we mean \(x\in F(\alpha).\) By \(P_{\Omega}(X)\) we denote the set of all soft points in \(X.\) A soft set \((A,\Omega_{1})\) is a soft subset of \((B,\Omega_{2})\) (see Molodtsov (1999)), write \((A,\Omega_{1})\subseteq(B,\Omega_{2})\), if \(\Omega_{1}\subseteq\Omega_{2}\subseteq\widetilde{\Omega}\) and \(A(\alpha)\subseteq B(\alpha)\) for all \(\alpha\in\Omega_{1}\), and \((A,\Omega_{1})=(B,\Omega_{2})\) if \((A,\Omega_{1})\subseteq(B,\Omega_{2})\) and \((B,\Omega_{2})\subseteq(A,\Omega_{1}).\) The soft union of soft sets \((A,\Omega),(B,\Omega)\) is represented by \((F,\Omega)=(A,\Omega)\)\(\widetilde{\Omega}\)\((B,\Omega),\) where \(F(\alpha)=A(\alpha)\cup B(\alpha)\) for all \(\alpha\in\Omega.\) The soft intersection of soft sets \((A,\Omega),(B,\Omega)\) is given by \((F,\Omega)=(A,\Omega)\)\(\widetilde{\Omega}\)\((B,\Omega),\) where \(F(\alpha)=A(\alpha)\cap B(\alpha)\) for all \(\alpha\in\Omega.\) The soft set difference \((A,\Omega)-(B,\Omega)\) is the soft set \((F,\Omega)=(A,\Omega)-(B,\Omega),\) where \(F(\alpha)=A(\alpha)-B(\alpha)\) for \(\alpha\in\Omega\) (see Terepeta (2019)). The definitions of soft union and soft intersection of two soft sets with respect to arbitrary subsets of \(\Omega\) were given by Maji et al. (2003). But it turns out that these definitions are misleading and ambiguous as reported by Ali et al. (2009) and Terepeta (2019). We follow that the definitions of soft union and soft intersection of soft sets given by Terepeta, which coincide with that adopted by Ali at al. (2009). **Definition 2.1** (Cagman et al. (2011); Shabir and Naz (2011)): _A collection \(\widetilde{\mathcal{T}}\) of \(S_{\Omega}(X)\) is said to be a soft topology on \(X\) if it satisfies the following axioms:_ 1. \(\widetilde{\Phi},\widetilde{X}\in\widetilde{\mathcal{T}}.\)__ 2. If \((F,\Omega),(G,\Omega)\in\widetilde{\mathcal{T}},\) then \((F,\Omega)\)\(\widetilde{\Omega}\)\((G,\Omega)\in\widetilde{\mathcal{T}}.\)__ 3. If \(\{(F_{i},\Omega)\colon i\in I\}\subseteqneqneq\widetilde{\mathcal{T}},\) then \(\widetilde{\Omega}_{i\in I}\)\((F_{i},\Omega)\in\widetilde{\mathcal{T}}.\)__ We call the triple \((X,\widetilde{\mathcal{T}},\Omega)\) a soft topological space on \(X.\) We call the elements of \(\widetilde{\mathcal{T}}\) soft open sets. We call the complement of every soft open or elements of \(\widetilde{\mathcal{T}}^{c}\) soft closed sets. By \(T_{\Omega}(X)\) we mean the lattice of all soft topologies on \(X\) (see Al Ghour and Ameen (2022a)). **Definition 2.2** (Nazmul and Samanta (2013)): _Let \((N,\Omega)\in S_{\Omega}(X)\) and \(\widetilde{\mathcal{T}}\in T_{\Omega}(X).\) Then \((N,\Omega)\) is called a soft neighborhood of \(x_{\alpha}\in P_{\Omega}(X)\) if there exists \((U,\Omega)\in\widetilde{\mathcal{T}}(x_{\alpha})\) such that \(x_{\alpha}\in(U,\Omega)\subseteqneq(N,\Omega),\) where \(\widetilde{\mathcal{T}}(x_{\alpha})\) is the family of all elements of \(\widetilde{\mathcal{T}}\) that contain \(x_{\alpha}.\)_ **Definition 2.3** (Cagman et al. (2011)): _Given a soft topology \(\tilde{\mathcal{T}}.\) A (countable) soft base for \(\tilde{\mathcal{T}}\) is a (countable) subcollection \(\mathcal{B}\subseteq\tilde{\mathcal{T}}\) such that elements of \(\tilde{\mathcal{T}}\) are unions of elements of \(\mathcal{B}.\)_ **Definition 2.4** (Ameen and Al Ghour (2022a)): _Let \(\mathcal{F}\subseteqneqneq S_{\Omega}(X).\) The intersection of all soft topologies on \(X\) containing \(\mathcal{F}\) is called a soft topology generated by \(\mathcal{F}\) and is referred to \(\tilde{\mathcal{T}}[\mathcal{F}].\)_ **Lemma 2.5** (Shabir and Naz (2011)): _Let \((X,\tilde{\mathcal{T}},\Omega)\) be a soft topological space, then for each \(\alpha\in\Omega,\) the collection \(\tilde{\mathcal{T}}(\alpha)=\{F(\alpha):(F,\Omega)\in\tilde{\mathcal{T}}\}\) is a (crisp) topology on \(X.\)_ **Definition 2.6** (Shabir and Naz (2011)): _Let \((B,\Omega)\in S_{\Omega}(X)\) and \(\tilde{\mathcal{T}}\in T_{\Omega}(X)\)._ 1. The soft closure of \((B,\Omega)\) is \(cl(B,\Omega):=\widehat{\Omega}\{(F,\Omega)\colon(B,\Omega)\ \widehat{\subseteq}\ (F,\Omega),(F,\Omega)\in\tilde{ \mathcal{T}}^{c}\}\). 2. The soft interior of \((B,\Omega)\) is \(int(B,\Omega)\):=\(\widetilde{\Omega}\{(F,\Omega)\colon(F,\Omega)\ \widehat{\subseteq}\ (B,\Omega),(F,\Omega)\in\tilde{ \mathcal{T}}\}\). **Definition 2.7** (Cagman et al. (2011)): _Let \((B,\Omega)\in S_{\Omega}(X)\) and \(\tilde{\mathcal{T}}\in T_{\Omega}(X)\). A soft point \(x_{\alpha}\in P_{\Omega}(X)\) is called a limit soft point of \((B,\Omega)\) if \((G,\Omega)\ \widehat{\Omega}\ (B,\Omega)-\{x_{\alpha}\}\neq\tilde{ \mathcal{P}}\) for all \((G,\Omega)\in\tilde{\mathcal{T}}(x_{\alpha})\). The set of all limit soft points is symbolized by \(\mathcal{D}(B,\Omega)\)._ Then \(cl(F,\Omega)=(F,\Omega)\ \widetilde{\cup}\ \mathcal{D}(F,\Omega)\) (see Theorem 5 in Cagman et al. (2011)). **Definition 2.8** (Kandil et al. (2014)): _A non-null class \(\tilde{I}\ \widehat{\subseteq}\ S_{\Omega}(X)\) is termed a soft ideal on \(X\) if \(\tilde{I}\) satisfies the following conditions:_ 1. If \((R,\Omega),(S,\Omega)\in\tilde{I}\), then \((R,\Omega)\ \widetilde{\cup}\ (S,\Omega)\in\tilde{I}\); and 2. If \((R,\Omega)\in\tilde{I}\) and \((S,\Omega)\ \widehat{\subseteq}\ (R,\Omega)\), then \((S,\Omega)\in\tilde{I}\). \(\tilde{I}\) is called a soft \(\sigma\)-ideal if (1) holds for countably many soft sets. We denote the family of soft ideals on \(X\) by \(I_{\Omega}(X)\). Applying the definition, for any \(\tilde{I},\tilde{J}\in I_{\Omega}(X)\), one can directly show that \(\tilde{I}\ \widehat{\Omega}\ \tilde{J}\) and \(\tilde{I}\ \widetilde{\cup}\ \tilde{J}\) are also soft ideals, where \(\tilde{I}\ \widetilde{\cup}\ \tilde{J}\): \(=\{(A,\Omega)\ \widetilde{\cup}\ (B,\Omega)\colon(A,\Omega)\in\tilde{I},(B,\Omega)\in \tilde{J}\}\). **Lemma 2.9** (Matejdes (2016); Matejdes (2021)): _Let \(Gr(F)=\{(\alpha,x)\in\Omega\times X\colon x\in F(\alpha)\}\), which is the graph of the set-valued function \(F\colon\Omega\to\mathcal{P}(X)\). Then_ 1. If \((\Omega\times X,\mathcal{T})\) is a topological space, then \((X,\tilde{\mathcal{T}},\Omega)\) is a soft topological space, where \(\tilde{\mathcal{T}}=\{(F,\Omega)\colon Gr(F)\in\mathcal{T}\}\). 2. If \((X,\tilde{\mathcal{T}},\Omega)\) is a soft topological space, then \((\Omega\times X,\mathcal{T})\) is a topological space, where \(\mathcal{T}=\{Gr(F)\colon(F,\Omega)\in\tilde{\mathcal{T}}\}\). **Lemma 2.10** (Kandil et al. (2014)): _Let \(\tilde{I}\) be a soft ideal on \(X\). Then for each \(\alpha\in\Omega\), the collection \(\tilde{I}(\alpha)=\{A(\alpha)\colon(A,\Omega)\in\tilde{I}\}\) is a crisp ideal on \(X\)._ ## 3 Cluster soft sets **Definition 3.1**: _Let \((R,\Omega)\in S_{\Omega}(X)\), \(\tilde{\mathcal{T}}\in T_{\Omega}(X)\), and \(\tilde{I}\in I_{\Omega}(X)\). A soft point \(x_{\alpha}\in P_{\Omega}(X)\) is a cluster soft point of \((R,\Omega)\) if \((R,\Omega)\ \widehat{\cap}\ (U,\Omega)\not\in\tilde{I}\) for each \((U,\Omega)\in\tilde{\mathcal{T}}(x_{\alpha})\). We shall not make difference between the terminologies "cluster soft point"and "soft cluster point". The set of all the cluster soft points of \((R,\Omega)\) is called the cluster soft set of \((R,\Omega)\) and is denoted by \(\operatorname{\,c}_{(\tilde{J},\Omega)}(R,\Omega)\) or shortly \(\operatorname{\,c}(R,\Omega)\)._ **Remark 3.2**: _Given \((R,\Omega)\in S_{\Omega}(X)\), \(\tilde{\mathcal{T}}\in T_{\Omega}(X)\), and \(\tilde{I}\in I_{\Omega}(X)\). We shall remark that_ 1. If \(\tilde{I}=\{\widetilde{\Phi}\}\), then \(\operatorname{\,c}(R,\Omega)=cl(R,\Omega)\). That is, the soft cluster points are identical to the soft closure points of \((R,\Omega)\). If \(\tilde{I}=\{(F,\Omega)\colon(F,\Omega)\in S_{\Omega}(X),(F,\Omega)\) is finite\(\}\), then \(c(R,\Omega)=\mathcal{D}(R,\Omega)\). That is, the soft cluster points are identical to the soft limit points of \((R,\Omega)\). Now, we present some properties of soft cluster sets. **Proposition 3.3**: _Let \((R,\Omega),(S,\Omega)\in S_{\Omega}(X),\ \tilde{\mathcal{T}}\in S_{\Omega}(X)\), and \(\tilde{I}\in I_{\Omega}(X)\). The following properties hold:_ 1. If \((R,\Omega)\in\tilde{I}\), then \(c(R,\Omega)=\widetilde{\Phi}\). 2. If \((R,\Omega)\subsetneq(S,\Omega)\), then \(c(R,\Omega)\subsetneqneq c(S,\Omega)\). 3. \(c((R,\Omega)\ \widetilde{\cap}\ (S,\Omega))\subsetneqneq c(R,\Omega)\ \widetilde{\cap}\ \ c(S,\Omega)\). 4. \(c((R,\Omega)\ \widetilde{\cup}\ (S,\Omega))=c(R,\Omega)\ \widetilde{\cup}\ \ c(S,\Omega)\). 5. \(c(R,\Omega)-c(S,\Omega)\subsetneqneq c((R,\Omega)-(S,\Omega))\). _Proof._ 1. Let \((R,\Omega)\in\tilde{I}\) and \(x_{\alpha}\in P_{\Omega}(X)\). Then for each \((U,\Omega)\in\tilde{\mathcal{T}}(x_{\alpha}),\ (R,\Omega)\ \widetilde{\cap}\ (U,\Omega)\subsetneq(R,\Omega)\) implies \((R,\Omega)\ \widetilde{\cap}\ (U,\Omega)\in\tilde{I}\). Therefore, \(x_{\alpha}\notin c(R,\Omega)\) and so \(c(R,\Omega)=\widetilde{\Phi}\). 2. Let \(x_{\alpha}\in P_{\Omega}(X)\). If \(x_{\alpha}\notin c(S,\Omega)\), then there exists \((U,\Omega)\in\tilde{\mathcal{T}}(x_{\alpha})\) such that \((S,\Omega)\ \widetilde{\cap}\ (U,\Omega)\in\tilde{I}\). Since \((R,\Omega)\subsetneqneq(S,\Omega)\), then \((R,\Omega)\ \widetilde{\cap}\ (U,\Omega)\subsetneqneq(S,\Omega)\ \widetilde{\cap}\ (U,\Omega)\) and hence \((R,\Omega)\ \widetilde{\cap}\ (U,\Omega)\in\tilde{I}\). Thus, \(x_{\alpha}\notin c(R,\Omega)\). 3. Since \((R,\Omega)\ \widetilde{\cap}\ (S,\Omega)\subsetneqneq(R,\Omega)\) and \((R,\Omega)\ \widetilde{\cap}\ (S,\Omega)\subsetneq(S,\Omega)\). By (2), \(c((R,\Omega)\ \widetilde{\cap}\ (S,\Omega))\subsetneq c(R,\Omega)\) and \(c((R,\Omega)\ \widetilde{\cap}\ (S,\Omega))\subsetneq c(S,\Omega)\). Therefore, \(c((R,\Omega)\ \widetilde{\cap}\ (S,\Omega))\subsetneqneq c(R,\Omega)\ \widetilde{\cap}\ (S,\Omega)\). 4. By a similar technique of (3), one can obtain \(c(R,\Omega)\ \widetilde{\cup}\ (c(S,\Omega)\subsetneqneq c((R,\Omega)\ \widetilde{\cup}\ (S,\Omega))\). On the other hand, let \(x_{\alpha}\in P_{\Omega}(X)\) and \(x_{\alpha}\notin c(R,\Omega)\ \widetilde{\cup}\ (S,\Omega)\). Then there exist \((U,\Omega),(V,\Omega)\in\tilde{\mathcal{T}}(x_{\alpha})\) such that \((R,\Omega)\ \widetilde{\cap}\ (U,\Omega),(S,\Omega)\ \widetilde{\cap}\ (V,\Omega)\in\tilde{I}\). So \((U,\Omega)\ \widetilde{\cap}\ (V,\Omega)\in\tilde{\mathcal{T}}(x_{\alpha})\) and \([(R,\Omega)\ \widetilde{\cap}\ (U,\Omega)]\ \widetilde{\cup}\ [(S,\Omega)\ \widetilde{\cap}\ (V, \Omega)]\in\tilde{I}\). But \([(R,\Omega)\ \widetilde{\cup}\ (S,\Omega)]\ \widetilde{\cap}\ [(U,\Omega)\ \widetilde{\cap}\ (V, \Omega)]\subseteq[(R,\Omega)\ \widetilde{\cap}\ (U,\Omega)]\ \widetilde{\cup}\)\([[S,\Omega)\ \widetilde{\cap}\ (V,\Omega)]\) implies \([(R,\Omega)\ \widetilde{\cup}\ (S,\Omega)]\ \widetilde{\cap}\ [(U,\Omega)\ \widetilde{\cap}\ (V, \Omega)]\in\tilde{I}\). Hence, \(x_{\alpha}\in c((R,\Omega)\ \widetilde{\cup}\ (S,\Omega))\). The proof is finished. 5. Since \((R,\Omega)=((R,\Omega)-(S,\Omega))\ \widetilde{\cup}\ ((R,\Omega)\ \widetilde{\cap}\ (S,\Omega))\). By (4), \(c(R,\Omega)=c((R,\Omega)-(S,\Omega))\ \widetilde{\cup}\ ((R,\Omega))\). By (2), \(c(R,\Omega)\subsetneqneq c((R,\Omega)-(S,\Omega))\ \widetilde{\cup}\ (S,\Omega)\). Therefore, \(c(R,\Omega)-c(S,\Omega)\subseteqneq c((R,\Omega)-(S,\Omega))\). For any index set \(J\) and any family \(\widetilde{\mathcal{S}}\) of finite subsets of \(J\), we have the following statements: **Proposition 3.4**: _Let \((R_{j},\Omega)\in S_{\Omega}(X)\) for \(j\in J,\ \tilde{\mathcal{T}}\in T_{\Omega}(X)\), and \(\tilde{I}\in I_{\Omega}(X)\). The following properties hold:_ 1. \(c(\widetilde{\cup}_{j\in N}(R_{j},\Omega))=\widetilde{\cup}_{j\in N}c(R_{j}, \Omega)\), where \(N\in\widetilde{\mathcal{S}}\). 2. \(\widetilde{\cup}_{j\in J}c(R_{j},\Omega)\subseteqneq c(\widetilde{\cup}_{j\in J }(R_{j},\Omega))\). 3. \(\varsigma(\widehat{\bigcap}_{j\in J}(R_{j},\Omega))\cong\widehat{\bigcap}_{j\in J }\varsigma(R_{j},\Omega).\) 4. \(\varsigma(\widetilde{\bigcup}_{j\in J}(R_{j},\Omega))=\widetilde{\bigcup}_{j \in J}\varsigma(R_{j},\Omega)\ \widetilde{\cup}\ [\widehat{\bigcap}_{N\in\widetilde{\otimes}} \varsigma(\widetilde{\bigcup}_{j\in J-N}(R_{j},\Omega))].\) _Proof._ 1. Let \(N\) be a finite subset of \(J.\) Since \((R_{j},\Omega)\cong\widetilde{\bigcup}_{j\in N}(R_{j},\Omega)\) for each \(j,\) by (2) in Proposition 3.3, \(\varsigma(R_{j},\Omega)\cong\varsigma(\widetilde{\bigcup}_{j\in N}(R_{j},\Omega))\) and hence \(\widetilde{\bigcup}_{j\in N}\varsigma(R_{j},\Omega)\cong\varsigma( \widetilde{\bigcup}_{j\in N}(R_{j},\Omega)).\) On the other hand, if there exists \(x_{\alpha}\in P_{\alpha}(X)\) such that \(x_{\alpha}\in\widetilde{\bigcup}_{j\in N}\varsigma(R_{j},\Omega),\) then, for each \(j\in N,\) there exists \((U_{j},\Omega)\in\widetilde{\mathcal{F}}(x_{\alpha})\) such that \((R_{j},\Omega)\ \widetilde{\cap}\ (U_{j},\Omega)\in\widetilde{I}.\) Therefore, \(\widetilde{\bigcup}_{j\in N}(U_{j},\Omega)\in\widetilde{\mathcal{F}}(x_{ \alpha})\) and \(\widetilde{\bigcup}_{j\in N}[(R_{j},\Omega)\ \widetilde{\cap}\ (U_{j},\Omega)]\in \widetilde{I}.\) But \[[\widetilde{\bigcup}_{j\in N}(R_{j},\Omega)]\ \widehat{\cap}\ [\widehat{\bigcap}_{j\in N} (U_{j},\Omega)]\ \cong\ \widetilde{\bigcup}_{j\in N}\ [(R_{j},\Omega)\ \widehat{\cap}\ (U_{j},\Omega)].\] This implies that \([\widetilde{\bigcup}_{j\in N}(R_{j},\Omega)]\ \widehat{\cap}\ [\widetilde{\bigcup}_{j\in N}(U_{j},\Omega)]\in \widetilde{I}\) and so, \(x_{\alpha}\notin\varsigma(\widetilde{\bigcup}_{j\in N}(R_{j},\Omega))\). Thus, \(\varsigma(\widetilde{\bigcup}_{j\in N}(R_{j},\Omega))\cong\widetilde{\bigcup}_ {j\in N}\varsigma(R_{j},\Omega).\) Both of the inclusions prove (1). 2. Since \(\big{(}R_{j},\Omega\big{)}\cong\widetilde{\bigcup}_{j\in J}\big{(}R_{j},\Omega \big{)}\) for each \(j\in J,\) by (2) in Proposition 3.3, \(\varsigma(R_{j},\Omega)\cong\varsigma(\widetilde{\bigcup}_{j\in J}(R_{j},\Omega))\) and so \(\widetilde{\bigcup}_{j\in J}\varsigma(R_{j},\Omega)\cong\varsigma( \widetilde{\bigcup}_{j\in J}(R_{j},\Omega)).\) 3. Since \(\widetilde{\bigcap}_{j\in J}\big{(}R_{j},\Omega\big{)}\cong\big{(}R_{j},\Omega \big{)}\) for each \(j\in J,\) by (2) in Proposition 3.3, \(\varsigma\left(\widetilde{\bigcap}_{j\in J}\big{(}R_{j},\Omega\big{)}\right)\)\(\cong\)\(\varsigma(R_{j},\Omega),\) for each \(j,\) and thus \(\varsigma(\widetilde{\bigcap}_{j\in J}(R_{j},\Omega))\cong\widehat{\bigcap}_{j\in J }\varsigma(R_{j},\Omega).\) 4. By (2), the inclusion \(\widetilde{\bigcup}_{j\in J}\varsigma(R_{j},\Omega)\ \widetilde{\cup}\ [\widehat{\bigcap}_{N\in \widetilde{\otimes}}\varsigma(\widetilde{\bigcup}_{j\in J-N}(R_{j},\Omega))] \subseteq\varsigma(\widetilde{\bigcup}_{j\in J}(R_{j},\Omega))\) can be followed. To prove the other direction, we choose any \(N\in\widetilde{\otimes}.\) From (1), we can have \[\varsigma(\widetilde{\bigcup}_{j\in J}\ (R_{j},\Omega))=\widetilde{\bigcup}_{j\in N }\varsigma(R_{j},\Omega)\ \ \widetilde{\cup}\ \ \varsigma(\widetilde{\bigcup}_{j\in J-N}\ (R_{j},\Omega)).\] Therefore, \[\varsigma(\widetilde{\bigcup}_{j\in J}\ (R_{j},\Omega))\ \widetilde{\subseteq}\ \widetilde{\bigcup}_{j\in J }\ \varsigma(R_{j},\Omega)\ \ \widetilde{\cup}\ \ \varsigma(\widetilde{\bigcup}_{j\in J-N}\ (R_{j},\Omega)).\] Since \(N\) was chosen arbitrarily, so \[\varsigma(\widetilde{\bigcup}_{j\in J}\ (R_{j},\Omega))\ \widetilde{\subseteq}\ \widetilde{\bigcup}_{j\in J }\ \varsigma(R_{j},\Omega)\ \ \widetilde{\cup}\ [\widehat{\bigcap}_{N\in \widetilde{\otimes}}\varsigma(\widetilde{\bigcup}_{j\in J-N}\ (R_{j},\Omega))].\] Hence the proof. **Lemma 3.5**: _Let \((R,\Omega)\in S_{\Omega}(X),\ \widetilde{\mathcal{F}}\in T_{\Omega}(X),\) and \(\tilde{I}\in I_{\Omega}(X).\) Then_ 1. \(\varsigma(R,\Omega)\ \widetilde{\subseteq}\ cl(R,\Omega).\)__ 2. \(\varsigma(R,\Omega)\in\widetilde{\mathcal{F}}^{c}.\)__ 3. \(\varsigma[\varsigma(R,\Omega)]\ \widetilde{\subseteq}\ \varsigma(R,\Omega).\)__ _Proof._ 1. This follows as \(\widetilde{\Phi}\in\tilde{I}.\) To prove (2) we relate each soft point \(x_{\alpha}\in\tilde{X}-c(R,\Omega)\) to a soft set \((U_{x_{\alpha}},\Omega)\in\tilde{\mathcal{T}}(x_{\alpha})\) such that \((R,\Omega)\ \tilde{\mathcal{T}}\ (U_{x_{\alpha}},\Omega)\in\tilde{I}.\) So for any \(y_{\lambda}\in(U_{x_{\alpha}},\Omega),\) we have \(y_{\lambda}\in\tilde{X}-c(R,\Omega).\) Then \[\tilde{X}-c(R,\Omega)=\widetilde{\bigcup}_{x_{\alpha}\in\tilde{X}-c(R,\Omega) }\ (U_{x_{\alpha}},\Omega).\] Thus, \(\tilde{X}-c(R,\Omega)\in\tilde{\mathcal{T}}\) as it is a union of soft open sets. Consequently, \(c(R,\Omega)\in\tilde{\mathcal{T}}^{c}.\) 3. By (1) and Proposition 3.3 (2), we have \(c[c(R,\Omega)]\ \widehat{\subseteq}\ cl[c(R,\Omega)].\) But \(cl[c(R,\Omega)]=c(R,\Omega)\) from (2). Hence, \(c[c(R,\Omega)]\ \widehat{\subseteq}\ c(R,\Omega).\) According to the above results, a soft cluster set is a generalization of a soft closure set and \((R,\Omega)\ \widehat{\subseteq}\ cl(R,\Omega)\) for any \((R,\Omega)\in S_{\Omega}(X).\) However, we cannot have \((R,\Omega)\ \widehat{\subseteq}\ c(R,\Omega)\) in general. **Example 3.6**: _For a set of parameters \(\Omega=\{\alpha,\lambda\},\) let \(\tilde{I}=\{(\alpha,A(\alpha)),(\lambda,A(\lambda))\colon A(\alpha)\subseteq \mathbb{Q},A(\lambda)\)_ _(finite) \(\subseteq\mathbb{R}\}\) and let \(\tilde{\mathcal{T}}\) be the soft topology on the set of real numbers \(\mathbb{R}\) generated by_ \[\{\{(\alpha,B(\alpha)),(\lambda,B(\lambda))\}\colon B(\alpha)=(a,b),B(\lambda )=(c,d];a,b,c,d\in\mathbb{R};a<b,c<d\}.\] Take \((R,\Omega)=\{(\alpha,\{5\}),(\lambda,\{\pi\})\}.\) Then \(c(R,\Omega)=\tilde{\Phi}\) and hence \((R,\Omega)\ \widehat{\subseteq}\ c(R,\Omega).\) We end this part by commenting that the cluster soft set of \((R,\Omega)\) is called a soft local function (") of \((R,\Omega)\) in Kandil et al. (2014). The reader may find some results in this direction in Kandil et al. (2014), but our theory is totally different. Furthermore, the conclusion (8) in Theorem 3.2 in Kandil et al. (2014) is false. **Example 3.7**: _If \(\tilde{\mathcal{T}}\) is the soft topology on the set of real numbers \(\mathbb{R}\) generated by_ \[\{((a,b),\Omega);a,b\in\mathbb{R};a<b\},\] where \(\Omega\) is any set of parameters and \(\tilde{I}\) is a soft ideal of finite soft subsets of \(\mathbb{R}.\) Take \((A_{n},\Omega)=(\{1/n\},\Omega)\) for \(n\in\mathbb{N}\). Then \(c(A_{n},\Omega)=\tilde{\Phi}\) for each \(n\) and so \(\widetilde{\bigcup}_{n}c(A_{n},\Omega)=\tilde{\Phi}\), while \(c(\widetilde{\bigcup}_{n}(A_{n},\Omega))=(\{0\},\Omega).\) Thus, \(c(\widetilde{\bigcup}_{n}(A_{n},\Omega))\neq\widetilde{\bigcup}_{n}c(A_{n},\Omega).\) ## 4 Cluster soft topologies In order to introduce the cluster soft topology on \(X,\) we need to define the following concept: **Definition 4.1**: _Let \((R,\Omega)\in S_{\Omega}(X),\ \tilde{\mathcal{T}}\in T_{\Omega}(X),\) and \(\tilde{I}\in I_{\Omega}(X).\) Then \((R,\Omega)\) is said to be a cluster soft closed set (shortly, soft \(c\)-closed set) if \(c(R,\Omega)\ \widehat{\subseteq}\ (R,\Omega).\)_ **Lemma 4.2**: _Let \(\tilde{\mathcal{T}}\in T_{\Omega}(X)\) and \(\tilde{I}\in I_{\Omega}(X).\) The following statements are valid:_ 1. \(\tilde{\Phi},\tilde{X}\) are soft \(c\)-closed. 2. Each element of \(\tilde{I}\) is soft \(c\)-closed. 3. Each soft closed set is soft \(c\)-closed. 4. Any intersection of soft \(c\)-closed sets is soft \(c\)-closed. 5. A finite union of soft \(c\)-closed sets is soft \(c\)-closed. _Proof._ 1. \(c(\widetilde{\Phi})=\widetilde{\Phi}\) by Proposition 3.3 (1) as \(\widetilde{\Phi}\in\tilde{I}\) and \(c(\widetilde{X})\subseteqneq\widetilde{X}\) always. Therefore, \(\widetilde{\Phi},\widetilde{X}\) are soft \(c\)-closed. 2. Given \((R,\Omega)\in S_{\Omega}(X)\). If \((R,\Omega)\in\tilde{I}\), by Proposition 3.3 (1), \(c(R,\Omega)=\widetilde{\Phi}\)\(\subseteqneq(R,\Omega)\). Thus, \((R,\Omega)\) is soft \(c\)-closed. 3. Given \(x_{\alpha}\in P_{\Omega}(X)\). Assume \((R,\Omega)\in\tilde{J}^{c}\) and \(x_{\alpha}\in c(R,\Omega)\). Then for each \((U,\Omega)\in\tilde{J}(x_{\alpha})\), \((R,\Omega)\)\(\widetilde{\Omega}\)\((U,\Omega)\)\(\neq\)\(\widetilde{\Phi}\). Since \((R,\Omega)^{c}\in\tilde{J}\) and \((R,\Omega)\)\(\widetilde{\Omega}\)\((R,\Omega)^{c}=\widetilde{\Phi}\), we shall obtain that \(x_{\alpha}\notin(R,\Omega)^{c}\). This implies that \(x_{\alpha}\in(R,\Omega)\). Hence, \(c(R,\Omega)\)\(\subseteqneq(R,\Omega)\). This shows that \((R,\Omega)\) is soft \(c\)-closed. 4. Let \(\{(R_{j},\Omega)\colon j\in J\}\) be a family of soft \(c\)-closed sets over \(X\). For each \(j\), we then have \(c(R_{j},\Omega)\)\(\subseteqneq(R_{j},\Omega)\). By Proposition 3.4 (3) and the latter statement, we have \[c(\widetilde{\Omega}_{j\in J}(R_{j},\Omega))\)\(\subseteqneq\)\(\widetilde{\Omega}_{j\in J}c(R_{j},\Omega)\)\(\subseteqneq\)\(\widetilde{\Omega}_{j\in J}(R_{j},\Omega)\). Thus, \(\widetilde{\Omega}_{j\in J}(R_{j},\Omega)\) is soft \(c\)-closed. 5. Let \((R_{j},\Omega)\) be a soft \(c\)-closed sets over \(X\), for \(j=1,2,...,n\). By Definition 4.1, for each \(j\), we have \(c(R_{j},\Omega)\)\(\subseteqneq(R_{j},\Omega)\) and therefore \(\widetilde{\Omega}_{j=1}^{n}c(R_{j},\Omega)\)\(\subseteqneq\)\(\widetilde{\Omega}_{j=1}^{n}(R_{j},\Omega)\). By Proposition 3.4 (1), we have \[c(\widetilde{\Omega}_{j=1}^{n}(R_{j},\Omega))=\widetilde{\Omega}_{j=1}^{n}c( R_{j},\Omega)\)\(\subseteqneq\)\(\widetilde{\Omega}_{j=1}^{n}(R_{j},\Omega)\). Hence, \(\widetilde{\Omega}_{j=1}^{n}(R_{j},\Omega)\) is soft \(c\)-closed. It is essential to note that the reverse of (3) is not true in general. **Example 4.3**: _Let \((X,\tilde{J},\Omega)\) be the indiscrete soft topological space and let \(\tilde{I}=S_{\Omega}(X)\) be the soft ideal on \(X\). Any proper soft subset of \(\tilde{X}\) is soft \(c\)-closed but not soft closed._ **Remark 4.4**: _Given a soft topological space \((X,\tilde{J}^{*},\Omega)\) and a soft ideal \(\tilde{I}\) on \(X\). A soft subset of \(X\) is called \(c\)-open if its complement is soft \(c\)-closed. By Lemma 4.2 (1), (4) and (5), one can see that the family of all \(c\)-open sets over \(X\) forms a soft topology on \(X\) and it is called a cluster soft topology or shortly \(a\) soft \(c\)-topology. We denote \(a\) cluster soft topology on \(X\) by \(\tilde{J}_{c}(\tilde{I})\) or simply \(\tilde{J}_{c}^{*}\) when no confusion caused. Each soft open set is soft \(c\)-open but not the reverse (by Lemma 4.2 (3)). This proves that \((X,\tilde{J}_{c},\Omega)\) is finer than \((X,\tilde{J},\Omega)\)._ Notice that crisp (general) cluster topologies are called ideal topologies (see Jankovic and Hamlett (1990)). The Lemma 11 and Theorem 6 in Azzam et al. (2022) guarantee that the cluster soft topology is equivalent soft \(\tilde{J}^{*}\)-topology constructed differently in Kandil et al. (2014). **Theorem 4.5**: _Let \((X,\tilde{J},\Omega)\) be a soft topological space and \(\tilde{I}\in I_{\Omega}(X)\). Then_ \[\mathcal{B}=\{(R,\Omega)-(A,\Omega)\colon(R,\Omega)\in\tilde{J},(A,\Omega)\in \tilde{J}\}\] forms a soft base for the cluster soft topology Proof.: First we show that \(\mathcal{B}\) covers \(\tilde{X}\). Let \(x_{\alpha}\in P_{\alpha}(X)\) and let \((U,\Omega)\in\tilde{\mathcal{T}}_{\mathrm{c}}\) that contains \(x_{\alpha}\). Then \(\tilde{X}-(U,\Omega)\) is soft \(\mathrm{c}\)-closed and so \(\mathrm{c}(\tilde{X}-(U,\Omega))\cong\tilde{X}-(U,\Omega)\). This implies that \((U,\Omega)\cong\tilde{X}-\mathrm{c}(\tilde{X}-(U,\Omega))\) and \(x_{\alpha}\notin\mathrm{c}(\tilde{X}-(U,\Omega))\). Then there exists \((W,\Omega)\in\tilde{\mathcal{T}}(x_{\alpha})\) such that \((W,\Omega)\,\widetilde{\Omega}\,[\tilde{X}-(U,\Omega)]\in\tilde{I}\). If we set \((A,\Omega)=(W,\Omega)\,\widetilde{\Omega}\,[\tilde{X}-(U,\Omega)]\), then we will have \(x_{\alpha}\in(W,\Omega)-(A,\Omega)\cong(U,\Omega)\). The proof of Theorem 3.4 in Kandil et al. (2014) showed that \(\mathcal{B}\) is closed under finite intersections. Thus, the result is proved. **Lemma 4.6**: _If \((X,\tilde{\mathcal{T}}_{\mathrm{c}}(\tilde{I}),\Omega)\) is a cluster soft topological space, then_ \[\tilde{\mathcal{T}}_{\mathrm{c}}(\tilde{I})(\alpha)=\tilde{\mathcal{T}}( \alpha)(\tilde{I}(\alpha)),\] _where \(\tilde{\mathcal{T}}(\alpha)(\tilde{I}(\alpha))\) is a crisp cluster (ideal) topology with respect to the crisp topoloft \(\tilde{\mathcal{T}}(\alpha)\) and crisp ideal \(\tilde{I}(\alpha)=\{A(\alpha)\colon(A,\Omega)\in\tilde{I}\}\), and \(\tilde{\mathcal{T}}_{\mathrm{c}}(\tilde{I})(\alpha)=\{R(\alpha)\colon(R, \Omega)\in\tilde{\mathcal{T}}_{\mathrm{c}}(\tilde{I})\}\) for each \(\alpha\in\Omega\)._ Proof.: Let \(\alpha\in\Omega\) and let \(R(\alpha)\in\tilde{\mathcal{T}}_{\mathrm{c}}(\tilde{I})(\alpha)\). Then \((R,\Omega)\in\tilde{\mathcal{T}}_{\mathrm{c}}(\tilde{I})\). By Theorem 4.5, \[(R,\Omega)=\widetilde{\bigcup}_{j\in I}\big{[}(G_{j},\Omega)-\big{(}A_{j}, \Omega\big{)}\big{]},\] where \((G_{j},\Omega)\in\tilde{\mathcal{T}}\) and \((A_{j},\Omega)\in\tilde{I}\). This means that \[R(\alpha)=\widetilde{\bigcup}_{j\in I}[G_{j}(\alpha)-A_{j}(\alpha)],\] where \(G_{j}(\alpha)\in\tilde{\mathcal{T}}(\alpha)\) and \(A_{j}(\alpha)\in\tilde{I}(\alpha)\). Since \(\tilde{\mathcal{T}}(\alpha)\) and \(\tilde{I}(\alpha)\) are crips topology and ideal for each \(\alpha\) (by Lemmas 2.5 and 2.10), therefore, by Theorem 3.1 in Jankovic and Hamlett (1990), each \(G_{j}(\alpha)-A_{j}(\alpha)\) is basic open set in crisp cluster topology \(\tilde{\mathcal{T}}(\alpha)(\tilde{I}(\alpha))\). Thus, \(R(\alpha)\) is open in \(\tilde{\mathcal{T}}(\alpha)(\tilde{I}(\alpha))\) and so, \(\tilde{\mathcal{T}}_{\mathrm{c}}(\tilde{I})(\alpha)\subseteq\tilde{\mathcal{ T}}(\alpha)(\tilde{I}(\alpha))\). The reverse of the inclusion can be proved by a similar technique. Hence, \(\tilde{\mathcal{T}}_{\mathrm{c}}(\tilde{I})(\alpha)=\tilde{\mathcal{T}}(\alpha)( \tilde{I}(\alpha))\). Nevertheless, there might exist a family of soft sets (need not be a soft topology) and a soft ideal on a set \(X\) for which their crisp parts generate cluster topologies, as shown in the following illustration: **Example 4.7**: _Let \(X=\{x,y,z\}\) and \(\Omega=\{\alpha,\beta\}\). Consider the soft ideal \(\tilde{I}=\{\tilde{\mathcal{P}},(A_{1},\Omega),(A_{2},\Omega),\)\(...,(A_{15},\Omega)\}\), where_ \[(A_{1},\Omega)=\{(\alpha,\emptyset),(\beta,\{x\})\},\] \[(A_{2},\Omega)=\{(\alpha,\emptyset),(\beta,\{z\})\},\] \[(A_{3},\Omega)=\{(\alpha,\emptyset),(\beta,\{x,z\})\},\] \[(A_{4},\Omega)=\{(\alpha,\{y\}),(\beta,\emptyset)\},\] \[(A_{5},\Omega)=\{(\alpha,\{y\}),(\beta,\{x\})\},\] \[(A_{6},\Omega)=\{(\alpha,\{y\}),(\beta,\{z\})\},\] \[(A_{7},\Omega)=\{(\alpha,\{y\}),(\beta,\{x,z\})\},\] \[(A_{8},\Omega)=\{(\alpha,\{z\}),(\beta,\emptyset)\},\] \((A_{9},\Omega)=\{(\alpha,\{z\}),(\beta,\{x\})\}\), \((A_{10},\Omega)=\{(\alpha,\{z\}),(\beta,\{z\})\}\), \((A_{11},\Omega)=\{(\alpha,\{z\}),(\beta,\{x,z\})\}\), \((A_{12},\Omega)=\{(\alpha,\{y,z\}),(\beta,\emptyset)\}\), \((A_{13},\Omega)=\{(\alpha,\{y,z\}),(\beta,\{x\})\}\), \((A_{14},\Omega)=\{(\alpha,\{y,z\}),(\beta,\{z\})\},and\) \((A_{15},\Omega)=\{(\alpha,\{y,z\}),(\beta,\{x,z\})\}\). Let \(T=\{\widetilde{\Phi},(R_{1},\Omega),(R_{2},\Omega),(R_{3},\Omega),(R_{4}, \Omega),\widetilde{X}\}\) be family of soft sets, where \((R_{1},\Omega)=\{(\alpha,\{x\}),(\beta,\{y\})\}\), \((R_{2},\Omega)=\{(\alpha,\{x,y\}),(\beta,\{x,y\})\}\), \((R_{3},\Omega)=\{(\alpha,\{x,y\}),(\beta,\{y,z\})\},and\) \((R_{4},\Omega)=\{(\alpha,\{x,z\}),(\beta,\{x,y\})\}\). One can easily conclude that \(T\) is not a soft topology, which means that we cannot generate a cluster soft topology from \(T\) and \(\tilde{I}\). On the other hand, both of \(T(\alpha)=\{\emptyset,\{x\},\{x,y\},\{x,z\},X\}\) and \(T(\beta)=\{\emptyset,\{y\},\{x,y\},\{y,z\},X\}\) define crisp topologies with respect to the respective ideals \(\tilde{I}(\alpha)=\{\emptyset,\{y\},\{z\},\{y,z\}\}\) and \(\tilde{I}(\beta)=\{\emptyset,\{x\},\{z\},\{x,z\}\}\) such that \(T(\alpha)=T_{\rm c}(\alpha)\) and \(T(\beta)=T_{\rm c}(\beta)\). However, one can always construct a cluster soft topology by the following scheme: **Remark 4.8**: _Let \((X,\tilde{\cal F},\Omega)\) be a soft topological space and let \(\tilde{I}\) be a soft ideal. By Lemma 2.9, \((\Omega\times X,{\cal T})\) and \(I=\{Gr(A)\colon(A,\Omega)\in\tilde{I}\}\) are respectively a crisp topological space and an ideal on \(\Omega\times X\). Furthermore, \((X,\tilde{\cal F}_{\rm c},\Omega)\) forms a cluster soft topological space in which \({\rm c}_{(\tilde{\cal F},\Omega)}(R,\Omega)\) is a soft set given by a set-valued mapping which graph is equal to \((D_{({\cal T},I)}(Gr(R))\), where \(D_{({\cal T},I)}(A)=\{(\alpha,x)\in\Omega\times X\colon A\cap U\not\in I,( \alpha,x)\in U\in{\cal T}\},A\subset\Omega\times X\)_(see Definition 2.2, Jankovic and Hamlett (1990))_. This means that many results concerning cluster soft topologies can be derived from corresponding results from the theory of ideal topological spaces._ Here, we shall acknowledge that the one-to-one correspondence between soft and crisp topologies mentioned in the preceding remark was suggested by one of the reviewers. **Proposition 4.9**: _Let \((X,\tilde{\cal F},\Omega)\) be a soft topological space and \(\tilde{I},\tilde{J}\in I_{\Omega}(X)\). For any \((R,\Omega)\in S_{\Omega}(X)\), we have_ \[{\rm c}_{(\tilde{\cal F},\Omega\tilde{J})}(R,\Omega)={\rm c}_{(\tilde{\cal F }(J),\tilde{J})}(R,\Omega)\,\widehat{\Gamma}\,\,{\rm c}_{(\tilde{\cal F}_{\rm c }(J),\tilde{J})}(R,\Omega).\] _Proof_. Given \(x_{\alpha}\in P_{\Omega}(X)\) and suppose \(x_{\alpha}\notin{\rm c}_{(\tilde{\cal F},\tilde{J}\tilde{J})}(R,\Omega)\). Then there is \((U,\Omega)\in\tilde{\cal F}(x_{\alpha})\) such that \((R,\Omega)\,\widehat{\Gamma}\,(U,\Omega)\in\tilde{I}\,\widehat{\emptyset}\,\). Set \((R,\Omega)\,\widehat{\Gamma}\,(U,\Omega)=(A,\Omega)\,\widehat{\Omega}\,(B,\Omega)\) for some \((A,\Omega),(B,\Omega)\) in \(\tilde{I},\tilde{J}\) respectively. Since \(\tilde{I}\) is closed under soft subsets, so we can assume that \((A,\Omega)\,\widehat{\Gamma}\,(B,\Omega)=\widetilde{\Phi}\). Therefore, \([(R,\Omega)\,\widehat{\Gamma}\,(U,\Omega)]-(A,\Omega)=(B,\Omega)\) and \([(R,\Omega)\,\widehat{\Gamma}\,(U,\Omega)]-(B,\Omega)=(A,\Omega)\). This means \([(R,\Omega)\stackrel{{\sim}}{{\cap}}(U,\Omega)]-(A,\Omega)\in\bar{J}\) and \([(R,\Omega)\stackrel{{\sim}}{{\cap}}(U,\Omega)]-(B,\Omega)\in\bar{I}\). Then we have \(x_{\alpha}\not\in\mbox{\rm c}_{(\vec{\tau}(\bar{\Omega}),\bar{J})}(R,\Omega)\) or \(x_{\alpha}\not\in\mbox{\rm c}_{(\vec{\tau}(\bar{\Omega}),\bar{J})}(R,\Omega)\) as either \(x_{\alpha}\in\bar{I}\) or \(x_{\alpha}\in\bar{J}\). Thus, \[\mbox{\rm c}_{(\vec{\tau}(\bar{\Omega}),\bar{J})}(R,\Omega)\stackrel{{ \sim}}{{\cap}}\mbox{\rm c}_{(\vec{\tau}(\bar{\Omega}),\bar{J})}(R,\Omega) \stackrel{{\sim}}{{\subseteq}}\mbox{\rm c}_{(\vec{\tau},\bar{ \Omega})\bar{J})}(R,\Omega).\] Conversely, if \(x_{\alpha}\not\in\mbox{\rm c}_{(\vec{\tau}(\bar{\Omega}),\bar{J})}(R,\Omega)\), then there exist \((V,\Omega)\in\vec{\tau}(x_{\alpha})\) and \((A,\Omega)\in\bar{I}\) such that \([(V,\Omega)-(A,\Omega)]\stackrel{{\sim}}{{\cap}}(R,\Omega)\in\bar{J}\). Since \(\bar{I}\) is closed under soft subsets, we let \((A,\Omega)\stackrel{{\sim}}{{\subseteq}}(R,\Omega)\). If \((B,\Omega)=[(V,\Omega)-(A,\Omega)]\stackrel{{\sim}}{{\cap}}(R, \Omega)\), then \((V,\Omega)\stackrel{{\sim}}{{\cap}}(R,\Omega)=(A,\Omega)\stackrel{{ \sim}}{{\cup}}(B,\Omega)\in\bar{I}\stackrel{{\sim}}{{\bar{\emptyset }}}\bar{J}\) and hence \(x_{\alpha}\not\in\mbox{\rm c}_{(\vec{\tau},\bar{\Omega})\bar{J})}(R,\Omega)\). This shows that \(\mbox{\rm c}_{(\vec{\tau},\bar{\Omega})\bar{J})}(R,\Omega)\stackrel{{ \sim}}{{\subseteq}}\mbox{\rm c}_{(\vec{\tau}(\bar{\Omega}),\bar{J})}(R,\Omega)\). Symmetrically, we can obtain that \(\mbox{\rm c}_{(\vec{\tau},\bar{\Omega})\bar{J})}(R,\Omega)\stackrel{{ \sim}}{{\subseteq}}\mbox{\rm c}_{(\vec{\tau}(\bar{J}),\bar{J})}(R,\Omega)\). In conclusion, we get \(\mbox{\rm c}_{(\vec{\tau},\bar{\Omega})\bar{J})}(R,\Omega)\stackrel{{ \sim}}{{\subseteq}}\mbox{\rm c}_{(\vec{\tau}(\bar{\Omega}),\bar{J})}(R,\Omega)\stackrel{{ \sim}}{{\cap}}\mbox{\rm c}_{(\vec{\tau}(\bar{\Omega}),\bar{J})}(R,\Omega)\). The proof ends. The next result illustrates that by constructing the cluster soft topology twice, you will get the first obtained cluster soft topology. **Theorem 4.10**: _Let \((X,\vec{\cal F},\Omega)\) be a soft topological space and \(\bar{I}\in I_{\Omega}(X)\). Then \(\vec{\cal F}_{\rm cc}=\vec{\cal F}_{\rm cc}\)._ _Proof_. By assuming \(\bar{I}=\bar{J}\) in Proposition 4.9, we obtain that a soft set \((R,\Omega)\) is soft \(c\)-closed iff it is soft \(cc\)-closed. This implies that \(\vec{\cal F}_{\rm cc}=\vec{\cal F}_{\rm cc}\), where \(\vec{\cal F}_{\rm cc}\) is the cluster soft topology of \(\vec{\cal F}_{\rm cc}(\bar{I})\). Next, we present a few illustrations highlighting the significance of cluster soft topologies. **Example 4.11**: _Given any soft topological space \((X,\vec{\cal F},\Omega)\) and any soft ideal \(\bar{I}\in I_{\Omega}(X)\). If \(\bar{I}\) is trivial (i.e., \(\bar{I}=\{\vec{\Phi}\}\)), by Remark 3.2 (1), \(\mbox{\rm c}(R,\Omega)=cl(R,\Omega)\) which implies that \(\vec{\cal F}_{\rm cc}=\vec{\cal F}\)._ **Example 4.12**: _Let \((X,\vec{\cal F},\Omega)\) be a soft topological space and let \(\bar{I}\in I_{\Omega}(X)\). If \(\bar{I}=S_{\Omega}(X)\), then \(\mbox{\rm c}(R,\Omega)=\vec{\Phi}\) which implies that each \((R,\Omega)\in S_{\Omega}(X)\) is soft \(c\)-closed. Therefore, \(\vec{\cal F}_{\rm cc}=\vec{\cal F}_{\rm dis}\), where \(\vec{\cal F}_{\rm dis}\) is the soft discrete topology._ **Example 4.13**: _Let \((X,\vec{\cal F}_{\rm ind},\Omega)\) be the indiscrete soft topological space and let \(\bar{I}\in I_{\Omega}(X)\), where \(\bar{I}=\{(A,\Omega)\colon(A,\Omega)\in S_{\Omega}(X),(A,\Omega)\mbox{ is finite}\}\). If \((R,\Omega)\in\bar{I}\), then \(\mbox{\rm c}(R,\Omega)=\vec{\Phi}\). If \((R,\Omega)\notin\bar{I}\), then \(\mbox{\rm c}(R,\Omega)=\vec{X}\). Consequently, each finite soft set is soft \(c\)-closed together with \(\vec{X}\). Therefore, \(\vec{\cal F}_{\rm cc}=\vec{\cal F}_{\rm cc}\), where \(\vec{\cal F}_{\rm cc}\) is the soft co-finite topology (c.f., Theorem 5.1)._ **Example 4.14**: _Let \((X,\vec{\cal F}_{\rm ind},\Omega)\) be the indiscrete soft topological space, \(x_{\alpha}\in S_{\Omega}(X)\), and \(\bar{I}\in I_{\Omega}(X)\). Suppose \(\bar{I}=\{(A,\Omega)\colon(A,\Omega)\in S_{\Omega}(X),x_{\alpha}\not\in(A, \Omega)\}\). If \((R,\Omega)\in\bar{I}\), then \(\mbox{\rm c}(R,\Omega)=\vec{\Phi}\). If \((R,\Omega)\not\in\bar{I}\), then \(\mbox{\rm c}(R,\Omega)=\vec{X}\). Therefore, each soft set excluding \(x_{\alpha}\) is soft \(c\)-closed together with \(\vec{X}\). Therefore, \(\vec{\cal F}_{\rm cc}=\vec{\cal F}_{\rm inc}\), where \(\vec{\cal F}_{\rm inc}=\{(A,\Omega)\colon(A,\Omega)\in S_{\Omega}(X),x_{\alpha} \in(A,\Omega)\}\stackrel{{\sim}}{{\cup}}\{\bar{\Phi}\}\), (it is called included soft point topology in Example 2 in Al Ghour and Ameen (2022))._ ## 5 Characterizations of soft \(\mbox{\rm c}\)-closed sets In this section, we characterize soft \(\mbox{\rm c}\)-closed sets in terms of some other classes of soft sets when the underlying soft topology or the related soft ideal possesses certain properties. **Theorem 5.1**: _Let \(\tilde{\mathcal{T}}\in T_{\alpha}(X)\) and \(\tilde{I}=\{(A,\Omega)\colon(A,\Omega)\in S_{\alpha}(X),(A,\Omega)\) is finite\(\}\). Then each soft c-closed set is soft closed iff each finite soft set is soft closed._ _Proof._ The first part is easy as each finite soft set in \(\tilde{I}\), by Lemma 4.2 (2), all finite soft sets are c-closed and so they are soft closed by the assumption. Conversely, suppose each finite soft set is soft closed and \(x_{\alpha}\in P_{\Omega}(X)\). Let \((R,\Omega)\) be a soft c-closed set. If \(x_{\alpha}\notin(R,\Omega)\), then by Remark 3.2 (2), \(x_{\alpha}\notin\mathcal{D}(R,\Omega)\). Therefore, there exists \((U,\Omega)\in\tilde{\mathcal{T}}(x_{\alpha})\) such that \((R,\Omega)\ \widehat{\Omega}\ (U,\Omega)\) is a finite soft set. Set \((Q,\Omega)=(R,\Omega)\ \widehat{\Omega}\ (U,\Omega)\). Since \(x_{\alpha}\notin(Q,\Omega)\), so the soft set \((V,\Omega)=(U,\Omega)-(Q,\Omega)\in\tilde{\mathcal{T}}(x_{\alpha})\) and \((R,\Omega)\ \widehat{\Omega}\ (V,\Omega)=\widehat{\Phi}\). This means that \(x_{\alpha}\notin cl(R,\Omega)\). Hence, \((R,\Omega)\) is a soft closed set. **Corollary 5.2**: _Let \(\tilde{\mathcal{T}}\in T_{\alpha}(X)\) and \(\tilde{I}=\{(A,\Omega)\colon(A,\Omega)\in S_{\alpha}(X),(A,\Omega)\) is finite\(\}\). Then \(\tilde{\mathcal{T}}=\tilde{\mathcal{T}}_{\mathrm{c}}\) iff the complement of each finite soft set is soft open._ **Definition 5.3**: _Given \(\tilde{\mathcal{T}}\in T_{\alpha}(X)\) and \(\tilde{I}\in I_{\Omega}(X)\). Then \(\tilde{I}\) is called a soft adherent ideal if \((R,\Omega)-\mathrm{c}(R,\Omega)\in\tilde{I}\) for each \((R,\Omega)\in S_{\Omega}(X)\)._ The next example illustrates that a soft ideal of finite soft sets need not be soft adherent. **Example 5.4**: _Let \((X,\tilde{\mathcal{T}}_{dis},\Omega)\) be the discrete soft topological space, where \(X\) is infinite, and let \(\tilde{I}\) be a soft ideal defined by \(\tilde{I}=\{(A,\Omega)\colon(A,\Omega)\in S_{\Omega}(X),(A,\Omega)\) is finite\(\}\). Then \(\mathrm{c}(R,\Omega)=\tilde{\Phi}\) for all \((R,\Omega)\in S_{\Omega}(X)\). Therefore, for each infinite \((R,\Omega)\in S_{\Omega}(X)\), \((A,\Omega)-\mathrm{c}(A,\Omega)\notin\tilde{I}\). Thus, \(\tilde{I}\) is not soft adherent._ However, the following result informs us that a soft \(\sigma\)-ideal in certain soft topological spaces is soft adherent. **Theorem 5.5**: _If \(\tilde{\mathcal{T}}\in T_{\Omega}(X)\) has a countable soft base, then each soft \(\sigma\)-ideal on \(X\) is soft adherent._ _Proof._ Let \(\tilde{I}\) be a soft \(\sigma\)-ideal on \(X\) and let \((R,\Omega)\in S_{\alpha}(X)\). For each \(x_{\alpha}\in(R,\Omega)-\mathrm{c}(R,\Omega)\) we associate a \((U_{x_{\alpha}},\Omega)\in\tilde{\mathcal{T}}(x_{\alpha})\) such that \((R,\Omega)\ \widehat{\Omega}\ (U_{x_{\alpha}},\Omega)\in\tilde{I}\). Set \(\mathcal{U}=\{(U_{x_{\alpha}},\Omega)\colon x_{\alpha}\in(R,\Omega)-\mathrm{c} (R,\Omega)\}\). Since \(\tilde{\mathcal{T}}\) has a countable soft base, so there is a countable soft subset \((D,\Omega)\) of \((R,\Omega)-\mathrm{c}(R,\Omega)\) such that \(\tilde{\mathcal{U}}\mathcal{U}=\)\(\tilde{U}_{x_{\alpha}\in(D,\Omega)}\ (U_{x_{\alpha}},\Omega)\). Therefore, we have \[(R,\Omega)-\mathrm{c}(R,\Omega)\cong\widetilde{\mathcal{U}}_{x_{\alpha}\in(R, \Omega)-\mathrm{c}(R,\Omega)}\ [(R,\Omega)\ \widehat{\Omega}\ (U_{x_{\alpha}},\Omega)]=\widetilde{\mathcal{U}}_{x_{\alpha}\in(D,\Omega)}\ [(R,\Omega)\ \widehat{\Omega}\ (U_{x_{\alpha}},\Omega)].\] Since \(\widetilde{\mathcal{U}}_{x_{\alpha}\in(D,\Omega)}\ [(R,\Omega)\ \widehat{\Omega}\ (U_{x_{ \alpha}},\Omega)]\in\tilde{I}\), then \((R,\Omega)-\mathrm{c}(R,\Omega)\in\tilde{I}\) and hence \(\tilde{I}\) is soft adherent. **Theorem 5.6**: _Let \(\tilde{\mathcal{T}}\in T_{\Omega}(X)\) and let \(\tilde{I}\in I_{\Omega}(X)\) be soft adherent. A soft set \((R,\Omega)\) over \(X\) is soft c-closed iff it can be written as a disjoint union of a soft closed set and an element in \(\tilde{I}\)._ _Proof._ Suppose \((R,\Omega)\) is soft c-closed. Set \((Q,\Omega)=\mathrm{c}(R,\Omega)\). By Lemma 3.5 (2), \((Q,\Omega)\) is soft closed. If \((A,\Omega)=(Q,\Omega)-\mathrm{c}(R,\Omega)\), by assumption, \((A,\Omega)\in\tilde{I}\). Thus, \((R,\Omega)=(Q,\Omega)\ \widetilde{\mathcal{U}}\ (A,\Omega)\) is the required form. Conversely, suppose such that representation exists. Since each element \((A,\Omega)\) in \(\tilde{I}\) is soft c-closed and each soft closed set \((Q,\Omega)\) is soft c-closed, so \((Q,\Omega)\ \widetilde{\mathcal{U}}\ (A,\Omega)\) is soft c-closed (_c.f._, Theorem 4.3 in Kandil et al. (2014)). The following is a direct consequences of the above result: **Corollary 5.7**: _Let \(\tilde{\mathcal{F}}\in T_{\Omega}(X)\) and let \(\tilde{I}\in I_{\Omega}(X)\) be soft adherent. A soft set \((W,\Omega)\) is soft c-open iff it has the form \((W,\Omega)=(U,\Omega)-(A,\Omega)\), where \((U,\Omega)\in\tilde{\mathcal{F}}\) and \((A,\Omega)\in\tilde{I}\)._ **Theorem 5.8**: _Let \((X,\tilde{\mathcal{T}},\Omega)\) be a soft topological space of a countable soft base \(\mathcal{B}\) and let \(\tilde{I}\) be a soft \(\sigma\)-ideal on \(\tilde{X}\). Then \(\tilde{\mathcal{F}}_{\varsigma}=\{(R,\Omega)-(A,\Omega)\colon(R,\Omega)\in \tilde{\mathcal{F}},(A,\Omega)\in\tilde{I}\}\)._ _Proof_. Let \(\mathcal{B}=\{(R,\Omega)-(A,\Omega)\colon(R,\Omega)\in\tilde{\mathcal{T}},(A, \Omega)\in\tilde{I}\}\). We first show that \(\tilde{\mathcal{T}}_{\varsigma}\subseteq\mathcal{B}\). Let \((R,\Omega)\in\tilde{\mathcal{T}}_{\varsigma}\). Since \((X,\tilde{\mathcal{T}},\Omega)\) has a countable soft base, by Theorem 5.5, \(\tilde{I}\) is a soft adherent ideal. Then Corollary 5.7 guarantees that \((R,\Omega)=(U,\Omega)-(A,\Omega)\) for some \((U,\Omega)\in\tilde{\mathcal{T}}\) and \((A,\Omega)\in\tilde{I}\). This implies by Theorem 4.5 that \((R,\Omega)\in\mathcal{B}\). On the other hand, if \((R,\Omega)\in\mathcal{B}\), then \((R,\Omega)=(U,\Omega)-(A,\Omega)\), where \((U,\Omega)\in\tilde{\mathcal{T}}\) and \((A,\Omega)\in\tilde{I}\). By the same reason above, \((R,\Omega)\in\tilde{\mathcal{T}}_{\varsigma}\) and thus, \(\mathcal{B}\subseteq\tilde{\mathcal{T}}_{\varsigma}\). Hence, \(\tilde{\mathcal{T}}_{\varsigma}=\mathcal{B}\). **Definition 5.9**: _Let \((R,\Omega)\in S_{\Omega}(X)\), \(\tilde{\mathcal{T}}\in T_{\Omega}(X)\), and \(\tilde{I}\in I_{\Omega}(X)\). Then \((R,\Omega)\) is said to be_ 1. soft c-crowded if \((R,\Omega)\subseteq\varsigma(R,\Omega)\). 2. soft c-regular if \(\varsigma(R,\Omega)=(R,\Omega)\). **Theorem 5.10**: _Let \(\tilde{\mathcal{T}}\in T_{\Omega}(X)\) and \(\tilde{I}\in I_{\Omega}(X)\). If \(\tilde{I}\) is a soft adherent-ideal, then each soft set can be represented as a disjoint union of a soft c-crowded set and an element in \(\tilde{I}\)._ _Proof_. Given a soft set \((R,\Omega)\). Consider the following decomposition \[(R,\Omega)=[(R,\Omega)\ \widehat{\Omega}\ \varsigma(R,\Omega)]\ \widetilde{\Omega}\ [(R,\Omega)-\varsigma(R,\Omega)]. \tag{1}\] Since \(\tilde{I}\) is soft adherent, \((R,\Omega)-\varsigma(R,\Omega)\in\tilde{I}\). It remains to show that \((R,\Omega)\ \widehat{\Omega}\ \varsigma(R,\Omega)\) is a soft c-crowded set. Now, by Proposition 3.3 (1) and (4), we have \[(R,\Omega)\ \widehat{\Omega}\ \varsigma(R,\Omega)\subseteq\varsigma(R, \Omega)=\varsigma[(R,\Omega)\ \widehat{\Omega}\ \varsigma(R,\Omega)]\ \widetilde{\Omega}\ \varsigma(R,\Omega)-\varsigma(R,\Omega)]=\varsigma[(R,\Omega)\ \widehat{\Omega}\ \varsigma(R,\Omega)].\] Therefore, \((R,\Omega)\ \widehat{\Omega}\ \varsigma(R,\Omega)\) is a soft c-crowded set. Hence the proof. **Lemma 5.11**: _Let \((R,\Omega),(S,\Omega)\in S_{\Omega}(X)\), \(\tilde{\mathcal{T}}\in T_{\Omega}(X)\), and \(\tilde{I}\in I_{\Omega}(X)\). If \((R,\Omega)\) is soft c-regular and \((S,\Omega)\) is soft c-closed in \((R,\Omega)\), then \((R,\Omega)-(S,\Omega)\) is soft c-crowded._ _Proof_. Given \(x_{\alpha}\in P_{\Omega}(X)\) and set \((Q,\Omega)=(R,\Omega)-(S,\Omega)\). Suppose \(x_{\alpha}\in(Q,\Omega)\). Since \((Q,\Omega)\subseteq(R,\Omega)\subseteq\varsigma(R,\Omega)\), then for each \((U,\Omega)\in\tilde{\mathcal{T}}(x_{\alpha})\) such that \((U,\Omega)\ \widehat{\Omega}\ (R,\Omega)\notin\tilde{I}\). Since \(x_{\alpha}\notin(S,\Omega)\) and \((S,\Omega)\) is soft c-closed, so \(x_{\alpha}\notin\varsigma(S,\Omega)\). Therefore, there exists \((V,\Omega)\in\tilde{\mathcal{T}}(x_{\alpha})\) such that \((V,\Omega)\ \widehat{\Omega}\ (S,\Omega)\in\tilde{I}\). Set \((W,\Omega)=(U,\Omega)\ \widehat{\Omega}\ (V,\Omega)\). Also \((W,\Omega)\in\tilde{\mathcal{T}}(x_{\alpha})\). Now, we have \[(R,\Omega)\ \widehat{\Omega}\ (W,\Omega)=[(Q,\Omega)\ \widehat{\Omega}\ (W,\Omega)]\ \widetilde{\Omega}\ [(S,\Omega)\ \widehat{\Omega}\ (W, \Omega)].\] Since \((W,\Omega)\ \widehat{\Omega}\ (R,\Omega)\notin\tilde{I}\), we must have \((W,\Omega)\ \widehat{\Omega}\ (Q,\Omega)\notin\tilde{I}\). Hence, \(x_{\alpha}\in\varsigma(Q,\Omega)\) and thus \((R,\Omega)-(S,\Omega)\) is soft c-crowded. **Theorem 5.12**: _Let \(\tilde{\mathcal{T}}\in T_{\Omega}(X)\) and \(\tilde{I}\in I_{\Omega}(X)\). If \(\tilde{I}\) is a soft adherent-ideal, then each soft c-closed set can be uniquely represented as a disjoint union of a soft c-regular set and an element in \(\tilde{I}\)._ Proof.: Let \((R,\Omega)\) be soft c-closed. We first prove that such a representation exists and then show that it is unique. Consider, \[(R,\Omega)=[(R,\Omega)\ \widehat{\Omega}\ \mathfrak{c}(R,\Omega)]\ \widehat{\Omega}\ [(R,\Omega)-\mathfrak{c}(R,\Omega)].\] Since \(\ \tilde{I}\) is soft adherent, \((R,\Omega)-\mathfrak{c}(R,\Omega)\in\tilde{I}\). We now show that \((R,\Omega)\ \widehat{\Omega}\ \mathfrak{c}(R,\Omega)\) is a soft c-regular set. Since \((R,\Omega)\) is soft c-closed, then \(\ \mathfrak{c}(R,\Omega)\ \widehat{\subseteq}\ (R,\Omega)\). Therefore, by Proposition 3.3 (2), we have \[\mathfrak{c}((R,\Omega)\ \widehat{\Omega}\ \mathfrak{c}(R,\Omega)) \widehat{\subseteq}\ \mathfrak{c}(R,\Omega)=(R,\Omega)\ \widehat{\Omega}\ \mathfrak{c}(R,\Omega).\] The opposite of the inclusion is obtained from Theorem 5.10. Thus, \((R,\Omega)\ \widehat{\Omega}\ \mathfrak{c}(R,\Omega)\) is a soft c-regular set. Suppose \((R,\Omega)\) has two different representations. Namely, \((R,\Omega)=(L,\Omega)\ \ \widehat{\Omega}\ (A,\Omega)=(M,\Omega)\ \widehat{\Omega}\)\((B,\Omega)\), where \((L,\Omega),(M,\Omega)\) are disjoint soft c-regular and \((A,\Omega),(B,\Omega)\in\tilde{I}\) with \((A,\Omega)\ \widehat{\Omega}\ (B,\Omega)=\widehat{\Phi}\). Since \((L,\Omega)-[(L,\Omega)\ \widehat{\Omega}\ (M,\Omega)]\ \widehat{\subseteq}\ (B,\Omega)\) and \((B,\Omega)\in\tilde{I}\), by Lemma 5.11, we conclude \[(L,\Omega)-[(L,\Omega)\ \widehat{\Omega}\ (M,\Omega)]\ \widehat{\subseteq}\ \mathfrak{c}[(L,\Omega)-((L,\Omega)\ \widehat{\Omega}\ (M,\Omega))]\ \widehat{\subseteq}\ \mathfrak{c}(B,\Omega)=\widehat{\Phi}.\] Thus, \((L,\Omega)=(L,\Omega)\ \widehat{\Omega}\ (M,\Omega)\). By the same way, one can obtain \((M,\Omega)=(L,\Omega)\ \widehat{\Omega}\ (M,\Omega)\). Hence, \((L,\Omega)=(M,\Omega)\) and \((A,\Omega)=(B,\Omega)\) **Theorem 5.13**: _Let \(\tilde{\mathcal{F}}\in T_{\Omega}(X)\) and let \(\tilde{I}\in I_{\Omega}(X)\) be soft adherent. A soft set \((R,\Omega)\) over \(X\) is soft c-closed iff it can be written as a disjoint union of a soft c-regular set and an element in \(\tilde{I}\)._ Proof.: The first direction follows from Theorem 5.12. For the converse, if \((R,\Omega)=(Q,\Omega)\ \ \widehat{\Omega}\ (A,\Omega)\), where \((Q,\Omega)\) is soft c-regular and \((A,\Omega)\in\tilde{I}\). Then \[\mathfrak{c}(R,\Omega)=\mathfrak{c}(Q,\Omega)\ \widehat{\Omega}\ \mathfrak{c}(A, \Omega)=\mathfrak{c}(Q,\Omega)=(Q,\Omega)\ \widehat{\subseteq}\ (Q,\Omega)\ \widehat{\Omega}\ (A,\Omega)=(R,\Omega).\] This proves that \((R,\Omega)\) is soft c-closed. ## Conclusion In this paper, we have considered a cluster soft point as an extension or a unification of a soft closure and a soft limit point. The class of all cluster soft points of a soft set \((F,\Omega)\) is called a cluster soft set or soft local function of \((F,\Omega)\) in Kandil et al. (2014). The cluster set of crisp points was first studied by Vaidyanathaswamy (1944) and developed by Jankovic and Hamlett (1990). The concept of soft ideal played a significant role in determining the cluster soft set. Then we have defined cluster soft closed sets and shown that the system of all cluster soft open sets, which are the complements of cluster soft closed sets, constitutes a soft topology and is called a cluster soft topology. We have demonstrated that the cluster soft topology \(\tilde{\mathcal{F}}_{\mathfrak{c}}\) of the soft topology \(\tilde{\mathcal{F}}\) is finer than \(\tilde{\mathcal{F}}\) (i.e., \(\tilde{\mathcal{F}}\ \widehat{\subseteq}\ \tilde{\mathcal{F}}_{\mathfrak{c}}\)). In addition, we have established that the cluster soft topology \(\tilde{\mathcal{F}}_{\mathfrak{c}}\) of \(\tilde{\mathcal{F}}_{\mathfrak{c}}\) is similar to \(\tilde{\mathcal{F}}_{\mathfrak{c}}\). Several examples demonstrated that the cluster soft topologies are among the most natural elements in the lattice of soft topologies on a universal set. We have characterized the cluster soft closed sets in terms of some other types of soft sets. At the end, we shall remark that our results have built on the soft point theory given in (Nazmul and Samanta (2013); Xie (2015)). The obtained results are natural generalizations of those found in (Jankovic and Hamlett (1990); Vaidyanathaswamy (1944)), according to Theorem 1 in Terepeta (2019). As a piece of future work, one can study the concepts of separation axioms, compactness, connectedness, etc, in cluster soft topological spaces. It is also possible to define and investigate some weak or strong classes of cluster soft open sets after introducing the soft \(\,\mathfrak{c}\)-interior and the soft \(\,\mathfrak{c}\)-closure of a soft set. ## Compliance with ethical standards ### Funding Not applicable. #### Availability of data andmaterials Not applicable. #### Conflict of interest The authors declare that they have no competing interests. #### Human and animal rights statement This article does not contain any studies with human participants performed by any of the authors.
2310.20308
A physics-informed GAN Framework based on Model-free Data-Driven Computational Mechanics
Model-free data-driven computational mechanics, first proposed by Kirchdoerfer and Ortiz, replace phenomenological models with numerical simulations based on sample data sets in strain-stress space. In this study, we integrate this paradigm within physics-informed generative adversarial networks (GANs). We enhance the conventional physics-informed neural network framework by implementing the principles of data-driven computational mechanics into GANs. Specifically, the generator is informed by physical constraints, while the discriminator utilizes the closest strain-stress data to discern the authenticity of the generator's output. This combined approach presents a new formalism to harness data-driven mechanics and deep learning to simulate and predict mechanical behaviors.
Kerem Ciftci, Klaus Hackl
2023-10-31T09:33:03Z
http://arxiv.org/abs/2310.20308v1
# A physics-informed GAN Framework based on Model-free Data-Driven Computational Mechanics ###### Abstract Model-free data-driven computational mechanics, first proposed by Kirchdoerfer and Ortiz, replace phenomenological models with numerical simulations based on sample data sets in strain-stress space. In this study, we integrate this paradigm within physics-informed generative adversarial networks (GANs). We enhance the conventional physics-informed neural network framework by implementing the principles of data-driven computational mechanics into GANs. Specifically, the generator is informed by physical constraints, while the discriminator utilizes the closest strain-stress data to discern the authenticity of the generator's output. This combined approach presents a new formalism to harness data-driven mechanics and deep learning to simulate and predict mechanical behaviors. keywords: Model-free Data-Driven, Generative Adversarial Networks, Data-Driven Computing, Physics-informed Neural Networks + Footnote †: journal: Computer Methods in Applied Mechanics and Engineering November 1, 2023 ## 1 Introduction The simulation of boundary value problems typically contains two equations: conservation and constitutive laws. While conservation laws are derived from universal principles, constitutive laws are usually obtained by fitting model parameters to given strain-stress data [1]. Nevertheless, material modeling can be ill-posed and adds uncertainties to the solutions, particularly in highly complex systems. The model-free data-driven method, introduced by Kirchdoerfer and Ortiz [2], bypasses the step of material modeling, incorporating experimental data directly into the numerical simulations of boundary-value problems. The data-driven scheme bypasses the empirical material modeling step by computing the closest point in the material data set consistent with the problem's compatibility and equilibrium condition. Consequently, it provides an alternative formulation of the classical initial-boundary-value problem based on nearest-neighbor clustering. The approach has been fine-tuned for diverse applications: from non-linear elasticity [2; 3; 4; 5; 6] to dynamics [7] and finite strain [8]. It's also been adapted for material data identification [9], non-local mechanics [10], electro-mechanical problems [11], homogenization schemes [12], and model-driven coupling [13]. Ibanez et al. [14; 15] refined the approach using a manifold learning method that maps data into a lower-dimensional space to use the locally linear embeddings. Eggersmann et al. [16] presented a second-order data-driven approach that uses tensor voting [17] to obtain point-wise tangent space, enabling the search for additional states close to the original data. For inelastic boundary value problems, Eggersmann et al. [18] include local histories in the data set to investigate materials with memory. Karapiperis et al. [19] have also suggested a variation of the scheme, considering multi-scale modeling. In addition, we recently developed a paradigm incorporating the tangent space into the distance-minimizing data-driven formulation and classifies the underlying data structure into subsets according to various material behaviors [20]. The framework features a parametrization of the material history and an optimal sampling of the mechanical system's state space. The paradigm's dependence on the nearest-neighbor clustering of data points proposes research areas in machine-learning methods, particularly Artificial Neural Networks (ANNs), that are known to approximate any continuous function for appropriate network parameters [21; 22]. The flexibility and quality of neural networks led to success in a wide range of applications, e.g., image recognition [23], language processing [24], or generative modeling [25; 26]. An extension to neural networks is physics-informed deep learning, successfully used in solving physical-related problems such as fluid mechanics [27; 28], aerodynamics [29; 30], shell structures [31] or material science [32; 33]. Physics-Informed Neural Networks (PINNs) can be trained to fulfill training data and learn optimal solutions for allocated physics-governing equations by specifying appropriate loss functions [34; 35]. The physics-based loss competes against a data-based loss, which is needed to provide fundamental knowledge of the system. Thus, partial differential equations act as additional constraints during network training, resulting in a multi-objective optimization problem. Optimizing data and physics give physics-informed neural networks flexibility in solving forward and inverse problems [28, 29, 30, 36, 37, 38]. The trade-off between the individual losses can be influenced using hyper-parameter [39, 40, 41]. For example, adaptive activation functions [42, 43], or manually weighted losses [44], can improve the quality of the neural network for specific problems. Another approach to overcome the local convergence issue due to global approximation is the usage of adaptive training strategies and domain decomposition [45]. This investigation combines the model-free data-driven approach with Generative Adversarial networks (GANs). In machine learning, GANs have emerged as a powerful tool consisting of two neural networks - the generator, which creates data, and the discriminator, which evaluates the authenticity of the generated data. Through their adversarial game, GANs are adept at generating high-fidelity data, often indistinguishable from actual data [26]. An extension is the integration of physics-informed neural networks with the GAN structure. For instance, the pursuit of robust uncertainty quantification within the framework of PINNs has led to recent methodologies. The PIG-GAN framework [46] harnesses the capabilities of a physics-informed generator to address adversarial uncertainty. On the other hand, the PID-GAN approach [47] uses a physics-informed discriminator, carving out a distinct avenue to achieve reliable uncertainty quantification while maintaining fidelity to the governing physics. Another stride in this direction is the DeqGAN, which offers a unique perspective on PINNs by learning the loss function via generative adversarial networks. This methodology provides a robust avenue for solving the challenges traditionally associated with defining appropriate loss functions for PINNs [48]. In our approach, the generator is a physics-informed neural network, and the discriminator employs the closest strain-stress data to evaluate the authenticity of the generator's results. This synergized methodology matches model-free data-driven computational mechanics and deep learning principles, aspiring to more accurately simulate and predict intricate mechanical behaviors. Section 2 provides a general setting by introducing the definitions and derivation of the distance-minimizing data-driven computing method based on [4]. Section 3 introduces the framework of artificial neural networks and generative adversarial networks. In addition, we propose using a physics-informed GAN to solve the distance-minimizing data-driven problem. Section 4 exhibits the performance of the proposed method using a numerical example involving a non-linear elastic in-plane boundary value problem. Finally, Section 5 summarizes the results and suggests future research subjects. ## 2 Model-free Data-driven setting The following will summarize the classical data-driven computational mechanics method for the reader's convenience based on the definitions and formulations in [4]. We consider an elastic body \(\Omega\subset\mathbb{R}^{d}\) whose internal states are defined by displacement field \(\mathbf{u}:\Omega\to\mathbb{R}^{d}\) and the compatibility and equilibrium conditions \[\begin{split}&\mathbf{\varepsilon}(\mathbf{x})-\nabla^{\mathrm{sym}} \mathbf{u}(\mathbf{x})=\mathbf{0},\ \ \ \text{in}\ \Omega,\\ &\nabla\cdot\mathbf{\sigma}(\mathbf{x})-\mathbf{f}(\mathbf{x})=\mathbf{0}, \ \ \ \text{in}\ \Omega,\end{split} \tag{1}\] and boundary conditions \[\begin{split}&\mathbf{u}(\mathbf{x})=\mathbf{g}(\mathbf{x}),\ \ \ \ \ \ \ \ \ \ \ \ \ \text{on}\ \Gamma_{D},\\ &\mathbf{\sigma}(\mathbf{x})\cdot\mathbf{n}(\mathbf{x})=\mathbf{t}(\mathbf{x }),\ \ \text{on}\ \Gamma_{N},\end{split} \tag{2}\] where \(\mathbf{\varepsilon}:\Omega\to\mathbb{R}^{d\times d}_{\mathrm{sym}}\) is the strain field and \(\mathbf{\sigma}:\Omega\to\mathbb{R}^{d\times d}_{\mathrm{sym}}\) is the stress field. The boundary \(\Gamma\) of the domain \(\Omega\) is defined by the Dirichlet (\(\Gamma_{D}\)) and Neumann (\(\Gamma_{N}\)) with \(\Gamma=\Gamma_{D}\cup\Gamma_{N}\) and \(\Gamma_{D}\cap\Gamma_{N}=\emptyset\). In addition, \(\mathbf{f}:\Omega\to\mathbb{R}^{d}\) is the body force, and \(\mathbf{g},\mathbf{t},\mathbf{n}:\Gamma\to\mathbb{R}^{d}\) define the boundary displacement, applied traction and outer normal, respectively. We define \(Z_{\mathrm{loc}}\subset\mathbb{R}^{d\times d}_{\mathrm{sym}}\times\mathbb{R}^ {d\times d}_{\mathrm{sym}}\) as the local phase space consisting of pairs \(\mathbf{z}(\mathbf{x})=(\mathbf{\varepsilon}(\mathbf{x}),\mathbf{\sigma}(\mathbf{x}))\) describing the local state of the system at material point \(\mathbf{x}\). The global phase space \(Z\) is defined as the collection of the state functions, i.e. \[Z=\{\mathbf{z}\,:\,\mathbf{z}\in Z_{\mathrm{loc}}\}. \tag{3}\] The data-driven distance-minimization problem, introduced by [2], reads \[\operatorname*{arg\,min}_{\mathbf{z}\in\mathcal{C},\hat{\mathbf{z}}\in \mathcal{D}}d(\mathbf{z},\hat{\mathbf{z}}), \tag{4}\] where \(\mathcal{C}\subset Z\) denotes the constraint set defined by \[\mathcal{C}:=\Big{\{}\mathbf{z}\in Z:(\ref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eqeq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eqeq:eqeq:eq:eq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eq:eqeq:eq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eq:eq:eqeq:eq:eqeqeq:eq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eq:eqeq:eqeqeq:eq:eqeq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eqeq:eq:eqeq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eqeq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eqeq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq where \(n_{e}\in\mathbb{N}\) is the number of local data points associated with the material point. The distance \(d:Z\times Z\to\mathbb{R}\) is defined by \[d(\mathbf{z},\hat{\mathbf{z}}):=\|\mathbf{z}-\hat{\mathbf{z}}\|, \tag{7}\] metricized by the norm \[\|\mathbf{z}\|^{2}:=\int\limits_{\Omega}\left(\frac{1}{2}\mathbf{C}\mathbf{\varepsilon} \cdot\mathbf{\varepsilon}+\frac{1}{2}\mathbf{C}^{-1}\mathbf{\sigma}\cdot\mathbf{\sigma}\right) \mathbf{dx}, \tag{8}\] where \(\mathbf{C}\in\mathbb{R}_{\mathrm{sym}}^{d\times d}\) is a symmetric positive definite matrix typically being of the type of elastic stiffness. Thus, the data-driven paradigm aims to find the closest point \(\mathbf{z}\) in the constraint set \(\mathcal{C}\) to \(\hat{\mathbf{z}}\) in the material data set \(\mathcal{D}\). Challenges such as data availability, noise, inconsistency, and high dimensionality frequently arise in the data-driven paradigm. Traditional analytical and computational methods may need to be adjusted when addressing these issues. Consequently, the incorporation of machine learning, particularly methods like generative adversarial networks coupled with physics-informed generators, is considered. This integration is aimed at effectively handling the complexities of data-driven datasets, ensuring the outcomes remain consistent with domain-specific knowledge. The following sections will present a detailed discussion on the principles of artificial neural networks and physics-informed neural networks, illustrating the approach of physics-informed generative adversarial networks to solve the data-driven boundary value problem (4). Generative adversarial networks with physics-informed generators for model-free data-driven problems This section delves into the application of Generative Adversarial Networks (GANs) equipped with Physics-Informed Generators for addressing the model-free data-driven problem. A GAN involves a competitive dynamic between two neural networks, forming a zero-sum game: one network's success implies the other's setback. To harness GANs for resolving the data-driven boundary value problem depicted in (4), Section 3.1 initiates with a concise overview of Artificial Neural Networks (ANNs) and explains physics-informed neural networks (PINNs). Section 3.2 lays out the foundational principles of GANs, and in Section 3.3, we pivot to the novel approach of leveraging GANs augmented with PINNs to solve the data-driven boundary value problem. ### Physics-informed neural networks Based on the universal function approximation theorem [49], an artificial neural network is a parametrized, non-linear function composition that can approximate arbitrary Borel measurable functions. This section introduces the basic concept based on the definitions and formulations in [45]. For this purpose, we introduce a densely connected feed-forward neural network, denoted by the map \(\mathcal{N}:\mathbb{R}^{d_{x}}\times[0,T]\to\mathbb{R}^{d_{y}}\), which is defined by a composition of \(n_{L}\in\mathbb{N}\) non-linear functions: \[\mathcal{N}:\mathbb{R}^{d_{x}}\times[0,T] \to\mathbb{R}^{d_{y}} \tag{9}\] \[(\mathbf{x},t) \mapsto\mathcal{N}(\mathbf{x},t)=\mathbf{y}^{(\ell)}\circ\ldots \circ\mathbf{y}^{(0)}=\mathbf{y}, \tag{10}\] for \(\ell=1,\ldots,n_{L}\), where \(\mathbf{x}\) denotes the spatial part of the input vector of dimension \(d_{x}\in\mathbb{N}\) at time \(t\in[0,T]\) with \(T>0\) and \(\mathbf{y}\) denotes the output vector of dimension \(d_{y}\in\mathbb{N}\). In this context, \(\mathbf{y}^{(0)}\) and \(\mathbf{y}^{(n_{L})}\) are called the input and output layer, such that \[\mathbf{y}^{(0)}=(\mathbf{x},t),\qquad\mathbf{y}^{(n_{L})}=\mathcal{N}( \mathbf{x},t). \tag{11}\] The functions \(\mathbf{y}^{(\ell)}\) are called hidden layers and define a \(\ell-\)fold composition, mapping the input \((\mathbf{x},t)\) to the output \(\mathbf{y}\) by \[\mathbf{y}^{(\ell)}=\{\mathbf{y}^{(\ell)}_{\eta},\,\eta=1,\ldots,\eta_{u}\}, \text{ with }\mathbf{y}^{(\ell)}_{\eta}=\operatorname{act}^{(\ell)}\left(\mathbf{W}^{( \ell)}_{\eta}\mathbf{y}^{(\ell-1)}+\mathbf{b}^{(\ell)}_{\eta}\right). \tag{12}\] We call \(\mathbf{y}^{(\ell)}_{\eta}\) the \(\eta^{\text{th}}\) neural unit of the \(\ell^{\text{th}}\) layer \(\mathbf{y}^{(\ell)}\), where \(\eta_{u}\in\mathbb{N}\) is the total number of neural units per layer. \(\mathbf{W}^{(\ell)}_{\eta}\) and \(\mathbf{b}^{(\ell)}_{\eta}\) denote the weight matrix and bias vector of the \(\eta^{\text{th}}\) neural unit in the \(\ell^{\text{th}}\) layer \(\mathbf{y}^{(\ell)}\). Furthermore \(\operatorname{act}^{(\ell)}(\cdot):\mathbb{R}\to\mathbb{R}\) is a non-linear activation function. All weights and biases of all layers \(\mathbf{y}^{(\ell)}\) are assembled in \[\boldsymbol{\theta}=\Big{\{}\left(\mathbf{W}^{(\ell)}_{\eta},\mathbf{b}^{( \ell)}_{\eta}\right);\,\ell=1,\ldots,n_{L},\,\eta=1,\ldots,\eta_{u}\Big{\}}, \tag{13}\] including all parameters of the neural network \(\mathcal{N}(\mathbf{x},t)\). As a result, the notation \(\mathcal{N}(\mathbf{x},t;\boldsymbol{\theta})\) highlights the dependence of a neural network's output on the input and the current realization of the weights and biases. Figure 1 illustrates the network's topology, a combination of layers, neural units, and activation functions. The main idea of solving boundary value problems with an artificial neural network is the reformulation to an optimization problem [35; 50; 51], where the residual of the differential equations is to be minimized. To solve the differential equation (1) and (2), a suitable topology for the artificial neural network and, consequently, the physics-informed neural networks described in Section 3.1 has to be chosen. Since (1) is stationary, we can reduce the artificial neural network to \(\mathcal{N}(\mathbf{x};\mathbf{\theta})\). Thus, we can define neural networks as an ansatz for the displacement and stress field i.e. \[\mathbf{u}(\mathbf{x},t) \approx\mathcal{N}_{u}(\mathbf{x};\mathbf{\theta}_{u}), \tag{14}\] \[\mathbf{\sigma}(\mathbf{x},t) \approx\mathcal{N}_{\sigma}(\mathbf{x};\mathbf{\theta}_{\sigma}), \tag{15}\] with trainable network parameters \(\mathbf{\theta}:=\{\mathbf{\theta}_{u},\mathbf{\theta}_{\sigma}\}\). Notably, there is no separate network for the strain tensor. The strain tensor is deduced using the kinematics and differentiation applied to the displacement network, i.e. \(\mathbf{\varepsilon}=\nabla^{\mathrm{sym}}\mathcal{N}_{u}(\mathbf{x};\mathbf{\theta} _{u})\). The architecture of this artificial neural network is visualized in Fig.2. Figure 1: Schematic representation of the neural network architecture \(\mathcal{N}(\mathbf{x},t;\mathbf{\theta})\), starting with multi-input parameters \(\mathbf{x}\) and \(t\) at input layer \(\mathbf{y}^{(0)}\), progressing through sequential hidden layer \(\mathbf{y}^{(1)},\dots,\mathbf{y}^{(n_{L}-1)}\). The architecture concludes with an output layer \(\mathbf{y}^{(n_{L})}\), producing the final output \(\mathbf{y}\). Using the neural network ansatz we can rewrite the physics (1) and (2) as \[\begin{array}{ll}R_{\Omega}=\nabla\cdot\mathcal{N}_{\sigma}(\mathbf{x},\mathbf{ \theta}_{\sigma})-\mathbf{f}(\mathbf{x}),&\text{in }\Omega,\\ R_{\Gamma_{D}}=\mathcal{N}_{u}(\mathbf{x},\mathbf{\theta}_{u})-\mathbf{g}(\mathbf{x}),& \text{on }\Gamma_{D},\\ R_{\Gamma_{N}}=\mathcal{N}_{\sigma}(\mathbf{x},\mathbf{\theta}_{\sigma})\cdot\mathbf{n }(\mathbf{x})-\mathbf{t}(\mathbf{x}),&\text{on }\Gamma_{N},\end{array} \tag{16}\] where \(R_{\text{PDE}}\) penalizes the residual of the equilibrium equation, and the equations \(R_{\Gamma_{D}}\) and \(R_{\Gamma_{N}}\) describe the discrepancy of the Dirichlet and Neumann boundary conditions. Notice that if \(\mathcal{N}_{u}\) and \(\mathcal{N}_{\sigma}\) is a solution to the original boundary value problem, they minimize the differential equation-based residuals. The parameters \(\mathbf{\theta}\) of the networks can be found by incorporating the physics-induced residuals into the training process of a neural network as components of the loss function. For this, we use a collocation method discretizing the domain \(\Omega\) and the boundary \(\Gamma:=\Gamma_{D}\cup\Gamma_{N}\) into sets of sample points \(S_{\Omega}\) and \(S_{\Gamma}\) with cardinalities \(|S_{\Omega}|\) and \(|S_{\Gamma}|\). Then, an optimization problem to find the optimal parameters \(\mathbf{\theta}^{\star}\), also called training, is defined as \[\mathbf{\theta}^{\star}=\operatorname*{arg\,min}_{\mathbf{\theta}}L_{\mathcal{C}} \tag{17}\] Figure 2: Schematic representation of the neural network’s topology, illustrating the progression from input through multiple hidden layers, resulting in displacement components \(u_{i}\) and stress components \(\sigma_{ij}\). with \(L_{\mathcal{C}}:=L_{\Omega}(\mathbf{x},\boldsymbol{\theta})+L_{\Gamma}(\mathbf{x}, \boldsymbol{\theta})\) given by the local losses \[L_{\Omega} =\frac{1}{|S_{\Omega}|}\sum_{\mathbf{x}\in S_{\Omega}}\|R_{\Omega} (\mathbf{x};\boldsymbol{\theta})\|_{2}^{2}, \tag{18}\] \[L_{\Gamma} =\frac{1}{|S_{\Gamma}|}\left(\sum_{\mathbf{x}\in S_{\Gamma_{D}}} \|R_{\Gamma_{D}}(\mathbf{x};\boldsymbol{\theta})\|_{2}^{2},+\sum_{\mathbf{x} \in S_{\Gamma_{N}}}\|R_{\Gamma_{N}}(\mathbf{x};\boldsymbol{\theta})\|_{2}^{2} \right). \tag{19}\] The expressions penalize the residual of the governing equations and the discrepancy of the Dirichlet and Neumann boundary conditions, respectively. Notice that in the three-dimensional setting, one defines the neural networks as tuples, i.e. \[\mathcal{N}_{u}(\mathbf{x};\boldsymbol{\theta}_{u}) =\{\mathcal{N}_{u_{i}}(\mathbf{x};\boldsymbol{\theta}_{u_{i}})\,| \,i=1,2\}, \tag{20}\] \[\mathcal{N}_{\sigma}(\mathbf{x};\boldsymbol{\theta}_{\sigma}) =\{\mathcal{N}_{\sigma_{ij}}(\mathbf{x};\boldsymbol{\theta}_{ \sigma_{ij}})\,|\,i,j=1,2,3\text{ and }ij=ji\}, \tag{21}\] including the three components \(u_{i}\) of displacement \(\boldsymbol{u}\) and six stress components \(\sigma_{ij}\), where \(ij=ji\) ensures the symmetry of the stress tensor \(\boldsymbol{\sigma}\). Fig. 3 illustrates the complete network's structure. While the PINN framework provides a straightforward method to solve physical-enhanced problems, it has challenges. Notably, there have been instances where the optimization yields solutions with unexpected or non-physical behaviors even when carefully tailored to encapsulate the physics [31]. Additionally, the current PINN formulation must minimize the difference between the network's outputs and the available strain-stress data \(\mathcal{D}\) due to the nature of the data-driven distance minimization problem (4). If Figure 3: Schematic representation of the physics-informed neural network’s topology, illustrating the progression from input through multiple hidden layers, resulting in displacement components \(u_{i}\) and stress components \(\sigma_{ij}\). These outputs undergo automatic differentiation to compute a physics-based loss function \(L_{\mathcal{C}}\). The loss is minimized using an optimizer to refine the network parameters \(\theta\). we integrate the distance as an additional loss into the global loss, the whole problem becomes a nested optimization, leading to training challenges. The neural network could optimize in an undesired direction during each training epoch. If the approximated strain-stress point is not accurate, the corresponding data point might be suboptimal concerning the optimization algorithm, further complicating the learning process. To address these challenges, we consider the integration of PINNs with generative adversarial networks. GANs are proficient at generating outputs with the same properties as actual data, providing a potential approach to generating realistic strain-stress solutions. Their flexibility ensures adaptability across diverse data types suited for various physical conditions. Moreover, the inherent capability of GANs to discern and capitalize on intricate patterns may lead to a more robust representation of underlying physics. Additionally, with conditional GANs, generating outputs based on specific conditions becomes feasible, allowing for more targeted solutions. The combined PINN-GAN approach seeks to ensure physical consistency and alignment with observed data, leveraging the strengths of both methodologies. For clarity, we will provide a brief overview of GAN theory in the following. ### Intermezzo to generative adversarial networks Introduced by Goodfellow et al. (2016), generative adversarial networks illustrate a novel approach to generating data using neural architectures. These networks comprise two distinctive neural entities: the generator (\(G\)) and the discriminator (\(D\)). The underlying goal of a GAN is to generate data instances that emulate the properties of actual data. The generation is achieved by setting the two networks against each other in a competitive game, often described as a dual-player minimax game. Taking reference from the definitions provided in (9), we define the real data space as \(\mathbb{D}_{\mathrm{real}}\subset\mathbb{R}^{d_{y}}\), where \(d_{y}\) is the dimension of the space, i.e., \(d_{y}=\dim(\mathbb{D}_{\mathrm{real}}).\) The main objective of GANs is to produce synthetic data denoted as \(\mathbf{y}_{\mathrm{syn}}\), residing in the same space as our real data \(\mathbf{y}_{\mathrm{real}}\). The generator can be defined as a function \(G:\mathbb{R}^{d_{x}}\rightarrow\mathbb{R}^{d_{y}}\), which transforms a random noise vector \(\mathbf{x}\) into synthetic data \(\mathbf{y}_{\mathrm{syn}}\). In contrast, the discriminator operates as a function \(D:\mathbb{R}^{d_{y}}\rightarrow\mathbb{R}\), that provides a measure of authenticity for a given data sample. Mathematically, these networks can be illustrated as: \[\begin{split} G:\mathbb{R}^{d_{x}}&\rightarrow \mathbb{R}^{d_{y}}\\ \mathbf{x}&\mapsto\mathcal{N}_{G}(\mathbf{x};\mathbf{ \theta}_{G}),\end{split}\qquad\text{and}\qquad\qquad\begin{split} D:\mathbb{R}^{d_{y}}&\rightarrow[0,1]\\ \mathbf{y}&\mapsto\mathcal{N}_{D}(\mathbf{y};\mathbf{ \theta}_{D}).\end{split} \tag{22}\] Here, \(\mathcal{N}_{G}(\mathbf{x};\mathbf{\theta}_{G})\) and \(\mathcal{N}_{D}(\mathbf{y};\mathbf{\theta}_{D})\) describe the neural networks with their corresponding trainable parameters \(\mathbf{\theta}_{G}\) and \(\mathbf{\theta}_{D}\). The adversarial game between the generator and the discriminator during training can be encapsulated in the following objective \[L(G,D)=\mathbb{E}_{\mathbf{y}\sim p_{\text{data}}}[\ln D(\mathbf{y})]+\mathbb{ E}_{\mathbf{x}\sim p_{\mathbf{x}}}[\ln(1-D(G(\mathbf{x})))], \tag{23}\] leading to the optimization: \[\min_{G}\max_{D}L(G,D), \tag{24}\] where \(\mathbb{E}\) represents a random variable's expectation or expected value. It provides a weighted average of a function concerning its probability distribution. Specifically, \[\mathbb{E}_{\mathbf{y}\sim p_{\text{data}}}[\ln D(\mathbf{y})] \tag{25}\] represents the average logarithmic score assigned by the discriminator to actual data samples drawn from the distribution \(p_{\text{data}}\). On the other hand, the expression \[\mathbb{E}_{\mathbf{x}\sim p_{\mathbf{x}}}[\ln(1-D(G(\mathbf{x})))] \tag{26}\] reflects the average logarithmic score the discriminator accords to the synthetic or generated data, which is created from a random noise vector \(\mathbf{x}\) following the noise distribution \(p_{\mathbf{x}}\). The competition between the two networks is straightforward: the generator \(G\) aims to produce data that the discriminator \(D\) cannot distinguish from accurate data. In contrast, the discriminator tries to better distinguish real data from fake data produced by \(G\). The probability distributions \(p_{\text{data}}\) and \(p_{\mathbf{x}}\) depict the actual data and noise distributions, respectively. The terms in the objective function essentially capture the average confidence levels of the discriminator in judging the authenticity of both original and fake data samples. The procedure of the GAN's interplay between the generator and the discriminator is illustrated in Fig. 4. Traditional GANs deploy a sigmoid activation function for the discriminator's final layer, ensuring its outputs fall within [0,1]. The GANs can suffer from issues like mode collapse (where the generator generates limited varieties of samples), vanishing gradients, and general training instability. To address some of these challenges, the Wasserstein GAN (WGAN) [52] changes the objective function to leverage the Wasserstein distance [53]. The WGAN objective can be described as: \[L_{WGAN}(G,D)=\mathbb{E}_{\mathbf{y}\sim p_{\text{data}}}[D(\mathbf{y})]- \mathbb{E}_{\mathbf{x}\sim p_{\mathbf{x}}}[D(G(\mathbf{x}))], \tag{27}\] leading to the following optimization: \[\min_{G}\max_{D}L_{WGAN}(G,D). \tag{28}\] WGANs are known to provide more stable and consistent training dynamics [52]. Building on the WGAN, the Wasserstein GAN with Gradient Penalty (WGAN-GP) introduced a regularization term to ensure that the discriminator's gradients remain bounded [54]. This gradient penalty aims to enforce the Lipschitz continuity condition, which addresses the vanishing gradient problem. The gradient penalty is defined as: \[\text{GP}=\mathbb{E}\left[\left(\|\nabla_{\tilde{\mathbf{y}}}D(\tilde{\mathbf{ y}})\|_{2}-1\right)^{2}\right], \tag{29}\] where \(\tilde{\mathbf{y}}=\delta\mathbf{y}_{\text{real}}+(1-\delta)\mathbf{y}_{ \text{syn}}\) and \(\delta\) is sampled from a uniform distribution in \([0,1]\). The optimization for WGAN-GP thus becomes \[\min_{G}\max_{D}L_{WGAN}(G,D)+\omega\cdot\text{GP}, \tag{30}\] Figure 4: Schematic representation of a generative adversarial network (GAN) showcasing the interaction between the generator producing data from random input and the discriminator evaluating the authenticity of both real and generated data. where \(\omega\in\mathbb{R}_{+}\) is a hyperparameter determining the weight of the gradient penalty in the overall objective [55]. ### Physics-informed GANs for data-driven mechanics problems In the classical data-driven computational mechanics paradigm Section 2, the objective is to find the closest point \(\mathbf{z}\) in the constraint set \(\mathcal{C}\) to \(\hat{\mathbf{z}}\) in the material dataset \(\mathcal{D}\), as formalized in equation (4). This context motivates our modified GAN approach for data-driven mechanics problems. To utilize GANs for solving differential equations in a data-driven mechanics setting, we propose a novel approach wherein the generator in the GAN architecture is identified as a physics-informed neural network (PINN). In this paradigm, while the generator outputs plausible solutions adhering to the underlying physics, the discriminator is trained to distinguish between the generator's predictions and actual strain-stress data. In the conventional GAN setup from Section 3.2, the generator \(G\) maps the input vector \(\mathbf{x}\) into synthetic data, \(\mathbf{y}_{\text{syn}}\). Instead of treating \(\mathbf{x}\) as a random noise vector, it represents the collocation points \(\mathbf{x}\) in the domain \(S_{\Omega}\). Thus, the generator is formalized as a mapping \(G:S_{\Omega}\rightarrow(\mathcal{N}_{u},\mathcal{N}_{\sigma})\), where \(\mathcal{N}_{u}\) and \(\mathcal{N}_{\sigma}\) represents the neural network approximation for the displacement and stress field, respectively. Therefore, the generator can be defined as: \[G(\mathbf{x},\mathbf{\theta}_{G}):=(\mathcal{N}_{u}(\mathbf{x};\mathbf{\theta}_{u}), \mathcal{N}_{\sigma}(\mathbf{x};\mathbf{\theta}_{\sigma})) \tag{31}\] where \(\mathbf{\theta}_{G}:=(\mathbf{\theta}_{u},\mathbf{\theta}_{\sigma})\) denotes the trainable parameters of the generator network. Building upon the physics-informed aspect, we differentiate \(\mathcal{N}_{u}\) and employ the kinematics equation to obtain the strain \(\mathbf{\varepsilon}\). Given \(\mathbf{\varepsilon}=\nabla^{\text{sym}}\mathcal{N}_{u}\), the generator's output evolves from merely the neural network predictions \(\mathcal{N}_{u}\) and \(\mathcal{N}_{\sigma}\) to the strain-stress pair \(\mathbf{z}:=(\mathbf{\varepsilon},\mathcal{N}_{\sigma})\). Once we obtain the strain-stress output from the generator, to stay consistent with the data-driven mechanics' paradigm, we compute the strain-stress data points \(\hat{\mathbf{z}}\in\mathcal{D}\) closest to the output \(\mathbf{z}\), which corresponds to: \[\hat{\mathbf{z}}=\operatorname*{arg\,min}_{\hat{\mathbf{z}}\in\mathcal{D}}d( \mathbf{z},\hat{\mathbf{z}}), \tag{32}\] with distance (7). We then use \(\mathbf{z}\) and \(\hat{\mathbf{z}}\) as synthetic and real data for the discriminator's training. For the discriminator \(D(\mathbf{y},\mathbf{\theta}_{D})\), we establish the mapping \(D:\mathbb{R}^{2d}\rightarrow[0,1]\), aligning with the conventional GAN framework. To accommodate strain-stress pairs as inputs for the discriminator, we convert a pair into a \(2d\)-vector \(\mathbf{y}\) by applying Voigt-Notation to both the strain and stress, then merging them into a single vector. Given strain-stress data \(\hat{\mathbf{z}}\in\mathcal{D}\), it assesses the data's authenticity, furnishing scores to guide the generator's training. With the generator now representing a PINN, the adversarial loss in equation (23) has to integrate the physics-informed loss \(L_{\mathcal{C}}\), derived from the residuals of the governing differential equations: \[L(G,D)=\mathbb{E}_{\hat{\mathbf{z}}\sim p_{\mathcal{D}}}[\ln D(\hat{\mathbf{z}} )]+\mathbb{E}_{\mathbf{x}\sim p_{S_{\Omega}}}[\ln(1-D(G(\mathbf{x})))+L_{ \mathcal{C}}]. \tag{33}\] The collaborative training between the discriminator and the physics-informed generator ensures that the latter learns to craft data that confounds the discriminator and aligns closely with intrinsic physics. Fig. 5 illustrated the physics-enhanced GAN approach for the data-driven mechanics problem. Regarding Wasserstein GANs and their gradient penalty variants, their objectives concerning the physics-informed generator must be modified. For instance, with the Wasserstein GAN objective, the loss function becomes: \[L_{\text{WGAN}}(G,D)=\mathbb{E}_{\hat{\mathbf{z}}\sim p_{\mathcal{D}}}[D(\hat{ \mathbf{z}})]-\mathbb{E}_{\mathbf{x}\sim p_{S_{\Omega}}}[D(G(\mathbf{x}))+L_{ \mathcal{C}}], \tag{34}\] Moreover, for the WGAN-GP, the combined objective is: \[L(G,D)=L_{\text{WGAN}}+\omega\cdot\text{GP}. \tag{35}\] By incorporating GANs with physics-informed principles, the models produce data that adheres to the statistics of observed datasets and the underlying differential equations. This integration addresses the nested optimization issue commonly found in the PINN-based data-driven mechanics. With the capability of GANs to generate outputs mirroring accurate data, Figure 5: Schematic representation of a physics-informed generative adversarial network (GAN) incorporating collocation points and strain-stress data for physical point generation and discrimination. the solutions are both statistically relevant and in line with physical principles. Using GANs simplifies the optimization process, making the training more stable and less prone to errors from inaccurate strain-stress approximations. However, it is worth noting that the loss values obtained while training a traditional GAN are often unreliable. In many studies, qualitative and quantitative evaluation methods are employed to assess the performance of the GAN. Qualitative evaluations, while offering a quick visual validation, can be subjective. Typically, they involve human observers who evaluate the realism of a generated sample. The overall presumption has been that if the generated sample appears realistic, the GAN's training is deemed successful, regardless of potential fluctuations in loss values. Nevertheless, such evaluations can be biased and do not always represent the complete performance spectrum of the GAN. For instance, the generated samples might still appear high quality even in scenarios where mode collapse occurs. Considering these challenges, especially in the context of our work where the goal is not generating images but accurately representing strain-stress states, we decided on WGAN + GP. Unlike traditional GANs, the loss of WGANs has a convergence point. Ideally, this point is reached when the generator is so adept at producing samples that no Lipschitz continuous discriminator can differentiate between real and generated samples. This characteristic of WGAN provides a more stable and consistent evaluation metric, ensuring that the generated strain-stress states are physically accurate. The effectiveness of this method will be showcased in a two-dimensional numerical example. ## 4 Numerical benchmark of a non-Linear elastic plate with hole This section illustrates the application of GANs to the data-driven computing paradigm [2] in a typical benchmark, considering stress analysis of non-linear elastic material. We discuss the problem setup and test environments and give a proper definition of the geometry and boundary conditions and the material parameters for data generation. We limit the simulation to noiseless synthetic data sets, which consist of strain-stress points created numerically using a material model rather than obtained by actual experimental measurements. However, experimental data is often noisy and contains outliers. This issue can be addressed with noise reduction algorithms such as tensor voting [56], Kalman filtering [57], and deep learning-based techniques. In this benchmark, we investigate a \(2d\) in-plain plate with a hole subject to a distributed force. The geometry, boundary conditions, and displacements are chosen according to a similar test presented in [16] and illustrated in Fig. 6. _Geometry:_ The system is defined by \(\Omega=\left[\,-\frac{\ell}{2},\frac{\ell}{2}\right]^{2}\setminus B_{r}(0)\), where \(B_{r}\) refers to the open ball of radius \(r=\frac{\ell}{4}\) centered at the origin \((0,0)\). The side lengths of the plate are equal to \(\ell=2\)m. Due to the symmetry of geometry, only one-quarter of the system is simulated, cf. Fig 6. Displacements are fixed at the quarter plate's left surface \(x=0\) in \(x\)-direction and at the bottom surface \(y=0\) in \(y\)-direction. The corresponding conditions read as follows: \[\begin{cases}u_{x}=0,&\text{if }x=0;\\ u_{y}=0,&\text{if }y=0;\end{cases} \tag{36}\] where \(\sigma_{x}\) is the stress and \(u_{x}\) and \(u_{y}\) are the displacements in \(x\) and \(y\)-directions, respectively. In addition, we define boundary conditions for the stress, especially for \(x=\frac{\ell}{2}\) the plate is subjected to a distributed force \(t(y)=200\cos(\frac{\pi y}{2})\) in \(x\)-direction. The boundary conditions for the stress components read \[\begin{cases}\sigma_{xx}=t(y),&\text{if }x=1;\\ \sigma_{yy}=0,&\text{if }y=1;\\ \sigma_{xy}=0,&\text{if }(x,y)\in\partial\Omega.\end{cases} \tag{37}\] Figure 6: Illustration of a square plate subjected to external forces, alongside its top-right quadrant representing the symmetry section with specified boundary conditions and force distribution \(t(y)\) applied. Notice that numerical methods based on the weak form of a boundary value problem innately satisfy shear-free boundary conditions on free boundaries. However, our PINN approach utilizes the strong form of the boundary value problem, so it is crucial to impose the zero stress boundary conditions directly [45]. To train the network, we utilize \(128^{2}\) quasi-random points produced using the Sobol sequence [58]. For testing, we generate \(256^{2}\) domain points using a uniform random distribution. _Material parameters:_ The boundary value problem considers the non-linear elastic material behavior of [16] defined by \[\mathbf{\sigma}=\lambda g(\mathrm{tr}(\mathbf{\varepsilon}))\mathbf{I}+\mu\mathbf{\varepsilon }+\mathbf{C}\mathbf{\varepsilon}, \tag{38}\] with \(g(x)=((|x|+a)^{p}-a^{p})\mathrm{sgn}(x)\) and \(a,p\in\mathbb{R}\). The applied material parameters are Young's modulus \(E\), Poisson's ratio \(\nu\), and orthotropic elasticity tensor for plane strain given by \[\mathbf{C}=\begin{pmatrix}C_{11}&2\nu(\bar{\lambda}+G_{\perp})&0\\ 2\nu(\bar{\lambda}+G_{\perp})&\bar{\lambda}+2G_{\perp}&0\\ 0&0&G_{\parallel}\end{pmatrix}, \tag{39}\] where \(\lambda=\frac{E\nu}{(1+\nu)(1-2\nu)}\) and \(\mu=\frac{E}{2(1+\nu)}\) are the well known Lame constants and \(C_{11}=4.6875E\), \(G_{\perp}=0.3E\), \(G_{\parallel}=0.2E\) and \(\bar{\lambda}=\frac{2\nu^{2}+1}{15-20\nu^{2}}E\) are additional material parameters. The exact parameter values used for the reference solution and synthetic data are given in Table 1. _Synthetic data:_ In order to simulate actual experimental measurements, we generate data artificially using the non-linear material model (38) based on the given material parameters. We investigate normal data distributions of \(100^{3}\) strain-stress data points with a fixed random seed. The data is created by a zero-mean normal distribution with a standard deviation of \(0.005\) in all strain dimensions. \begin{table} \begin{tabular}{c c c c} \hline \hline \(E\,[\mathrm{MPa}]\) & \(\nu\,[-]\) & \(a\,[-]\) & \(p\,[-]\) \\ \hline \(1\times 10^{4}\) & 0.3 & 0.001 & 0.005 \\ \hline \hline \end{tabular} \end{table} Table 1: Material parameters _WGAN parameter:_ For the adversarial network, the model consists of the generator and the discriminator setup. The generator \[G(\mathbf{x},\boldsymbol{\theta}_{G})=\{\mathcal{N}_{u}(\mathbf{x},\boldsymbol{ \theta}_{u}),\mathcal{N}_{\sigma}(\mathbf{x},\boldsymbol{\theta}_{\sigma})\} \tag{40}\] with \(\mathbf{x}=(x,y)\), \(\boldsymbol{\theta}_{G}=(\boldsymbol{\theta}_{u},\boldsymbol{\theta}_{\sigma})\) and \[\mathcal{N}_{u}(\mathbf{x},\boldsymbol{\theta}_{u}) =\{\mathcal{N}_{u_{i}}(\mathbf{x},\theta_{u_{i}})\,|\,i=x,y\}, \tag{41}\] \[\mathcal{N}_{\sigma}(\mathbf{x},\boldsymbol{\theta}_{\sigma}) =\{\mathcal{N}_{\sigma_{ij}}(\mathbf{x},\theta_{\sigma_{ij}})\,| \,i,j=x,y\}, \tag{42}\] is constructed with a series of fully connected layers. The architecture utilizes 4 hidden layers, each with 64 neurons. The activation function used across these layers is the Swish function, defined as \[\text{Hardswish}(x)=\begin{cases}0&\text{if }x\leq-3,\\ x&\text{if }x\geq 3,\\ \frac{x^{2}+3x}{6}&\text{otherwise}.\end{cases} \tag{43}\] In addition, to optimize the network training, we hard enforce the boundary conditions from (36) and (37), such that the output of the generator is given by \[\mathcal{N}_{u_{x}}(\mathbf{x};\theta_{u_{x}}) =x\cdot\hat{\mathcal{N}}_{u_{x}}(\mathbf{x};\theta_{u_{x}}),\] \[\mathcal{N}_{u_{y}}(\mathbf{x};\theta_{u_{y}}) =y\cdot\hat{\mathcal{N}}_{u_{x}}(\mathbf{x};\theta_{u_{y}}),\] \[\mathcal{N}_{\sigma_{xx}}(\mathbf{x};\theta_{\sigma_{xx}}) =(1-x)\cdot\hat{\mathcal{N}}_{\sigma_{xx}}(\mathbf{x};\theta_{ \sigma_{xx}}), \tag{44}\] \[\mathcal{N}_{\sigma_{yy}}(\mathbf{x};\theta_{\sigma_{yy}}) =(1-y)\cdot\hat{\mathcal{N}}_{\sigma_{yy}}(\mathbf{x};\theta_{ \sigma_{yy}}),\] \[\mathcal{N}_{\sigma_{xy}}(\mathbf{x};\theta_{\sigma_{xy}}) =xy(x^{2}+y^{2}-0.25)\cdot\hat{\mathcal{N}}_{\sigma_{xy}}(\mathbf{x}; \theta_{\sigma_{xy}}),\] with \(\mathbf{x}=(x,y)\) and \(\boldsymbol{\theta}=(\theta_{u_{i}},\theta_{\sigma_{ij}})\) being the tuple of all trainable network parameters regarding the displacement and stress component. In order to obtain the strains and optimize the loss function, the spatial derivatives are obtained by automatic differentiation. On the other hand, the discriminator \(D(\mathbf{y},\boldsymbol{\theta}_{D})\) comprises a network architecture of 3 hidden layers, each with 16 neurons, which uses the LeakyReLU activation function defined as \[\text{LeakyReLU}(x)=\begin{cases}x&\text{if }x\geq 0,\\ \alpha x&\text{if }x<0,\end{cases} \tag{45}\] with a slope of \(\alpha=0.2\) for negative values. Regarding optimization, both the generator and the discriminator use the ADAM optimizer with a learning rate of \(0.02\). The beta values for the moment estimates are set as \((0.5,0.999)\). A learning rate scheduler is employed with a maximum learning rate of \(0.02\). It is set to adjust the rate over a total of \(200\) steps for both the Generator and Discriminator. _Result:_ The WGAN+GP frameworks were utilized in our numerical evaluations to investigate their effectiveness in computing non-linear elastic materials through a data-driven approach. Figure 7 depicts the distribution of strain-stress achieved after \(200\) training epochs. Due to the utilization of batch processing during training, the number of training steps exceeded this epoch count. The findings provide a profound understanding of the training quality and effectiveness. We investigated the loss values during training for a clearer perspective on model behavior. The Wasserstein-enhanced architectures showcased robustness and consistency during training. Fig. 8 displays the losses for the discriminator and generator of the model across the epochs. Notably, shallow loss values for either the generator or the discriminator can be counterproductive. It generally indicates that one network is dominating the other, leading to a stagnation in the training process. Ideally, there should be a balance where both networks challenge each other, encouraging continuous improvement. Despite their effectiveness, traditional adversarial networks present non-interpretable loss values, making it challenging to discern training quality. The WGAN+GP approach offers direct insights into the quality of data generation, making it more user-friendly in solution analysis. In addition, we plot the minimum distance between the generated states and the data set \(\mathcal{D}\) in Fig. 9. Given the data-driven approach, the learning process trains with a finite set of data points. Consequently, the losses do not converge to zero but to a positive lower bound. In our case, both losses decrease over training, representing this convergence. In data-driven mechanics, the approach displayed a commendable ability to simulate stress-strain distributions. WGAN, with improved loss interpretability and smoother training, stands out as the preferable choice for intricate computational mechanics tasks. Figure 7: Visualization of displacement and stress distribution after 200 training epochs offering insights into the material’s behavior under the applied loads and conditions (36) and (37). From top-left to bottom-right: \(u_{x}\) showcases a gradient, indicating a maximum displacement at \((x,y)=(1,0)\); \(u_{y}\) reveals a displacement trend with negative values highlighted in \(x=0\); \(\sigma_{x}\) shows a maximum stress magnitude at \(y=0\); \(\sigma_{y}\) displays a similar gradient; and \(\sigma_{xy}\) captures a pronounced shear stress distribution inside the plate. Figure 8: Comparative visualization of the discriminator and generator loss metrics over training iterations for a WGAN+GP model, showcasing the dynamic interplay and convergence patterns. The shaded area shows the maximum range of loss for individual training batches. ## 5 Conclusion The model-free data-driven method, developed by Kirchdoerfer and Ortiz, uses experimental data directly in simulations, bypassing the entire material modeling step. The paradigm uses nearest-neighbor clustering to reformulate boundary value problems. The approach has been diversified for many applications. Challenges such as data availability, noise, inconsistency, and high dimensionality frequently arise in the data-driven paradigm. Traditional analytical and computational methods may need to be adjusted when addressing these issues. Consequently, the incorporation of machine learning methods is considered, especially physics-informed neural networks. In solving boundary value problems with ANNs, the idea is to transform it into an optimization problem. The residual of the differential equations is minimized, and the neural network approximates the displacement and stress field. However, there Figure 9: Visualization of the distance metrics over 200 epochs.The distance of the generated \(\mathbf{z}\) to the dataset \(\mathcal{D}\) illustrates how closely the model-generated outputs match the dataset over training iterations. are challenges with PINNs. There have been instances where the optimization yields solutions with unexpected or non-physical behaviors even when carefully tailored to encapsulate the physics. If we integrate the distance as an additional loss into the global loss, the whole problem becomes a nested optimization, leading to training challenges. In addition, approximated strain-stress fields can correspond to suboptimal data points influencing the direction and rate of the convergence. To address these challenges, we consider the integration of PINNs with generative adversarial networks. GANs are proficient at generating outputs with the same properties as actual data, providing a potential approach to generating realistic strain-stress solutions. Their flexibility ensures adaptability across diverse data types suited for various physical conditions. Moreover, the inherent capability of GANs to distinguish and capitalize on intricate patterns may lead to a more robust representation of underlying physics. The combined PINN-GAN approach seeks to ensure physical consistency and alignment with observed data, leveraging the strengths of both methodologies. This research introduced an approach to WGANs + GP tailored for data-driven mechanics problems. The generator is identified as a PINN, ensuring that generated outputs conform to underlying physical principles. Instead of random noise, the generator utilizes collocation points from the domain and maps them to neural network approximations of strain and stress fields. The discriminator is then trained using the generated and the closest actual strain-stress data. By integrating WGANs with physics-informed principles, the model outputs adhere to observed dataset statistics and differential equations. This results in improved optimization, more stable training, and accurate, physically consistent solutions. In this regard, we investigated a non-linear elastic plate with a hole benchmark. The results indicate that our proposed method provides reasonable outcomes. Furthermore, we observed robust and consistent training of the networks and noted the convergence of the data-driven solution as data size increased. As we advance our research, we aim to delve deeper into other convergence criteria for the GAN or WGAN. We plan to explore metrics such as the Inception Score [59], Frechet Inception Distance [60], and perceptual similarity measures [61] to provide a broader assessment of the generated outputs. These metrics will help to analyze the quality of the generated material states. Another area of interest is using the discriminator in the GAN framework for material identification. The discriminator's ability to distinguish between actual and generated outputs can be used to identify different mate rial states. This approach could offer a novelty to classify materials, and we want to explore this further. In addition, we plan to extend our method to more complex and varied material properties. We also consider integrating advanced machine learning techniques to improve prediction accuracy, especially when dealing with sparse datasets. We are considering hybrid network architectures that combine convolutional and regression layers. The traditional image-based GAN structure inspires this design. By adding these layers, we hope to combine the advantages of image-based GANs with our current data-focused method.
2304.00147
Propagating Parameter Uncertainty in Power System Nonlinear Dynamic Simulations Using a Koopman Operator-Based Surrogate Model
We propose a Koopman operator-based surrogate model for propagating parameter uncertainties in power system nonlinear dynamic simulations. First, we augment the a priori known state-space model by reformulating parameters deemed uncertain as pseudo-state variables. Then, we apply the Koopman operator theory to the resulting state-space model and obtain a linear dynamical system model. This transformation allows us to analyze the evolution of the system dynamics through its Koopman eigenfunctions, eigenvalues, and modes. Of particular importance for this letter, the obtained linear dynamical system is a surrogate that enables the evaluation of parameter uncertainties by simply perturbing the initial conditions of the Koopman eigenfunctions associated with the pseudo-state variables. Simulations carried out on the New England test system reveal the excellent performance of the proposed method in terms of accuracy and computational efficiency.
Yijun Xu, Marcos Netto, Lamine Mili
2023-03-31T21:52:16Z
http://arxiv.org/abs/2304.00147v1
Propagating Parameter Uncertainty in Power System Nonlinear Dynamic Simulations Using a Koopman Operator-Based Surrogate Model ###### Abstract We propose a Koopman operator-based surrogate model for propagating parameter uncertainties in power system nonlinear dynamic simulations. First, we augment a priori known state-space model by reformulating parameters deemed uncertain as pseudo-state variables. Then, we apply the Koopman operator theory to the resulting state-space model and obtain a linear dynamical system model. This transformation allows us to analyze the evolution of the system dynamics through its Koopman eigenfunctions, eigenvalues, and modes. Of particular importance for this letter, the obtained linear dynamical system is a surrogate that enables the evaluation of parameter uncertainties by simply perturbing the initial conditions of the Koopman eigenfunctions associated with the pseudo-state variables. Simulations carried out on the New England test system reveal the excellent performance of the proposed method in terms of accuracy and computational efficiency. Koopman operator; parameter uncertainty; statistical dynamic simulation; uncertainty propagation. ## I Introduction The uncertainties associated with electricity demand and supply, weather forecasting, measurement systems errors, and modeling accuracy bring grand challenges to the design and operation of modern power systems. Thus, uncertainty quantification (UQ) has driven substantial research within the power system community. See, e.g., [1, 2, 3]. In particular, propagating uncertainties in power system nonlinear dynamic simulations is an important problem and the focus of this letter. Monte Carlo (MC) simulation is arguably the prevailing method for uncertainty propagation. Though straightforward, MC simulation exhibits a prohibitive computational burden for practical applications in sizeable electric power systems. Analytical methods based on a linear approximation [1] of a nonlinear system model improve the computational efficiency of the simulations but at the expense of a significant loss of accuracy when the simulations involve events that push the system far from the system stable equilibrium point. Likewise, second-order approximations [2] improve the accuracy but lose computational efficiency because they require the numerical evaluation of higher-order derivatives. Conversely, statistical methods [3] simplify the approximation procedure while maintaining high computational efficiency but often lack physical meaning and interpretability. This letter proposes an alternative approach to propagate parameter uncertainty in power system nonlinear dynamic simulations based on the Koopman operator. Unlike analytical methods that perform first-order or second-order approximations of the system nonlinear model, the Koopman operator-based surrogate model captures the full nonlinear dynamics and is derivative-free. Unlike statistical methods, the proposed method retains physical interpretability and therefore is suitable for applications such as coherency identification [4] and selective modal analysis [5], among others. Furthermore, a Koopman operator-based surrogate [6, 7] of a power system nonlinear dynamic model enables the evaluation of a large set of parameters with low computational cost and high accuracy while propagating parameter uncertainties in power system dynamic simulations. ## II Koopman Operator Let an autonomous nonlinear dynamical system evolving on a finite-dimensional manifold \(M\) be governed by \[\dot{\mathbf{x}}(t)=\mathbf{f}(\mathbf{x}(t)), \tag{1}\] where \(t\in\mathbb{R}\), \(\mathbf{x}\in\mathbb{R}^{n_{x}}\subset M\) is the state, and \(\mathbf{f}:M\to M\) is a nonlinear function. Let an observable \(g(\mathbf{x})\) be a continuous function defined in \(M\), \(g:M\to\mathbb{R}\). The _Koopman operator_, \(\mathcal{K}_{t}\), is a linear, infinite-dimensional operator that acts on \(g\), \[\mathcal{K}_{t}\,g=g(\mathbf{S}_{t}), \tag{2}\] where \(\mathbf{S}_{t}:M\to M\); \(\mathbf{x}(0)\to\mathbf{x}(t)=\mathbf{x}(0)+\int_{0}^{t}\mathbf{f}(\mathbf{x}(\tau))d\tau\) is called the flow. Because the Koopman operator is linear, its eigenvalues, \(\lambda_{i}\), and eigenfunctions, \(\phi_{i}\), are defined by \(\mathcal{K}_{t}\phi_{i}=e^{\lambda_{i}t}\phi_{i}\), \(i=1,...,\infty\). In practice, one estimates a subset of the Koopman eigenvalues and eigenfunctions. To this end, let \(\mathbf{g}:M\to\mathbb{R}^{n_{d}}\), \(n_{d}\geq n_{x}\). If all \(n_{d}\) elements of \(\mathbf{g}\) lie within the span of the eigenfunctions \(\phi_{i}\), then \[\mathbf{g}(\mathbf{x}(t))=\sum_{i=1}^{n_{d}}\phi_{i}(\mathbf{x}(t))\,\mathbf{v}_{i}=\sum_{i=1} ^{n_{d}}\phi_{i}(\mathbf{x}(0))\,\mathbf{v}_{i}\,e^{\lambda_{i}t}, \tag{3}\] where \(\mathbf{v}_{i}\in\mathbb{C}\), \(i=1,...,n_{d}\), are the Koopman modes. The interpretation of (2)-(3) is straightforward. Instead of focusing on the evolution of the state, \(\mathbf{x}\), one shifts the focus to the observables, \(\mathbf{g}(\mathbf{x})\). The advantage is that the observables evolve linearly with time, see (3), without neglecting the nonlinear dynamics of the underlying dynamical system (1). The linear representation (3) is crucial to the proposed method's accuracy and computational efficiency, irrespective of nonlinearities. Note that \(\mathbf{g}(\mathbf{x})\) can be any continuous function of the state, \(\mathbf{x}\), including the state itself. See, e.g., [8] for a principled way of selecting these observables. Given a set of observables, it is straightforward to estimate a subset of the Koopman tuples \(\{\lambda_{i},\phi_{i},\mathbf{v}_{i}\}\). To this end, this work adopts the extended dynamic mode decomposition (EDMD) method [9]. Following [9], "if the data provided to the EDMD method are generated by a Markov process instead of a deterministic dynamical system, the algorithm approximates the eigenfunctions of the Kolmogorov backward equation, which could be considered as the stochastic Koopman operator." ## III The Proposed Method Let a deterministic power system model be \[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x},\mathbf{y}),\quad\mathbf{0}=\mathbf{h}(\mathbf{x},\mathbf{y}), \tag{4}\] where \(\mathbf{y}\in\mathbb{R}^{n_{y}}\) denotes algebraic variables, \(\mathbf{h}:M\rightarrow\mathbb{R}^{n_{y}}\) is a nonlinear function, and \(\mathbf{x}\) and \(\mathbf{f}\) are as defined in (1). Further, let \(\mathbf{\xi}\) be a random vector following a given probability density function. Now, suppose that \(\mathbf{m}(\mathbf{\xi})\), a subset of the model parameters1, is uncertain. To propagate the parameter uncertainty through the system model, consider a set of \(n_{mc}\) samples, drawn from a multivariate probability distribution of \(\mathbf{\xi}\), \(\{\mathbf{\xi}^{(j)}\}_{j=1}^{n_{m}}\). Then, for each \(\mathbf{\xi}^{(j)}\), \(j=1,...,n_{mc}\), one evaluates a modified model given by Footnote 1: We consider synchronous generators’ instead of transmission lines’ model parameters because the former directly impact the differential equations. \[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x},\mathbf{y},\mathbf{m}(\mathbf{\xi}^{(j)})),\quad\mathbf{0}=\mathbf{h}( \mathbf{x},\mathbf{y}), \tag{5}\] to obtain \(n_{mc}\) trajectories, from which one can quantify the sample mean and the sample variance of the states. Obviously, this MC simulation can be computationally costly for real-time applications in sizeable electric power networks. Now, let us introduce the propagation of parameter uncertainties using the Koopman operator. ### _Reformulation of the Dynamic Model_ The kernel idea in this letter is to augment (1) with \(n_{m}\) differential equations [10], as follows: \[\begin{cases}\dot{\mathbf{x}}(t)=\mathbf{f}(\mathbf{x}(t),\mathbf{m}(t)),\\ \dot{\mathbf{m}}(t)=\mathbf{0},\end{cases} \tag{6}\] thereby allowing one to cast the problem of parameter uncertainty propagation into the Koopman operator framework. The dimension of the augmented model (6) is \(n_{x}+n_{m}\), where \(n_{m}\) is the number of parameters deemed uncertain. Note that generator model parameters are time-invariant, constant values represented by _pseudo-state variables_ in (6). Now, we are in the position to act on the parameter space using the Koopman operator formalism. Note that although the model parameters are typically considered time-invariant, as they are here, exceptions do exist, e.g., adaptive control gain in inverter-based resources. Nonetheless, one can still capture these exceptions in (6) as long as ordinary differential equations can describe them; in that case, specifically, \(\dot{\mathbf{m}}(t)=\mathbf{0}\) would be modified accordingly. This fact demonstrates the flexibility in reformulating the augmented model, though this specific case goes beyond the scope of this letter. ### _Simulation-Based Data Collection_ We are now in a position to estimate the Koopman operator. The estimation of the Koopman operator relies exclusively on data, either numerical or experimental. In this letter, we use numerical data obtained from simulations. To this end, we first perturb the initial conditions of the pseudo-states--namely, the parameters \(\mathbf{m}(0)\) in (6)--at different random values, \(\mathbf{\xi}^{(j)}\), \(j=1,\ldots,n_{t}\). More specifically, we adopt a model \[\mathbf{m}^{(j)}(0)=\mathbf{m}+\mathbf{\xi}^{(j)} \tag{7}\] to obtain a set \(\{\mathbf{m}^{(j)}(0)\}_{j=1}^{n_{t}}\), where \(n_{t}\) denotes the number of sampled trajectories. Note that the values of \(\mathbf{m}\) can be obtained from the manufacturer data. Then, we repeatedly evaluate \[\begin{cases}\dot{\mathbf{x}}(t)=\mathbf{f}(\mathbf{x}(t),\mathbf{m}(t)),&\mathbf{x}(0),\\ \dot{\mathbf{m}}(t)=\mathbf{0},&\mathbf{m}^{j}(0),j=1,\ldots,n_{t},\end{cases} \tag{8}\] to obtain \(n_{t}\) trajectories of the system states, including the pseudo-states, as the training data. Note that to ensure the training efficiency, \(n_{t}\) should be designed to be a small number while maintaining a faster convergence rate than the MC sampling. Specifically, we generate \(\{\mathbf{\xi}^{j}\}_{j=1}^{n_{t}}\) via the Latin hypercube sampling technique for its well-known capability in experiment design. Using the simulated data obtained with the augmented model (8), we estimate a subset of the Koopman tuples, \(\{\lambda_{i},\phi_{i},\mathbf{v}_{i}\}\), using the EDMD method [8, 9]. Note that identifying a Koopman operator-based surrogate model requires the computation of the Moore-Penrose pseudo-inverse of a data matrix. The latter might be time-consuming depending on the matrix dimension and the numerical implementation. Nevertheless, highly efficient implementations of the Moore-Penrose pseudo-inverse are available. ### _UQ through Koopman Operator-Based Surrogate Model_ For convenience, define \(\mathbf{x}_{a}^{\top}=[\mathbf{x}^{\top}\,\mathbf{m}^{\top}]\). Let us use (3) to mimic the system performances described in (6) as a Koopman operator-based surrogate model. Obviously, (3) is in a much simpler functional form than (6) to represent a complex dynamical system [6], such as the dynamic power system considered here. This surrogate allows us to efficiently conduct uncertainty quantification, i.e., \(\mathbf{x}_{ak}\approx\sum_{i=1}^{n_{d}}\phi_{i}(\mathbf{x}_{a0})\mathbf{v}_{i}\mu_{i}^{k}\), at a large number of parameter values, \(\{\mathbf{m}^{(j)}\}_{j=1}^{n_{mc}}\). Note that \(\mu_{i}\) relates to the continuous-time Koopman eigenvalues \(\lambda_{i}=\ln\left(\mu_{i}\right)/\Delta t\), where \(\Delta t\) is the data sampling time. To numerically achieve this realization procedure, we simply assign each parameter sample, \(\mathbf{m}^{(j)}\), as the initial conditions to the associated pseudo-states while keeping the initial conditions of the true system states unchanged to get an updated \(\mathbf{x}_{a0}^{(j)}\), whose randomness can be further reflected in the _Koopman eigenfunctions_ through \(\mathbf{\phi}(\mathbf{x}_{a0}^{(j)})\approx\mathbf{L}\mathbf{g}(\mathbf{x}_{a0}^{(j)})\). The matrix \(\mathbf{L}\) stands for the left eigenvectors of the finite-dimensional approximation to the Koopman operator; refer to [8] for details. The other part in (3) remains unchanged, and then we have \[\mathbf{x}_{ak}^{(j)}\approx\sum_{i=1}^{n_{d}}\phi_{i}(\mathbf{x}_{a0}^{(j)})\mathbf{v}_{i} \mu_{i}^{k},\quad j=1,\ldots,n_{mc}. \tag{9}\] Now, using the set of \(\{\mathbf{x}_{ak}^{(j)}\}_{j=1}^{n_{mc}}\), we can quantify the uncertainties--e.g., the mean, the variance, the probability density function--in the system states at any given time, \(k\). ## IV Simulation Results Using the proposed method, we test its performance on the \(10\)-machine, \(39\)-bus New England power system with a classic generator model. The system dynamics are triggered by opening Line \(15\)-\(16\). We assume that the parameter values of the inertia for each generator are not well known. We suppose that they follow a Gaussian distribution with the mean being the original manufacturer data and the standard deviation being \(10\%\) of the mean value to account for the parameter uncertainties. We use an MC simulation with \(10,000\) samples to obtain the benchmark results for comparison. For the Koopman method, we set \(n_{t}=75\), and we select the quantity of interest as the rotor angle of Generator \(2\) with respect to that of Generator \(10\), denoted as \(\delta_{2-10}\), as an example. Note that the choice of the observables for the Koopman operator is an open research topic; therefore, we demonstrate two test cases with different observables. For the first case, we use the second-order multivariate _Hermite polynomials_. For the second case, in addition to the Hermite polynomials, we further introduce a cosine function and a sine function for each true state variable separately. The evolution of their means and standard deviations are depicted in Fig. 1, which shows that, under different observables, their means are quite accurate. Regarding the variance, although small differences are obtained during the first \(5\) s, these values increase as time evolves. This makes sense because the errors can accumulate over time [3]. This is precisely what is observed when executing the polynomial-chaos-expansion (PCE) method based on the sparse-grid rule [11], whose computing time amounts to \(32\) seconds. A similar computing time for the Koopman operator method is recorded. However, while the Koopman operator method has been applied with some success to coherency identification, stability assessment and modal analysis, among others, it still calls for further research. Indeed, the Koopman surrogate approach can not only serve as an alternative of the PCE method in UQ, but it can also help us to better deal with power system uncertainties. Also, as observed in Fig. 2, which depicts the probability density function of \(\delta_{2-10}\) at \(t=2\) s, the Koopman surrogate has the capability of accurately representing the full probability density of the system state at a given time. Compared with the MC simulations, which, as indicated in Table I, take nearly \(0.5\) hour to complete, the Koopman surrogate using Hermite polynomials takes only \(0.5\) minute, hence achieving a speedup of more than \(50\times\) while maintaining a good accuracy. Note that parallel computing is directly applicable to the training and the partial realization of the Koopman surrogate, resulting in a significant improvement of the computational efficiency of the method. In addition, these simulations demonstrate the flexibility of using different observables in the Koopman approximation. Note that by adequately tuning the observable functions [8], the Koopman surrogate still has the potential to be further improved in long-term dynamic simulations and its commuting efficiency for larger-dimensional systems. The proposed method is also able to deal with non-Gaussian uncertainties. Indeed, once the Koopman surrogate is trained, it can be directly evaluated by processing non-Gaussian distributed samples, \(\{\mathbf{m}^{(j)}\}_{j=1}^{n_{mc}}\), to propagate uncertainties. Considering power system applications, let us assume that the parameter values of the inertia for each synchronous generator follows a uniform probability distribution with \(10\%\) errors. The other settings remain unchanged. From Fig. 3, we can see that the Koopman method works well in approximating the mean and the variance under an uniform distribution. However, when it comes to higher moments, such as the skewness and the kurtosis as shown in Fig. 3, the Koopman method does not provide accurate results. Therefore, improving the performance of the Koopman method in higher-order moments deserves further exploration. ## V Conclusions In this letter, we propose a Koopman surrogate method for propagating uncertainties in power system dynamic simulations that achieve good performance in terms of accuracy and computational efficiency. Fig. 1: Sample mean and standard deviation of \(\delta_{2-10}\) obtained with MC simulation, PCE-based, and Koopman operator-based methods under Gaussian distribution. Fig. 2: Probability density function of \(\delta_{2-10}\) obtained with MC simulation and the Koopman operator-based methods.
2304.00153
Continuum limit for a discrete Hodge-Dirac operator on square lattices
We study the continuum limit for Dirac-Hodge operators defined on the $n$ dimensional square lattice $h\mathbb{Z}^n$ as $h$ goes to $0$. This result extends to a first order discrete differential operator the known convergence of discrete Schr\"odinger operators to their continuous counterpart. To be able to define such a discrete analog, we start by defining an alternative framework for a higher-dimensional discrete differential calculus. We believe that this framework, that generalize the standard one defined on simplicial complexes, could be of independent interest. We then express our operator as a differential operator acting on discrete forms to finally be able to show the limit to the continuous Dirac-Hodge operator.
Pablo Miranda, Daniel Parra
2023-03-31T21:59:16Z
http://arxiv.org/abs/2304.00153v1
# Continuum limit for a discrete Hodge-Dirac operator on square lattices. ###### Abstract. We study the continuum limit for Dirac-Hodge operators defined on the \(n\) dimensional square lattice \(h\mathbb{Z}^{n}\) as \(h\) goes to \(0\). This result extends to a first order discrete differential operator the known convergence of discrete Schrodinger operators to their continuous counterpart. To be able to define such a discrete analog, we start by defining an alternative framework for a higher-dimensional discrete differential calculus. We believe that this framework, that generalize the standard one defined on simplicial complexes, could be of independent interest. We then express our operator as a differential operator acting on discrete forms to finally be able to show the limit to the continuous Dirac-Hodge operator. **Keywords:** Continuum limit, Discrete Dirac operator, Higher order cochains, Discrete differential calculus. **Mathematics Subject Classification:** 47A10, 81Q35, 47A58. ## 1. Introduction and Main Result The aim of this paper is to study the continuum limit for a first order _discrete differential operator_ on \(h\mathbb{Z}^{n}\) as the mesh size \(h\) goes to \(0\). Such type of results have attracted growing attention following [17] in which Nakamura and Tadano showed that discrete Schrodinger operators of the form \(-\Delta+V\) on \(L^{2}(h\mathbb{Z}^{d})\) converge to the corresponding Schrodinger operator \(-\Delta+V\) on \(L^{2}(\mathbb{R}^{d})\) in the _norm resolvent sense_. Such kind of convergence, sometimes called generalized convergence to emphasize that operators are defined in different Hilbert spaces (see [10] and references therein), is useful for studying manifolds representing networks of thin tubes in the limiting process as they shrink to its underlying graph [14]. In the aftermath of the aforementioned result, some related techniques have been applied to the study of Fourier Multipliers [11] and of Laplacians on the half-space [11], and for quantum graphs on the Euclidean space [12]. In contrast with these positive and natural results, the study of discrete counterparts of Dirac operators has yielded different results. In [13] the authors showed that a discrete Dirac operator on \(\ell^{2}(h\mathbb{Z}^{2})^{2}\), written in terms of first-order discrete forward or backward differences, converge only in the _strong resolvent sense_. This was then confirmed in [11] where they showed that in order to obtain the norm resolvent convergence a diagonal term of order \(2\) (_i.e._ a Laplacian) is needed to be added (this is related to the so called _fermion doubling_. Our operator does not present that phenomenon. See Lemma 3.1). Motivated by this asymmetry between the discrete and continuous setting, the strategy of this manuscript is different. Following the study of the Gauss-Bonnet operator for graphs initiated in [1] and continued in [1, 21, 22] we adopt the point view of considering on \(\mathbb{Z}^{n}\) a discrete differential calculus. First, this gives rise to a _discrete exterior derivative_\(d_{h}\) acting on the Hilbert space of square-integrable cochains ###### Contents * 1 Introduction * 2 Preliminaries * 3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.2 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.4 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.5 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.6 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.7 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.8 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.9 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.2 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.4 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.5 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.6 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.7 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.8 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.9 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.2 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.4 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.5 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.6 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.7 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.8 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.9 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.2 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.4 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.5 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.6 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.7 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.8 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.9 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.2 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.4 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.5 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.6 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.7 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.8 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.9 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.2 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.4 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.5 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.6 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.7 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.8 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.9 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.2 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.4 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.5 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.6 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.7 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.8 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.9 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.2 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.2 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.4 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.5 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.6 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.7 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.8 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.9 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.2 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.4 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.5 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.6 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.7 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.8 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.9 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.2 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.4 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.5 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.6 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.7 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.8 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.9 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.2 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.4 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.5 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.6 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.7 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.8 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.9 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.2 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.2 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.3 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.4 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.5 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.6 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.7 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.8 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.9 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.10 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.11 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.12 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator * 3.1 The \(\mathbb{Z}^{n}\)-Bonnet operator ## 2. Higher order differential structure on \(\mathbb{Z}^{n}\) The discrete Hodge-Laplacian, as a higher dimensional analog of the usual graph Laplacian, was first introduced by Eckmann [1]. It was defined over combinatorial simplicial complexes and allows one to obtain a discrete Hodge Theory. Furthermore, Dodziuk showed in [10] that such discrete Hodge Theory approximates the Hodge Theory of a Riemannian manifold if one considers finer triangularizations of the given manifold. In that setting, the spectral theory of Hodge-Laplacian or Hodge-Dirac operators has already been considered, see [11, 12, 13, 14, 15] for some particular questions that are not far away of the perspective of this manuscript. However, all those examples are concerned with such discrete differential operators on simplicial complexes and such structure becomes trivial on \(\mathbb{Z}^{n}\) for cochains of order 2 or higher. With that in mind, in this section we introduce slightly more general assumptions, compared with that of a simplicial complex, that still allows us to have our discrete differential operators and, in the \(\mathbb{Z}^{n}\) case, to define non-trivial higher order cochains. ### Abstract combinatorial differential structure In order to motivate our definition below, we start by recalling the framework of discrete diferential operators on simplicial complexes. An _abstract simplicial complex_ K over a set \(V\) is a collection of finite subsets of \(V\) closed under inclusion. An element \(F\in K\) is of dimension \(i\) if it has cardinality \(i+1\) and is called an \(i\)-simplex. An element in \(F\) is called a vertice of \(F\). By choosing an ordering of its vertices we say that we have an oriented simplex \([F]\). If there exist an even permutation transforming one ordering into another we say that they define the same orientation. If the permutation is odd, the orientations are considered to be oppposite and denoted by \(\overline{[F]}\). The complex vector space \(C^{i}(K)\) of \(i\)-cochains of \(K\) is defined as the space of functions on \(i\)-simplices that satisfy \[f(\overline{[F]})=-f([F])\.\] Then the simplicial coboundary maps \(d_{i}:C^{i}(K)\to C^{i+1}(K)\) are defined by \[d_{i}f([x_{0},x_{1},\dots,x_{i+1}])=\sum_{j=0}^{i+1}(-1)^{j}f([x_{0},\dots, \hat{x}_{j},\dots x_{i+1}])\] where \(\hat{x}_{j}\) indicates that we have omitted the \(j\)-th vertex. Motivated by this formalism but aiming to cover the case of \(\mathbb{Z}^{n}\) we propose the following definition where we denote by \(\mathbb{P}(V)\) the set of all subsets of a set \(V\). **Definition 2.1**.: _We call \(X\) a combinatorial differential complex of dimension \(n\) over \(V\) if there exist:_ 1. \(\imath:X\to\mathbb{P}(V)\) _such that for every_ \(A\in\mathbb{P}(V)\) _we have that either_ \(\sharp\imath^{-1}(A)=0\) _or_ \(\sharp\imath^{-1}(A)=2\)_. This define an involution on_ \(X\) _by_ \(\imath(s)=\imath(\overline{s})\)_._ 2. \(X=\cup_{j=0}^{n}X^{j}\) _with every_ \(X^{j}\) _non-empty, such that_ \(v\in X^{0}\Rightarrow\sharp\imath(v)=1\) _and_ \(e\in X^{1}\Rightarrow\sharp\imath(e)=2\)_. Without loss of generality we can assume that_ \(X^{0}=V\times\mathbb{Z}_{2}\) _and hence define_ \(\operatorname{sgn}:X^{0}\to\{-1,1\}\) _that satisfies_ \(\operatorname{sgn}(v)=-\operatorname{sgn}(\overline{v})\)_._ _._ 3. _For_ \(j\geq 1\)_, there exists_ \(\partial:X^{j}\to\mathbb{P}(X^{j-1})\) _that satisfies for_ \(s\in X^{j}\)_,_ \(\partial(\overline{s})=\overline{\partial(s)}\)_. Furthermore if_ \(j\geq 2\) _we ask_ \(\partial(\partial(s))\) _to be an involutive set while for_ \(e\in X^{1}\)_,_ \(\partial(e)=\{v_{1},v_{2}\}\)_, to satisfy_ \(\operatorname{sgn}v_{1}\operatorname{sgn}v_{2}=-1\)_._ If \(r\in\partial(s)\) we say that \(r\) is contained in \(s\) according to orientation and denoted this by \(r\subset s\). We assume that \(r\subset s\) for only finitely many \(s\). **Example 2.2**.: _As intended, a simplicial complex is indeed a combinatorial differential complex. The \(\imath\) operator takes an ordered simplex, forget its order and gives the subjacent set. The \(\partial\) operator takes an ordered simplex and gives us the set of ordered faces. Note that we have taken the convention of giving two orientations to singletons. This should be understand as an inward copy of the vertex and an outward copy of the vertex. Then, an edge would be compose of two vertices, one taken with an outward signature, the origin, and one taken with the inward signature, the target._ **Remark 2.3**.: _Although quite general, Definition 2.1 still does not admits some cases that one could be interested in studying. In particular, Item 1 implies that we are working with non-oriented combinatorial differential complexes without loops. On one hand, the study of oriented discrete structures, that gives rise to non-symmetric differential operators, and hence non-self-adjoint Laplacian, is at the same time very interesting and not well explored from a purely spectral theoretical point of view. On the other hand, although for a infinite graph it seems rather harmless to not allow loops, in practice these kind of phenomena appears quite naturally when making a quotient and hence need to be dealt in order to develop a theory of periodic combinatorial differential complexes. Since in this article we are confined to the \(\mathbb{Z}^{n}\) case, we need not to deal with such technicalities. Finally, one can also notice that the assumption that \(r\subset s\) for only finitely many \(s\) corresponds in the graph case, to be working with locally finite graphs. Again, this is not compulsory, but it suffices for this article aims. We refer to [15] for a survey of problems that could be of interest in the setting of combinatorial differential complexes with unbounded geometry._ Our task now is from Definition 2.1 to reproduce the usual discrete differential calculus. With that aim, we start by defining for \(1\leq j\leq n\) the vector space of \(j-\)_cochains_ \[C^{j}(X)=\{f:X^{j}\to\mathbb{C}:f(\overline{s})=-f(s)\}\] and the _discrete exterior derivative_ \[d_{j}:C^{j}(X)\to C^{j+1}(X)\] by \[d_{j}f(s)=\sum_{r\subset s}f(r)\.\] We stress that this definition for the exterior derivative relies only in what we consider to be the _discrete differential structure_. However, to go one step further we need to introduce a measure on \(X\): \[m:X\to\mathbb{R}_{>0}\quad;\quad m(s)=m(\overline{s})\.\] Let us denote by \(C^{j}_{c}(X)\) the subspace of \(j-\)cochains of finite support. On it we can introduce an inner product by \[\langle f,g\rangle_{X^{j}}:=\frac{1}{2}\sum_{r\in X^{j}}m(r)f(r)\overline{g(r)}\.\] This inner product defines the Hilbert space \(\ell^{2}(X^{j})\) by taking the closure of \(C^{j}_{c}(K)\) on \(C^{j}(K)\). A sufficient condition for \(d_{j}:\ell^{2}(X^{j})\to\ell^{2}(X^{j+1})\) to be bounded is given by \[\sup_{s\in X^{j}}\{\sum_{s\subset r}\frac{m(r)}{m(s)}\}<\infty.\] Hence, as a natural generalization of the degree of a vertex, we can define \[\deg_{m}:X\to\mathbb{R}_{>0}\quad;\quad\deg_{m}(s)=\sum_{s\subset r}\frac{m(r) }{m(s)}\.\] We turn now to determine the adjoint \(d_{j}^{*}:\ell^{2}(X^{j+1})\to\ell^{2}(X^{j})\) by computing for \(f\in C^{j}_{c}(X)\) and \(g\in C^{j+1}_{c}(X)\) \[\langle d_{j}f,g\rangle_{X^{j+1}}= \frac{1}{2}\sum_{s\in X^{j+1}}m(s)df(s)\overline{g(s)}\] \[= \frac{1}{2}\sum_{s\in X^{j+1}}m(s)\overline{g(s)}\sum_{r\subset s }f(r) \tag{2}\] \[= \frac{1}{2}\sum_{r\in X^{j}}m(r)f(r)\overline{\sum_{r\subset s} \frac{m(s)}{m(r)}g(s)}. \tag{1}\] Hence we have \[d_{j}^{*}g(r)=\sum_{r\subset s}\frac{m(s)}{m(r)}g(s) \tag{3}\] once we notice that for \(r\in X^{j}\) such that \(r\notin\partial(s)\) for every \(s\in X^{j+1}\) the sum in (3) vanishes and hence the step from (1) to (2) is fully justified. The Hodge-Laplacian of order \(j\) acting on \(\ell^{2}(X^{j})\) can then be defined by \[\Delta_{j}(X)=d_{j}^{*}d_{j}+d_{j-1}d_{j-1}^{*}. \tag{4}\] This structure defined for each order can be put together defining on \(\ell^{2}(X)=\oplus_{j=0}^{n}\ell^{2}(X^{j})\) the _discrete exterior derivative_ \[d:=\oplus_{j=0}^{n-1}d_{j}\.\] Using this notation we can state the following lemma. **Lemma 2.4**.: _The operator_ \[d:\ell^{2}(X)\to\ell^{2}(X)\] _satisfies_ \[d^{2}=0\.\] Proof.: It is enough to note that for \(f\in\ell^{2}(X^{j})\) and \(s\in X^{j+2}\) we have \[(d^{2}f)(s)=[d_{j+1}(d_{j}f)]\left(s\right)=\sum_{r\subset s}d_{j}f(r)=\sum_{r \subset s}\sum_{t\subset r}f(t)=\sum_{t\in\partial(\partial(s))}f(t)=0\] by recalling Item 3. The discrete Gauss-Bonnet operator is then given by \(D:=d+d^{*}\). From Lemma 2.4 one can check that \(D^{2}=\oplus_{j=0}^{n}\Delta_{j}(X):=\Delta(X)\), where \(\Delta(X)\) is the full Hodge-Laplacian. To complete the necessary ingredients to state a Hodge theory on \(X\) we only need the following Corollary. **Corollary 2.5**.: \[\operatorname{Ker}(\Delta_{j}(X))=\operatorname{Ker}(d_{j})\cap \operatorname{Ker}(d_{j-1}^{*})\] Proof.: Let \(f\in\operatorname{Ker}(\Delta_{j})\). By (4) we have \[d_{j}^{*}d_{j}f=-d_{j-1}d_{j-1}^{*}f. \tag{5}\] Applying \(d_{j}\) to both sides of (5) we get from Lemma 2.4 that \[d_{j}d_{j}^{*}d_{j}f=0\.\] From the properties of the adjoint we get that \(f\in\operatorname{Ker}(d_{j})\). Applying \(d_{j-1}^{*}\) to both side of (5) we get the other inclusion. Putting all this information together we get the desired _Hodge decomposition_ given by \[\ell 2(X^{j})=\operatorname{Ran}(d_{j-1})\oplus\operatorname{Ker}(\Delta_{j}(X ))\oplus\operatorname{Ran}(d_{j}^{*})\.\] To close this Section, let us ilustrate the supersymmetry, as per [14], exihibited by \(d+d^{*}\). For this, we define \[\ell^{2}(X)_{\operatorname{even}}=\bigoplus_{j\ \operatorname{even}}\ell^{2}(X ^{j})\quad;\quad\ell^{2}(X)_{\operatorname{odd}}=\bigoplus_{j\ \operatorname{odd}}\ell^{2}(X^{j})\] which satisfy \(\ell^{2}(X)=\ell^{2}(X)_{\operatorname{even}}\oplus\ell^{2}(X)_{\operatorname {odd}}\). Furthermore, \[\tau f=\begin{cases}f&\text{ if }f\in\ell^{2}(X)_{\operatorname{even}}\\ -f&\text{ if }f\in\ell^{2}(X)_{\operatorname{odd}}\end{cases} \tag{6}\] defines an involution on \(\ell^{2}(X)\) such that the restriction of \(D\) to \(\ell^{2}(X)_{\operatorname{odd}}\) anticommutes with \(\tau\). Given \(m\geq 0\), the Dirac operator on \(X\) is given by \[\mathbb{D}_{m}=D+\tau m\.\] For simplicity, in this paper we consider only the massless case \(m=0\). ### Application to \(\mathbb{Z}^{n}\) As indicated in Section 1, our aim is to endow \(\mathbb{Z}^{n}\) with a discrete differential structure that, in contrast with the simplicial one, is not trivial for dimensions \(j\geq 2\). **Theorem 2.6**.: _There exist a combinatorial differential structure over \(\mathbb{Z}^{n}\), as per Definition 2.1, such that \(\ell^{2}(X^{j})\) is infinite dimensional for every \(1\leq j\leq n\)._ Let us denote by \(\{\delta_{1},\ldots,\delta_{n}\}\) the canonical basis of \(\mathbb{Z}^{n}\). An hyper-cube \(s\subset\mathbb{Z}^{n}\) of dimension \(j\) has \(2^{j}\) vertices. In particular, there exists a unique way of describing \(s\) by a \(2^{j}\)-tuple \[(x_{1},x_{2},\ldots x_{2^{j}-1},x_{2^{j}})\] that satisfies \(x_{i+1}-x_{i}=\pm\delta_{l}\) with \(x_{i+1}-x_{i}=\delta_{l_{i}}\) and \(l_{i}<l_{i+1}\) for \(1\leq i\leq j\). This is considered an orientation on \(s\) and by convention we refer to this orientation as positive. The opposite orientation is given by \[(x_{j+1},x_{j},x_{j-1},\ldots,x_{2},x_{1},x_{2^{j}},x_{2^{j}-1},\ldots,x_{j+2 })=(\tilde{x}_{1},\ldots,\tilde{x}_{2^{j}})\] and satisfies \(\tilde{x}_{i+1}-\tilde{x}_{i}=-\delta_{\tilde{\ell}_{i}}\) and \(\tilde{\ell}_{i}>\tilde{\ell}_{i+1}\) for \(1\leq i\leq j\). Hence, an oriented hyper-cube \(s\) of dimension \(j\) in \(\mathbb{Z}^{n}\) can be described by a "base" point in \(\mathbb{Z}^{n}\) which given by the first element in the \(2^{j}\)-tuple, and \(j\) elements of the canonical basis ordered on either increasing or decreasing fashion. Increasing and decreasing correspond to the positive and negative orientations, respectively. Denoting the base point by \(\lfloor s\rfloor\) we thus write \[s\equiv(\lfloor s\rfloor;\delta_{l_{1}},\ldots,\delta_{l_{j}}). \tag{7}\] In particular, in the first hyperoctant a positive oriented hyber-cube \(s\) satisfies that \(\lfloor s\rfloor\) is the smallest vertice. In contrast, for a negatively oriented one \(\lfloor s\rfloor\) is its biggest vertice. For \(j=1\), the notation \((x;\delta_{\ell})\) confates \((x,x+\delta_{\ell})\) with \((x,x-\delta_{\ell})\). In the first case the singleton \(\delta_{\ell}\) needs to be understood as ordered in an increasing fashion while in the later case in decreasing order. For \(j=0\), we repeat our convention of considering each \(x\in\mathbb{Z}^{n}\) with an outward and inward orientation. We denote these orientation by \((x;-)\) and \((x;+)\) respectively. Let \(P^{j,n}\) be the set of functions \[P^{j,n}:=\{I:\{1,\ldots,j\}\to\{1,\ldots,n\};I\text{ strictly monotone}\} \tag{8}\] By (7) a simplex \(s\) of dimension \(j\) defines an element \(\hat{s}\) in \(P^{j,n}\). Hence we can also use the following notation \[(\lfloor s\rfloor;\delta_{\hat{s}(1)},\ldots,\delta_{\hat{s}(j)})\equiv( \lfloor s\rfloor;\hat{s}). \tag{9}\] On \(P^{j,n}\) we can define an involution by \[\hat{s}^{*}(i)=\hat{s}(j-i+1)\,\] and a signature by taking \(+1\) if it is increasing and \(-1\) if it is decreasing. By assumption we have that \(\operatorname{sgn}(s)=\operatorname{sgn}(\hat{s})\). We set \[\lceil s\rceil:=\lfloor s\rfloor+\operatorname{sgn}(s)\sum_{i=1}^{j}\delta_{ \hat{s}(i)}\.\] This notation allow us to relate an hyper-cube and its opposite orientation by \[\overline{s}=(\lceil s\rceil;\hat{s}^{*})=:(-1)s\.\] We set \(X(\mathbb{Z}^{n})\) as the set of oriented hyper-cubes. It is easy to see that it satisfies Item 1 of Definition 2.1 by taking \(n:X(\mathbb{Z}^{n})\to\mathbb{P}(\mathbb{Z}^{n})\) which for every hyper-cube gives its set of vertices. Denoting by \(X^{j}\) the set of oriented hyper-cubes of dimension \(j\) we can write \[X(\mathbb{Z}^{n})=\cup_{j=0}^{n}X^{j}\] in order to satisfy Item 2 of Definition 2.1. We have left to define \[\partial:X^{j+1}\to\mathbb{P}(X^{j})\.\] With that aim, starting by the application (9) and for \(1\leq i_{0}\leq j\), we define \({}_{i_{0}}\hat{s}\in P^{j-1,n}\) by \[{}_{i_{0}}\hat{s}(i)=\begin{cases}\hat{s}(i)&\text{ if }i<i_{0}\ ;\\ \hat{s}(i+1)&\text{ if }i_{0}\leq i\.\end{cases}\] For \(j\geq 2\) and \(s\in X^{j}\subset X(\mathbb{Z}^{n})\) we define \[\partial(s):=\cup_{i=1}^{j}\{(-1)^{j-i}(\lfloor s\rfloor;_{i}\hat{s})\}\bigcup \cup_{i=1}^{j}\{(-1)^{i}(\lceil s\rceil;(_{i}\hat{s})^{*})\}. \tag{10}\] Let us illustrate this definition by an example. Let \(s\) be a cube, \(s=(x;\delta_{1},\delta_{2},\delta_{3})\). Then, \(\partial(s)\) is composed of \(6\) faces. Three of these faces contain \(x=\lfloor s\rfloor\) : \[(x;\delta_{1},\delta_{2})\,;\overline{(x;\delta_{1},\delta_{3})}\,;\,(x;\delta_ {2},\delta_{3})\ ; \tag{11}\] while three contain \(x+\delta_{1}+\delta_{2}+\delta_{3}=\lceil s\rceil\) : \[\overline{(x+\delta_{1}+\delta_{2}+\delta_{3};\delta_{3},\delta_{2})}\,;\,(x+ \delta_{1}+\delta_{2}+\delta_{3};\delta_{3},\delta_{1})\,;\overline{(x+\delta _{1}+\delta_{2}+\delta_{3};\delta_{2},\delta_{1})}. \tag{12}\] We complete our definition for \(s\in X^{1}\) by \(\partial(x,y)=\{(x;-),(y;+)\}\). We can now state the result that would show that this definition is in accordance with Item 3 of Definition 2.1 **Proposition 2.7**.: _Let \(s\in X^{j}\subset X(\mathbb{Z}^{n})\), with \(j\geq 2\). Then \(\partial(\overline{s})=\overline{\partial(s)}\) and if \(r\in\partial(\partial(s))\), we have that \(\overline{r}\in\partial(\partial(s))\)._ The proof of Proposition 2.7 is rather long and elementary so we posponed to the Appendix A. We have hence endowed \(\mathbb{Z}^{n}\) with a _combinatorial differential structure_. By extension, we also have this structure for \(h\mathbb{Z}^{n}:=\{(hz_{1},\ldots,hz_{n}):z_{l}\in\mathbb{Z}\}\) with \(h>0\). We denote \[X(h\mathbb{Z}^{n})=\bigcup_{j=0}^{n}X^{j}_{h}\.\] The only ingredient missing for having the Gauss-Bonnet operator is the measure \(m:X(h\mathbb{Z}^{n})\to\mathbb{R}_{>0}\). For \(s\in X^{j}_{h}\) we set \[m(s)=h^{-2j}\.\] This measure is chosen in order to ensure that the norm of \(\delta_{s_{0}}\), the delta function over \(s_{0}\in X_{h}^{j}\), correspond to the volume of the (shrunken) \(j\)-hyper-cube. Indeed we can notice that \[||\delta_{s_{0}}||=\left(\sum_{s\in X}m(s)|\delta_{s_{0}}(s)|^{2}\right)^{\frac{ 1}{2}}=(m(s_{0}))^{\frac{1}{2}}=h^{-j}\.\] It follows that we have the following combinatorial differential operators on \(\ell^{2}(X(h\mathbb{Z}^{n}))\): \[(d_{h}f)(s)=\sum_{r\subset s}f(r)\quad;\quad(d_{h}^{*}f)(s)=\frac{1}{h^{2}}\sum _{s\subset r}f(r). \tag{13}\] One can check, using for example Lemma 3.1 and Corollary 3.2, that the spectrum of \(d_{h}+d_{h}^{*}\) is given by \[\left[-\frac{\sqrt{4n}}{h},\frac{\sqrt{4n}}{h}\right]\.\] For ease of notation, when \(h=1\) we denote the corresponding Hilbert space by \(\ell^{2}(X(\mathbb{Z}^{n}))\) and drop the \(h\) subscript. ### The particular case \(\mathbb{Z}^{1}\) The study of \(1\)-dimensional discrete Dirac operator clearly outweigh its higher dimensional counterpart as can be attested for example by [1, 1, 2] and reference therein. In the topic of the continuum limit, [2] focus in the \(n=2\) and argue that for \(n=1\) the discrete Dirac operator on \(\ell^{2}(\mathbb{Z})^{2}\) is given by \[\mathbb{D}_{m}=\begin{pmatrix}m&D_{-}\\ D_{+}&-m\end{pmatrix}\] where \(D_{+}f(x)=f(x+1)-f(x)\) and \(D_{-}=D_{+}^{*}\), is unitarily equivalent to a Schrodinger operator on \(\ell^{2}(\mathbb{Z})\). From this, one can use the result from [11] to study its continuum limit. From another point view, in contrast with the modifications needed for bigger dimension, [1] shows that for \(n=1\) the continuum limit holds in the norm resolvent sense for \(\mathbb{D}_{m}\). These results coincide with the present work in the sense that \(\mathbb{D}_{m}\) is unitarily equivalent to \(d+d^{*}+m\tau\), where \(\tau\) was defined on (6). This equivalence can be implemented by the unitary operator \(\mathbb{U}:\ell^{2}(X(\mathbb{Z}))\to\ell^{2}(\mathbb{Z})^{2}\) is given by \[(\mathbb{U}f)(x)=(f(\{x,+\}),f(x,x+1))\.\] ## 3. The operator \(d\) acting on discrete differential forms In this section we construct a unitary representation of \(\ell^{2}(X(\mathbb{Z}^{n}))\) which will be useful for the subsequent computations. For this construction we use standard notation from multilinear algebra and differential geometry. First, for \(0\leq j\leq n\) let \(\bigwedge^{j}(\mathbb{Z}^{n})\) be the vector space of alternating \(j\)-linear complex valued functions on \((\mathbb{Z}^{n})^{j}\), where we identify \(\bigwedge^{0}(\mathbb{Z}^{n})\) with \(\mathbb{C}\). We refer to a \(\omega\in\bigwedge^{j}(\mathbb{Z}^{n})\) as a \(j\)-form. We define \(dx^{l}\in\bigwedge^{1}(\mathbb{Z}^{n})\) by \[dx^{l}(\delta_{k})=\begin{cases}1&\text{ if }l=k\\ 0&\text{ if }l\neq k\end{cases}\.\] A basis for \(\bigwedge^{1}(\mathbb{Z}^{n})\) is given by \(\{dx^{1},\ldots,dx^{n}\}\). The wedge product \(\wedge:\bigwedge^{k}(\mathbb{Z}^{n})\times\bigwedge^{j}(\mathbb{Z}^{n})\to \bigwedge^{k+j}(\mathbb{Z}^{n})\) for \(\eta\in\bigwedge^{k}(\mathbb{Z}^{n})\) and \(\omega\in\bigwedge^{j}(\mathbb{Z}^{n})\) is given by \[\eta\wedge\omega(\mu_{1},\ldots,\mu_{k+j})=\frac{1}{k!j!}\sum_{\sigma\in S_{k+j }}\operatorname{sgn}(\sigma)\eta(\mu_{\sigma(1)},\ldots,\mu_{\sigma(k)})\omega (\mu_{\sigma(k+1)},\ldots,\mu_{\sigma(k+j)})\.\] Then, a basis for \(\bigwedge^{j}(\mathbb{Z}^{n})\) is given by \[\{\omega\in\bigwedge^{j}(\mathbb{Z}^{n}):\omega=dx^{l_{1}}\wedge\ldots\wedge dx ^{l_{j}}\text{ and }l_{1}<\cdots<l_{j}\}\.\] Let \(P^{j,n}_{+}\) be the elements in \(P^{j,n}\) (defined in (8)) that are strictly increasing. For \(I\in P^{j,n}_{+}\) we define \(dx^{I}:=dx^{I(1)}\wedge\ldots\wedge dx^{I(j)}\). Using this basis we can define an inner product on \(\bigwedge^{j}(\mathbb{Z}^{n})\) by: \[\langle dx^{I};{dx^{I}}^{\prime}\rangle\bigwedge^{j}(\mathbb{Z}^{n}):=\begin{cases} 1&\text{ if }I=I^{\prime}\\ 0&\text{ if else}\end{cases}. \tag{14}\] Let \(\Omega^{j}(\mathbb{Z}^{n}):=\{\omega:\mathbb{Z}^{n}\to\bigwedge^{j}(\mathbb{Z} ^{n})\}\) be the vector space sections over \(\bigwedge^{j}(\mathbb{Z}^{n})\). We denote by \(\Omega^{j}_{c}(\mathbb{Z}^{n})\) the subspace of compactly supported sections. Each \(\omega\) in \(\Omega^{j}(\mathbb{Z}^{n})\) can be written by \[\omega(\mu)=\sum_{I\in P^{j,n}_{+}}\omega_{I}(\mu)dx^{I}, \tag{15}\] where \(\omega_{I}:\mathbb{Z}^{n}\to\mathbb{C}\). On \(\Omega^{j}_{c}(\mathbb{Z}^{n})\) we consider the inner product induced by the inner product on \(\bigwedge^{j}(\mathbb{Z}^{n})\) given by (14). Using (15) one can check that \[\langle\omega,\eta\rangle_{\Omega^{j}_{c}(\mathbb{Z}^{n})}=\sum_{\mu\in \mathbb{Z}^{n}}\sum_{I\in P^{j,n}_{+}}\omega_{I}(\mu)\overline{\eta_{I}(\mu)}. \tag{16}\] The completion of \(\Omega^{j}_{c}(\mathbb{Z}^{n})\) under the norm defined by (16) is \(\ell^{2}(\mathbb{Z}^{n};\bigwedge^{j}(\mathbb{Z}^{n}))\). Let us introduce the operator \(\tilde{d}\) in \(\Omega^{j}(\mathbb{Z}^{n})\) using the standard ideas of differential forms. First, for a function \(\omega:\mathbb{Z}^{n}\to\mathbb{C}\) set \[\mathcal{D}_{l}\omega(\mu)=\omega(\mu+\delta_{l})-\omega(\mu).\] Let \(\tilde{d}_{0}:\Omega^{0}(\mathbb{Z}^{n})\to\Omega^{1}(\mathbb{Z}^{n})\) be the exterior derivative defined by \[\tilde{d}_{0}\omega=\sum_{l=1}^{n}(\mathcal{D}_{l}\omega)dx_{l}. \tag{17}\] We are interested in comparing \(d\) from (13) to \(\tilde{d}\). With that in mind, let us define, for every \(0\leq j\leq n\), the unitary operator \(U_{j}:\ell^{2}(X^{j})\to\ell^{2}(\mathbb{Z}^{n};\bigwedge^{j}(\mathbb{Z}^{n}))\) by \[(U_{j}f)(\mu):=\sum_{I\in P^{j,i}_{+}}f(\mu;\delta_{I(1)},\ldots,\delta_{I(j)} )\,dx^{I}. \tag{18}\] Noticing that for \(f\in\ell^{2}(X^{0})\) \[[(U_{1}\circ d_{0})f](\mu)=\sum_{i=1}^{j}(d_{0}f)(\mu;\delta_{j})dx^{i}=\sum_ {i=1}^{j}(d_{0}f)(\mu,\mu+\delta_{i})dx^{i}=\sum_{i=1}^{j}(f(\mu+\delta_{i})-f( \mu))dx^{i}\] \[= \sum_{i=1}^{j}\mathcal{D}_{i}f(\mu)dx^{i} \tag{19}\] we obtain that \(U_{1}\circ d_{0}=\tilde{d}_{0}\circ U_{0}\). The operator \(\tilde{d}_{j}:\Omega^{j}(\mathbb{Z}^{n})\to\Omega^{j+1}(\mathbb{Z}^{n})\) is defined by \[\tilde{d}_{j}(\sum_{I\in P_{+}^{j,n}}\omega_{I}dx^{I})=\sum_{I\in P_{+}^{j,n}}( \tilde{d}_{0}\omega_{I})\wedge dx^{I}. \tag{20}\] Arguing as in (19) one can check that we have the following commutative diagram To have a global perspective we define the _total exterior derivative_\(\tilde{d}\) as the off-diagonal operator given by \[\tilde{d}:\oplus_{j=0}^{n-1}\Omega^{j}(\mathbb{Z}^{n})\to\oplus_{j=1}^{n} \Omega^{j}(\mathbb{Z}^{n})\.\] and from the family of unitary operators \(U_{j}\), the diagonal operator \[U:\ell^{2}(X(\mathbb{Z}^{n}))\to\bigoplus_{j=0}^{n}\ell^{2}(\mathbb{Z}^{n}; \bigwedge^{j}(\mathbb{Z}^{n}))\.\] Then, we can summarize the results so far by \[\tilde{d}\circ U=U\circ d. \tag{21}\] We turn now our attention to \(\tilde{d}^{*}\). For this, let us consider \(I\in P_{+}^{j,n}\) and define \(J_{I}:\{j+1,\dots,n\}\to\{1,\dots,n\}\setminus\{I(1),\dots,I(j)\}\) to be the only strictly increasing function between those two sets. Then we denote by \(IJ_{I}\) the permutation given by \[IJ_{I}(i)=\begin{cases}I(i)&i\leq j\\ J_{I}(i)&j<i\end{cases}\.\] The _Hodge star operator_ is defined as the operator \(*:\bigwedge^{j}(\mathbb{Z}^{n})\to\bigwedge^{n-j}(\mathbb{Z}^{n})\) determined by \[*dx^{I}=\operatorname{sign}(IJ_{I})dx^{J_{I}}\.\] Then, we can compute the adjoint operator \(\tilde{d}^{*}:\Omega^{j+1}(\mathbb{Z}^{n})\to\Omega^{j}(\mathbb{Z}^{n})\) as \[\tilde{d}^{*}=(-1)^{ni+1}*\tilde{d}*. \tag{22}\] Let us denote by \(\Delta\) the discrete Laplacian on \(\mathbb{Z}^{n}\) given by: \[(\Delta f)(\mu)=\sum_{l=1}^{n}f(\mu+\delta_{l})+f(\mu-\delta_{l})-2f(\mu)\,\] and consider the bounded operator \(\tilde{\Delta}\) on \(\ell^{2}(\mathbb{Z}^{n};\bigwedge^{j}(\mathbb{Z}^{n}))\) defined by \[\tilde{\Delta}\left(\sum_{l}\omega_{I}dx^{I}\right)=\sum(\Delta\omega_{I})dx^ {I}\.\] The following result shows that the Hodge-Laplacian on \(\mathbb{Z}^{n}\) is unitary equivalent to a family of usual Laplacians on \(\mathbb{Z}^{n}\). **Lemma 3.1**.: \[(\tilde{d}+\tilde{d}^{*})^{2}=\tilde{\Delta}\] Proof.: First notice that by (21) we know that \(\tilde{d}^{2}=0\). It follows that \((\tilde{d}+\tilde{d}^{*})^{2}=\tilde{d}\tilde{d}^{*}+\tilde{d}^{*}\tilde{d}\) and hence we will prove \((\tilde{d}\tilde{d}^{*}+\tilde{d}^{*}\tilde{d})=\tilde{\Delta}\). Consider the quadratic form of the operator \(\tilde{\Delta}\) given by \[\langle\sum_{I}(\Delta\omega_{I})dx^{I};\sum_{I}\omega_{I}dx^{I}\rangle =\sum_{\mu\in\mathbb{Z}^{n}}\sum_{I}(\Delta\omega_{I}(\mu)) \overline{\omega_{I}(\mu)}\] \[=\sum_{\mu\in\mathbb{Z}^{n}}\sum_{I}\sum_{\alpha=1}^{n}(( \mathcal{D}_{\alpha}^{2}\omega_{I})(\mu-e_{\alpha}))\overline{\omega_{I}(\mu)} \tag{23}\] \[=\sum_{\mu\in\mathbb{Z}^{n}}\sum_{I}\sum_{\alpha=1}^{n}| \mathcal{D}_{\alpha}\omega_{I}(\mu)|^{2}\.\] Now, let us compute the quadratic form of \(\tilde{d}\tilde{d}^{*}+\tilde{d}^{*}\tilde{d}\). Using (22) we obtain that \[\langle(\tilde{d}\tilde{d}^{*})\omega,\omega\rangle =\langle\tilde{d}*\omega;\tilde{d}*\omega\rangle\] \[=\langle\tilde{d}\sum_{I}\operatorname{sign}(IJ_{I})\omega_{I}dx^ {J};\tilde{d}\sum_{I}\operatorname{sign}(IJ_{I})\omega_{I}dx^{J_{I}}\rangle\] \[=\langle\sum_{I}\operatorname{sign}(IJ_{I})\sum_{\alpha=1}^{n} \mathcal{D}_{\alpha}\omega_{I}dx^{\alpha}\wedge dx^{J_{I}};\sum_{I} \operatorname{sign}(IJ_{I})\sum_{\alpha=1}^{n}\mathcal{D}_{\alpha}\omega_{I} dx^{\alpha}\wedge dx^{J_{I}}\rangle\] \[=\sum_{\mu\in\mathbb{Z}^{n}}\sum_{I}\sum_{\begin{subarray}{c} \alpha\neq J_{I}(i)\\ j+1\leq i\leq n\end{subarray}}|\mathcal{D}_{\alpha}\omega_{I}(\mu)|^{2} \tag{24}\] \[=\sum_{\mu\in\mathbb{Z}^{n}}\sum_{I}\sum_{i=1}^{j}|\mathcal{D}_{ I(i)}\omega_{I}(\mu)|^{2}.\] Similarly \[\langle(\tilde{d}^{*}\tilde{d})\omega,\omega\rangle =\langle\tilde{d}\omega;\tilde{d}\omega\rangle\] \[=\langle\sum_{I}\sum_{\alpha=1}^{n}\mathcal{D}_{\alpha}\omega_{I }dx^{\alpha}\wedge dx^{J};\sum_{I}\sum_{\alpha=1}^{n}\mathcal{D}_{\alpha} \omega_{I}dx^{\alpha}\wedge dx^{J}\rangle\] \[=\sum_{\mu\in\mathbb{Z}^{n}}\sum_{I}\sum_{\begin{subarray}{c} \alpha\neq I(i)\\ 1\leq i\leq j\end{subarray}}|\mathcal{D}_{\alpha}\omega_{I}(\mu)|^{2} \tag{25}\] \[=\sum_{\mu\in\mathbb{Z}^{n}}\sum_{I}\sum_{i=j+1}^{n}|\mathcal{D}_ {J_{I}(i)}\omega_{I}(\mu)|^{2}.\] Adding (24) and (25) we obtain (23). This result is useful for the present article because it allows the following representation of the resolvent of \(d+d^{*}\): **Corollary 3.2**.: _For \(\operatorname{Im}(z)\neq 0\) the resolvent operators of \(\tilde{d}+\tilde{d}^{*}\) and \(\tilde{\Delta}\) satisfies_ \[(\tilde{d}+\tilde{d}^{*}-z)^{-1}=(\tilde{d}+\tilde{d}^{*}+z)(\tilde{\Delta}-z^{2 })^{-1}.\] ## 4. Continuum limit ### Introducing the mesh size \(h\) Now we are ready to introduce the parameter \(h\). As in the last part of section Section 2 we consider the spaces \(\bigoplus_{j=0}^{n}\ell^{2}(h\mathbb{Z}^{n};\bigwedge^{j}(h\mathbb{Z}^{n}))\) and \(\bigoplus_{j=0}^{n}\ell^{2}\Big{(}h\mathbb{Z}^{n};\mathbb{C}^{\binom{n}{j}} \Big{)}\) with internal products given respectively by \[\langle\omega,\eta\rangle_{\ell^{2}(h\mathbb{Z}^{n};\bigwedge^{j} (h\mathbb{Z}^{n}))} =\frac{1}{h^{2j}}\ \sum_{\mu\in h\mathbb{Z}^{n};I\in P_{+}^{j,n}}\omega_{I}(\mu) \overline{\eta}_{I}(\mu),\] \[\langle f,g\rangle_{\ell^{2}\Big{(}h\mathbb{Z}^{n};\mathbb{C} \binom{n}{j}\Big{)}} =\sum_{\mu\in h\mathbb{Z}^{n};1\leq l\leq\binom{n}{j}}f_{l}(\mu) \overline{g}_{l}(\mu).\] To define a unitary operator between these spaces we introduce the lexicographic order in \(P_{+}^{j,n}\), which is given by: \(I<I^{\prime}\) if and only if there exists \(1\leq l\leq j\) such that \[I(i)= I^{\prime}(i),\quad i<l,\] \[I(l)< I^{\prime}(l).\] Therefore we have \(P_{+}^{j,n}=\{I_{1}^{j}<\cdots<I_{\binom{n}{j}}^{j}\}\). For \(0\leq j\leq n\), using (15), we set \[\tilde{U}_{j,h}:\ell^{2}(h\mathbb{Z}^{n};\bigwedge^{j}(h\mathbb{Z}^{n}))\to \ell^{2}\Big{(}h\mathbb{Z}^{n};\mathbb{C}^{\binom{n}{j}}\Big{)}\,\] by \[(\tilde{U}_{j,h}\omega)_{l}(\mu):=\frac{1}{h^{j}}\omega_{I_{l}^{j}}(\mu), \qquad 1\leq l\leq\binom{n}{j}. \tag{26}\] This gives a unitary operator whose adjoint is given by \[(\tilde{U}_{j,h}^{*}f)(\mu):=h^{j}\sum_{l=1}^{\binom{n}{j}}f_{l}(\mu)dx^{I_{l} ^{j}} \tag{27}\] Using the same definitions of (13), (17) and (20) together with (18) and (26) give us the commutative diagram \[\begin{CD}\ell^{2}(hX^{j})@>{U_{j}}>{}>\ell^{2}(h\mathbb{Z}^{n};\bigwedge^{ j}(h\mathbb{Z}^{n}))@>{\tilde{U}_{j,h}}>{}>\ell^{2}(h\mathbb{Z}^{n};\mathbb{C}^{ \binom{n}{j}})\\ @V{}V{d_{j,h}}V@V{}V{d_{j,h}}V@V{}V{\tilde{U}_{j+1,h}}V_{j,h}V_{j,h}^{*}\\ \ell^{2}(hX^{j+1})@>{U_{j+1}}>{}>\ell^{2}(h\mathbb{Z}^{n};\bigwedge^{j+1}(h \mathbb{Z}^{n}))@>{\tilde{U}_{j+1,h}}>{}>\ell^{2}(h\mathbb{Z}^{n};\mathbb{C}^{ \binom{n}{j+1}})\end{CD}\] Defining the diagonal operator \(\tilde{U}_{h}:\bigoplus_{j=0}^{n}\tilde{U}_{j,h}\) we are led to study the convergence of the operator \[H_{h}:=\tilde{U}_{h}U(d_{h}+d_{h}^{*})U^{*}\tilde{U}_{h}^{*}:\bigoplus_{j=0}^{n} \ell^{2}\Big{(}h\mathbb{Z}^{n};\mathbb{C}^{\binom{n}{j}}\Big{)}\to\bigoplus_{j=0 }^{n}\ell^{2}\Big{(}h\mathbb{Z}^{n};\mathbb{C}^{\binom{n}{j}}\Big{)}\.\] This operator can be written in the operator-valued matrix form \[H_{h}=\tilde{U}_{h}U\begin{pmatrix}0&d_{0,h}^{*}&0&\dots&0&0&0\\ d_{0,h}&0&d_{1,h}^{*}&\dots&0&0&0\\ 0&d_{1,h}&0&\dots&0&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\dots&0&d_{j-1,h}^{*}&0\\ 0&0&0&\dots&d_{j-1,h}&0&d_{j,h}^{*}\\ 0&0&0&\dots&0&d_{j,h}&0\end{pmatrix}U^{*}\tilde{U}_{h}^{*}.\] Now we turn to construct the passage to the continuum. For this, we denote an element in \(\bigoplus_{j=0}^{n}\ell^{2}\Big{(}h\mathbb{Z}^{n};\mathbb{C}^{\binom{n}{j}} \Big{)}\) by \(f_{j,l}\) for \(1\leq j\leq n;1\leq l\leq\binom{n}{j}\) and set \(F:\bigoplus_{j=0}^{n}\ell^{2}\Big{(}h\mathbb{Z}^{n};\mathbb{C}^{\binom{n}{j}} \Big{)}\to\bigoplus_{j=0}^{n}L^{2}\Big{(}h^{-1}\mathbb{T}^{n};\mathbb{C}^{ \binom{n}{j}}\Big{)}\) \[(Ff)_{j,l}(\xi):=h^{n}\sum_{\mu\in h\mathbb{Z}^{n}}e^{-2\pi i\xi\cdot\mu}f_{j,l }(\mu)\,\qquad 0\leq j\leq n,\quad 1\leq l\leq\binom{n}{j}.\] The next lemma describe \(FH_{h}F^{*}\) as matrix-value multiplication operator. This characterization will enable us to adapt use the results from [17] to obtain the desired limit. **Lemma 4.1**.: _The operator \(FH_{h}F^{*}\) can be written as a \(2^{n}\times 2^{n}\)- matrix whose coefficients are linear combinations of elements of the form:_ \[a_{h,l}(\xi):=\frac{(-1+e^{-2\pi ih\xi_{l}})}{h},\quad 1\leq l\leq n.\] Proof.: For fixed \(0\leq j\leq n\) let us consider the operator \(\tilde{d}_{j}\tilde{U}_{j,h}^{*}\) applied to \(f=(f_{j,l})\in\ell^{2}\Big{(}h\mathbb{Z}^{n};\mathbb{C}^{\binom{n}{j}}\Big{)}\). Using (20) and (27) \[(\tilde{d}_{j,h}\tilde{U}_{j,h}^{*}f)(\mu) =h^{j}\sum_{l}^{\binom{n}{j}}(\tilde{d}_{0,h}f_{j,l})(\mu)\wedge dx ^{I_{l}^{j}}\] \[=h^{j}\sum_{l}^{\binom{n}{j}}\sum_{\alpha=1}^{n}\left(f_{j,l}(\mu +h\delta_{\alpha})-f_{j,l}(\mu)\right)dx^{\alpha}\wedge dx^{I_{l}^{j}}\] \[=h^{j}\sum_{1\leq\overline{I}\leq\binom{n}{j}}\left(\sum_{ \begin{subarray}{c}\alpha,l\\ dx^{\alpha}\wedge dx^{I_{l}^{j}}=(\pm)dx^{I_{l}^{j+1}}\end{subarray}}\left(f_{j, l}(\mu+h\delta_{\alpha})-f_{j,l}(\mu)\right)\right)(\pm)dx^{I_{l}^{j+1}}.\] From this computation and (26) we see that the \((\tilde{l},l)\) coefficient of the \(\binom{n}{j+1}\times\binom{n}{j}\) operator matrix \(\tilde{U}_{j+1,h}\tilde{d}_{j}\tilde{U}_{j,h}^{*}\) is \[\sum_{\begin{subarray}{c}1\leq\alpha\leq n\\ dx^{\alpha}\wedge dx^{l^{j}}_{\tilde{l}}=(\pm)dx^{l^{j+1}}_{\tilde{l}}\end{subarray}} \frac{(f_{j,l}(\mu+h\delta_{\alpha})-f_{j,l}(\mu))}{h}.\] We finish the proof by recalling that \(F(f_{j,l}(\cdot+h\delta_{\alpha})-f_{j,l}(\cdot))=h\,a_{h,\alpha}Ff_{j,l}\). Our problem is to study the convergence of \((FH_{h}F^{*}-z)^{-1}\). Using Corollary 3.2 we immediately see that for \(\operatorname{Im}(z)>0\) \[(FH_{h}F^{*}-z)^{-1}(\xi)=\frac{1}{r_{z}(\xi)}FH_{h}F^{*}+\frac{z}{r_{z}(\xi)} \tag{28}\] where \[r_{z}(\xi):=\sum_{l=1}^{n}|a_{h,l}(\xi)|^{2}-z^{2}.\] ### The limit operator and some auxiliary results For the limit operator we will consider the operator exterior derivative \(\mathrm{d}\) in \(\Omega(\mathbb{R}^{n})\). Then, we define the operator \(H\) in \(\bigoplus_{j=0}^{n}L^{2}\Big{(}\mathbb{R}^{n};\mathbb{C}^{\binom{n}{j}}\Big{)}\) with domain \(\bigoplus_{j=0}^{n}\mathcal{H}^{1}\Big{(}\mathbb{R}^{n};\mathbb{C}^{\binom{n}{ j}}\Big{)}\), the first order Sobolev space, and is given by the operator-valued matrix \[H=\begin{pmatrix}0&\mathrm{d}_{0}^{*}&0&\dots&0&0&0\\ \mathrm{d}_{0}&0&\mathrm{d}_{1}^{*}&\dots&0&0&0\\ 0&\mathrm{d}_{1}&0&\dots&0&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\dots&0&\mathrm{d}_{j-1}^{*}&0\\ 0&0&0&\dots&\mathrm{d}_{j-1}&0&\mathrm{d}_{j}^{*}\\ 0&0&0&\dots&0&\mathrm{d}_{j}&0\end{pmatrix}\] Defining the Fourier transform \(\mathcal{F}:\bigoplus_{j=0}^{n}L^{2}\Big{(}\mathbb{R}^{n};\mathbb{C}^{\binom{n }{j}}\Big{)}\to\bigoplus_{j=0}^{n}L^{2}\Big{(}\mathbb{R}^{n};\mathbb{C}^{ \binom{n}{j}}\Big{)}\) by \[(\mathcal{F}f)_{j,l}(\xi)=\int_{\mathbb{R}^{n}}e^{-2\pi ix\cdot\xi}f_{j,l}(x)dx,\quad 1\leq j\leq n;1\leq l\leq\binom{n}{j},\] in a completely analogous manner as we did before, we can see that for \(\operatorname{Im}(z)>0\) \[(\mathcal{F}H\mathcal{F}^{*}-z)^{-1}(\xi)=\frac{1}{R_{z}(\xi)}\mathcal{F}H \mathcal{F}^{*}+\frac{z}{R_{z}(\xi)} \tag{29}\] where \(R_{z}(\xi):=4\pi^{2}|\xi|^{2}-z^{2}\). This time the coefficients of the matrix of \(\mathcal{F}H\mathcal{F}^{*}\) are obtained by replacing \(a_{h,l}\) of Lemma 4.1 by \[A_{l}(\xi):=2i\pi\xi_{l}\quad 1\leq l\leq n\.\] Next, using the construction from [17] define the partial isometry \(P_{h}:\bigoplus_{j=1}^{n}\left(L^{2}(\mathbb{R}^{n});\mathbb{C}^{\binom{n}{j}} \right)\to\bigoplus_{j=1}^{n}\left(\ell^{2}(h\mathbb{Z}^{n});\mathbb{C}^{\binom{ n}{j}}\right)\) \[(P_{h}u)(\mu)=h^{-2}\int_{\mathbb{R}^{n}}\overline{\varphi_{h,\mu}(x)}u(x)dx,\] where \(\varphi_{h,\mu}(x):=\varphi(h^{-1}(x-\mu))\), and \(\varphi\) is any function in \(\mathcal{S}(\mathbb{R}^{n})\) satisfying \[\sum_{\mu\in\mathbb{Z}^{n}}|\hat{\varphi}(\xi+\mu)|^{2}=1,\quad \xi\in\mathbb{R}^{n}, \tag{31}\] \[\operatorname{supp}(\hat{\varphi})\subset(-1,1)^{n}. \tag{30}\] Set also \(Q_{h}:=FP_{h}\mathcal{F}^{*}\). The following two lemmas are small modifications of [17, Lemma 2.2, Lemma 2.3]. For ease of reading, we include the main ideas of the proofs here. **Lemma 4.2**.: _For each \(z\in\mathbb{C}\setminus\mathbb{R}\), there exists a positive constant \(C\) such that_ \[\|(1-P_{h}^{*}P_{h})(H-z)^{-1}\|\leq Ch.\] Proof.: Since \((1-P_{h}^{*}P_{h})(H-z)^{-1}=\mathcal{F}^{*}[1-Q_{h}^{*}Q_{h}](\mathcal{F}H \mathcal{F}^{*}-z)^{-1}\mathcal{F}\), is enough to prove the result for each entry in the difference matrix of \([1-Q_{h}^{*}Q_{h}](\mathcal{F}H\mathcal{F}^{*}-z)^{-1}\). By (29) and Lemma 4.1 each entry of \((\mathcal{F}H\mathcal{F}^{*}-z)^{-1}\) is a linear combination of elements in \(\{zR_{z}(\xi)^{-1},A_{1}(\xi)R_{z}(\xi)^{-1},\ldots,A_{n}(\xi)R_{z}(\xi)^{-1}\}\). We will just study \(A_{l}(\xi)R_{z}(\xi)^{-1}\) for a fixed \(l\) since the proof for \(zR_{z}(\xi)^{-1}\) is similar. For \(\psi\in L^{2}(\mathbb{R}^{n})\) set \(g(\xi)=\frac{A_{l}(\xi)}{|2\pi\xi|^{2}-z}\psi(\xi)\). It is possible to show that (see the Appendix in [17]) \[(1-Q_{h}^{*}Q_{h})\,g(\xi) =g(\xi)-\sum_{\mu\in\{0,\pm 1\}^{n}}\hat{\varphi}(h\xi)\overline{ \hat{\varphi}(h\xi+\mu)}g(\xi+h^{-1}\mu)\] \[=(1-|\hat{\varphi}(h\xi)|^{2})g(\xi)-\sum_{0\neq\mu\in\{0,\pm 1 \}^{n}}\hat{\varphi}(h\xi)\overline{\hat{\varphi}(h\xi+\mu)}g(\xi+h^{-1}\mu).\] For the first term, by (30) and (31) \[\|(1-|\hat{\varphi}(h\xi)|^{2})g\|\leq\sup_{|\xi|>h^{-1}\delta}\left|\frac{A_{ l}(\xi)}{|2\pi\xi|^{2}-z}\right|\|\psi\|\leq Ch\|\psi\|,\] for some \(\delta>0\). For the terms in the sum using (30) and (31) again we se that \(\hat{\varphi}(h\xi)\overline{\hat{\varphi}(h\xi+\mu)}=0\) for \(|h\xi+\mu|\leq\delta\). Thus for each \(\mu\in\{0,\pm 1\}^{n}\) \[\|\hat{\varphi}(h\xi)\overline{\hat{\varphi}(h\xi+\mu)}g(\xi+h^{-1}\mu)\|\leq \sup_{|\xi+h^{-1}\mu|>\delta}\left|\frac{A_{l}(\xi+h^{-1}\mu)}{|2\pi(\xi+h^{- 1}\mu)|^{2}-z}\right|\|\psi\|\leq Ch\|\psi\|.\] **Lemma 4.3**.: _For each \(z\in\mathbb{C}\setminus\mathbb{R}\), there exists a positive constant \(C\) such that_ \[\|(H_{h}-z)^{-1}P_{h}-P_{h}(H-z)^{-1}\|\leq Ch\] Proof.: First write \[\|(H_{h}-z)^{-1}P_{h}-P_{h}(H-z)^{-1}\|=\|Q_{h}^{*}F(H_{h}-z)^{-1}F^{*}Q_{h}-Q_ {h}^{*}Q_{h}\mathcal{F}(H-z)^{-1}\mathcal{F}^{*}\|.\] As is the previous lemma it is enough to prove that the inequality holds for each entry in the matrix of \(Q_{h}^{*}F(H_{h}-\lambda)^{-1}F^{*}Q_{h}-Q_{h}^{*}Q_{h}\mathcal{F}(H-\lambda)^{-1} \mathcal{F}^{*}\). Then, taking into account (28) and (29), for a fixed \(l\), we consider \(\|Q_{h}^{*}\dfrac{a_{h,l}}{r_{z}}Q_{h}-Q_{h}^{*}Q_{h}\dfrac{A_{l}}{R_{z}}\|\). Let \(\psi\) in \(\mathcal{S}(\mathbb{R}^{n})\), then it is possible to show that (see the Appendix in [17]) \[\left(Q_{h}^{*}\dfrac{a_{l}}{r_{z}}Q_{h}-Q_{h}^{*}Q_{h}\dfrac{A_{l}}{R_{z}} \right)\psi=\sum_{\mu\in\{0,\pm 1\}^{n}}\hat{\varphi}(h\xi)\overline{\hat{ \varphi}(h\xi+\mu)}\mathcal{B}_{h}(\xi+h^{-1}\mu)\psi(\xi+h^{-1}\mu), \tag{32}\] where \(\mathcal{B}_{h}=\dfrac{a_{h,l}}{r_{z}}-\dfrac{A_{l}}{R_{z}}=a_{h,l} \dfrac{R_{z}-r_{z}}{r_{z}R_{z}}+\dfrac{a_{h,l}-A_{l}}{R_{z}}\). Using the Taylor expansion and (30) and (31) we easily get \[|\hat{\varphi}(h\xi)|^{2}\left|\dfrac{a_{h,l}-A_{l}}{R_{z}}\right|\leq Ch|\hat {\varphi}(h\xi)|^{2},\] and \[|\hat{\varphi}(h\xi)|^{2}|a_{h,l}|\left|\dfrac{R_{z}-r_{z}}{r_{z}R_{z}}\right| \leq Ch|\hat{\varphi}(h\xi)|^{2}.\] In the same way, from (30) and (31), for \(\mu\in\{0,\pm 1\}^{n}\) \[|\hat{\varphi}(h\xi)\overline{\hat{\varphi}(h\xi+\mu)}D_{h}(\xi+h^{-1}\mu)| \leq Ch,\quad\xi\in\mathbb{R}^{n}. \tag{33}\] Thus, using (33) in (32) we finish the proof. ### Proof of Theorem 1.1 Let \(T_{h}:\ell^{2}(X_{h})\to\bigoplus_{j=0}^{n}L^{2}(\mathbb{R}^{n};\mathbb{C}^{ \binom{n}{j}})\) be the operator \[T_{h}:=P_{h}^{*}\tilde{U}_{h}U\.\] Being \(\tilde{U}_{h}U\) unitary, and \(P_{h}\) a partial isometry with full range, \(T_{h}\) is an isometry. Noticing that \(T_{h}(d_{h}+d_{h}^{*}-z)T_{h}^{*}=P_{h}^{*}(H_{h}-z)P_{h}\) we write \[P_{h}^{*}(H_{h}-z)^{-1}P_{h}-(H-z)^{-1}=P_{h}^{*}\left[(H_{h}-\lambda)^{-1}P_{ h}-P_{h}(H-\lambda)^{-1}\right]-(1-P_{h}^{*}P_{h})(H-\lambda)^{-1}\] from where one can conclude by taking into account Lemmas 4.2 and 4.3. ## Appendix A Proof of Proposition 2.7 Because the definition of \(\partial\) involves \((_{i}\hat{s})^{*}\) we need a better understanding of how the involution and restriction are related before attempting to prove Proposition 2.7. **Lemma A.1**.: _Let \(\hat{s}\) be given by (9). Let \(1\leq i_{0}\leq j\) and \(1\leq i_{1}\leq j-1\). Then the following statements hold:_ 1. \({}_{i_{0}}(\hat{s}^{*})=(_{j-i_{0}+1}\hat{s})^{*}\)__ 2. \((_{i_{0}}(\hat{s}^{*}))^{*}=(_{j-i_{0}+1}\hat{s})\)__ 3. \[{}_{i_{1}}(_{i_{0}}\hat{s})=\begin{cases}{}_{(i_{0}-1)}(_{i_{1}}\hat{s})&\text { if }i_{1}<i_{0}\ ;\\ {}_{i_{0}}((_{i_{1}+1})\hat{s})&\text{ if }i_{0}\leq i_{1}\.\end{cases}\] Proof.: Let \(1\leq i\leq j-1\), then on one hand we have \[{}_{i_{0}}(\hat{s}^{*})(i)= \begin{cases}\hat{s}^{*}(i)&i<i_{0}\\ \hat{s}^{*}(i+1)&i_{0}\leq i\end{cases}\] \[= \begin{cases}\hat{s}(j-i+1)&i<i_{0}\\ \hat{s}(j-i)&i_{0}\leq i\end{cases}\] while in the other we can compute \[(_{j-i_{0}+1}\hat{s})^{*}(i)= \,_{j-i_{0}+1}\hat{s}(j-i)\] \[= \,\begin{cases}\hat{s}(j-i)&j-i<j-i_{0}+1\\ \hat{s}(j-i+1)&j-i_{0}+1\leq j-i\end{cases}\] \[= \,\begin{cases}\hat{s}(j-i)&i_{0}\leq i\\ \hat{s}(j-i+1)&i<i_{0}\end{cases}\] which proves Item 1. Further, Item 2 is a direct consequence of Item 1. To prove Item 3 we fix \(1\leq i_{0}\leq j\) and \(1\leq i_{1}\leq j-1\) and first consider the case \(i_{1}<i_{0}\). We first compute \[{}_{i_{1}}(_{i_{0}}\hat{s})(i)= \,\begin{cases}{}_{i_{0}}\hat{s}(i)&i<i_{1}\\ {}_{i_{0}}\hat{s}(i+1)&i_{1}\leq i\end{cases}\] \[= \,\begin{cases}\hat{s}(i)&i<i_{1}\\ \hat{s}(i+1)&i_{1}\leq i\leq i_{0}-2\\ \hat{s}(i+2)&i_{0}-1\leq i\end{cases}\] and then compare it with \[{}_{i_{0}-1}(_{i_{1}}\hat{s})(i)= \,\begin{cases}{}_{i_{1}}\hat{s}(i)&i<i_{0}-1\\ {}_{i_{1}}\hat{s}(i+1)&i_{0}-1\leq i\end{cases}\] which will coincide with the previous expression and hence show that Item 3 holds for \(i_{1}<i_{0}\). The computations for \(i_{0}\leq i_{1}\) follow the same structure and are omitted. Proof of Proposition 2.7.: Let us first show that \(\partial(\overline{s})=\overline{\partial(s)}\). By (10) and using Items 1 and 2 from Lemma A.1 we can check that: \[\partial(\overline{s}) =\cup_{i=1}^{j}\left\{(-1)^{j-i}(\lfloor\overline{s}\rfloor;( \hat{s}^{*}))\right\}\bigcup\cup_{i=1}^{j}\{(-1)^{i}(\lceil\overline{s} \rceil;(_{i}(\hat{s}^{*}))^{*})\}\] \[=\cup_{i=1}^{j}\left\{(-1)^{j-i}(\lceil s\rceil\lfloor;(_{j-i+1} \hat{s})^{*})\right\}\bigcup\cup_{i=1}^{j}\{(-1)^{i}(\lfloor s\rfloor;_{j-i+1} \hat{s})\}\] \[=\cup_{m=1}^{j}\frac{\{(-1)^{m-1}(\lceil s\rceil\lfloor;(_{m} \hat{s})^{*})\}\bigcup\cup_{m=1}^{j}\{(-1)^{j-m+1}(\lfloor s\rfloor;_{m}\hat{ s})\}}{\{(-1)^{m}(\lceil s\rceil\lfloor;(_{m}\hat{s})^{*})\}\bigcup\cup_{m=1}^{j} \overline{\{(-1)^{j-m}(\lfloor s\rfloor;_{m}\hat{s})\}}=\overline{\partial(s)}\.\] It remains to show that \(\partial(\partial(s))=\overline{\partial(\partial(s))}\). For this we introduce the following notation for \(1\leq i\leq j\) and \(s\in X^{j}\) \[A_{i}(s):=(-1)^{j-i}(\lfloor s\rfloor;_{i}\hat{s})\quad B_{i}(s):=(-1)^{i}( \lceil s\rceil:(_{i}\hat{s})^{*})\.\] It follows that \[\partial(\partial(s))=\cup_{l=1}^{j-1}\cup_{i=1}^{j}\left\{A_{l}(A_{i}(s)) \cup B_{l}(A_{i}(s))\cup A_{l}(B_{i}(s))\cup B_{l}(B_{i}(s))\right\}\.\] Let us now consider \(s\in X^{j}\). Without lose of generality we will assume that \(s=(\lfloor s\rfloor;\delta_{1},\ldots,\delta_{j})\). From the proof of Lemma A.1 on can see that if \(i_{1}<i_{0}\), then \({}_{i_{1}}({}_{i_{0}}\hat{s})\) skips the \(i_{0}\) and \(i_{1}\) terms. We start by identifying the faces \(r\in\partial(\partial(s))\) that satisfies \(\lfloor r\rfloor=\lfloor s\rfloor\). There are given for \(1\leq i\leq j\) and \(1\leq l\leq j-1\) such that \(j-i\) is even and \(j-l\) is odd by \[A_{l}(A_{i})(s)=A_{l}((-1)^{j-i}(\lfloor s\rfloor;{}_{i}\hat{s})=(-1)^{j-1-l}( \lfloor s\rfloor;{}_{l}(_{i}\hat{s}))=\begin{cases}(\lfloor s\rfloor;{}_{l}(_ {i}\hat{s}))&\text{ if }l<i\\ (\lfloor s\rfloor;{}_{i}(_{l+1}\hat{s}))&\text{ if }i\leq l\end{cases}\ ;\] and for \(1\leq i\leq j\) and \(1\leq l\leq j-1\) such that \(j-i\) is odd and \(l\) is even by \[B_{l}(A_{i})(s)=B_{l}((-1)^{j-i}(\lfloor s\rfloor;{}_{i}\hat{s})= B_{l}((\lceil s\rceil-\delta_{i};(_{i}\hat{s})^{*})\] \[= (-1)^{l}(\lfloor s\rfloor;(_{l}((_{i}\hat{s})^{*}))^{*})\] \[= (\lfloor s\rfloor;{}_{j-l}(_{i}\hat{s}))=\begin{cases}(\lfloor s \rfloor;{}_{j-l}(_{i}\hat{s}))&\text{ if }j-l<i\\ (\lfloor s\rfloor;{}_{i}(_{j-l+1}\hat{s}))&\text{ if }i\leq j-l\end{cases}\.\] In both cases we have faces that start at \(\lfloor s\rfloor\) but have two directions less than \(s\). To check that we have all possible \(\frac{j*(j-1)}{2}\) combinations we first consider the case \(j=2k\) for \(k\in\mathbb{N}\). We have that the faces \(r\in\partial(\partial(s))\) that satisfies \(\lfloor r\rfloor=\lfloor s\rfloor\) are given by \[\begin{split}\left\{\cup_{p=1}^{k}&\cup_{m=1}^{k}A_{2p- 1}(A_{2m}(s))\right\}\bigcup\left\{\cup_{p=1}^{k-1}\cup_{m=1}^{k}B_{2p}(A_{2m- 1}(s))\right\}\\ &=\left\{\cup_{p=1}^{k}\cup_{m=p}^{k}(\lfloor s\rfloor;{}_{2p-1}( _{2m}\hat{s}))\right\}\bigcup\left\{\cup_{p=1}^{k}\cup_{m=1}^{p-1}(\lfloor s \rfloor;{}_{2m}(_{2p}\hat{s}))\right\}\\ &\quad\bigcup\left\{\cup_{p=1}^{k-1}\cup_{m=k-p+1}^{k}(\lfloor s \rfloor;{}_{2k-2p}(_{2m-1}\hat{s}))\right\}\bigcup\left\{\cup_{p=1}^{k-1} \cup_{m=1}^{k-p}(\lfloor s\rfloor;{}_{2m-1}(_{2k-2p+1}\hat{s}))\right\}\\ &=\left\{\cup_{p=1}^{k}\cup_{m=p}^{k}(\lfloor s\rfloor;{}_{2p-1}( _{2m}\hat{s}))\right\}\bigcup\left\{\cup_{p=1}^{k}\cup_{m=1}^{p-1}(\lfloor s \rfloor;{}_{2m}(_{2p}\hat{s}))\right\}\\ &\quad\bigcup\left\{\cup_{p=1}^{k-1}\cup_{m=p+1}^{k}(\lfloor s \rfloor;{}_{2p}(_{2m-1}\hat{s}))\right\}\bigcup\left\{\cup_{p=1}^{k-1}\cup_{ m=1}^{p}(\lfloor s\rfloor;{}_{2m-1}(_{2p+1}\hat{s}))\right\}\\ &=\left\{\cup_{l=1}^{j-1}\cup_{i=l+1}^{j}(\lfloor s\rfloor;{}_{l}(_{i} \hat{s}))\right\}.\end{split}\] Similar computations work for \(j=2k+1\) as well for the treatment of the other types of faces. We do not enter into the detail of each one but we will discuss where the terms come from with the aid of the following table. \begin{tabular}{||c|c c c c|} \hline & \(r\) & \(i\) such that & \(l\) such that & \(\lfloor r\rfloor\) \\ \hline \hline 1 & \(A_{l}(A_{i}(s))\) & \(j-i\) is even & \(j-l\) is odd & \(\lfloor s\rfloor\) \\ \hline 2 & \(B_{l}(A_{i}(s))\) & \(j-i\) is odd & \(l\) is even & \(\lfloor s\rfloor\) \\ \hline 3 & \(A_{l}(A_{i}(s))\) & \(j-i\) is even & \(j-l\) is even & \(\lceil s\rceil-\delta_{x_{1}}-\delta_{x_{2}}\) \\ \hline 4 & \(B_{l}(A_{i}(s))\) & \(j-i\) is odd & \(l\) is odd & \(\lceil s\rceil-\delta_{x_{1}}-\delta_{x_{2}}\) \\ \hline 5 & \(A_{l}(A_{i}(s))\) & \(j-i\) is odd & \(j-l\) is even & \(\lfloor s\rfloor+\delta_{x_{1}}\) \\ \hline 6 & \(B_{l}(A_{i}(s))\) & \(j-i\) is even & \(l\) is odd & \(\lfloor s\rfloor+\delta_{x_{1}}\) \\ \hline 7 & \(A_{l}(B_{i}(s))\) & \(i\) is odd & \(j-l\) is odd & \(\lfloor s\rfloor+\delta_{x_{1}}\) \\ \hline 8 & \(B_{l}(B_{i}(s))\) & \(i\) is even & \(l\) is even & \(\lfloor s\rfloor+\delta_{x_{1}}\) \\ \hline 9 & \(A_{l}(A_{i}(s))\) & \(j-i\) is odd & \(j-l\) is odd & \(\lfloor s\rfloor-\delta_{x_{1}}\) \\ \hline 10 & \(B_{l}(A_{i}(s))\) & \(j-i\) is even & \(l\) is even & \(\lfloor s\rfloor-\delta_{x_{1}}\) \\ \hline 11 & \(A_{l}(B_{i}(s))\) & \(i\) is odd & \(j-l\) is even & \(\lfloor s\rfloor-\delta_{x_{1}}\) \\ \hline 12 & \(B_{l}(B_{i}(s))\) & \(i\) is even & \(l\) is odd & \(\lceil s\rceil-\delta_{x_{1}}\) \\ \hline 13 & \(A_{l}(B_{i}(s))\) & \(i\) is even & \(j-l\) is even & \(\lfloor s\rfloor+\delta_{x_{1}}+\delta_{x_{2}}\) \\ \hline 14 & \(B_{l}(B_{i}(s))\) & \(i\) is odd & \(l\) is odd & \(\lfloor s\rfloor+\delta_{x_{1}}+\delta_{x_{2}}\) \\ \hline 15 & \(A_{l}(B_{i}(s))\) & \(i\) is even & \(j-l\) is odd & \(\lceil s\rceil\) \\ \hline 16 & \(B_{l}(B_{i}(s))\) & \(i\) is odd & \(l\) is even & \(\lceil s\rceil\) \\ \hline \end{tabular} The first two rows corresponds to the calculations we have already done. Rows 3 and 4 will give rise two the opposite faces of the first two rows. In all, the first two rows are all the faces that contain \(\lfloor s\rfloor\), either as \(\lfloor r\rfloor\) or \(\lceil r\rceil\). The next 8 rows describe the faces that contain a vertices at distance 1 from \(\lfloor s\rfloor\). Again we have that the first rows, rows 5 to 8, are positively oriented while the second half, rows 9 to 12 correspond to theirs opposite faces. Finally, rows 13 to 16 are symmetric to the the first 4 rows but its faces contain \(\lceil s\rceil\) instead of \(\lfloor s\rfloor\). **Data availability:** There are no data associated with this manuscript. **Conflict of interest:** On behalf of all authors, the corresponding author states there is no conflict of interest. **Acknowledgments:** P. Miranda was supported by the Chilean Fondecyt Grant 1201857. D. Parra was supported by the Chilean Fondecyt Grant 3210686.
2309.07818
Self-adjoint Momentum Operator for a Particle Confined in a Multi-Dimensional Cavity
Based on the recent construction of a self-adjoint momentum operator for a particle confined in a one-dimensional interval, we extend the construction to arbitrarily shaped regions in any number of dimensions. Different components of the momentum vector do not commute with each other unless very special conditions are met. As such, momentum measurements should be considered one direction at a time. We also extend other results, such as the Ehrenfest theorem and the interpretation of the Heisenberg uncertainty relation to higher dimensions.
A. Mariani, U. -J. Wiese
2023-09-14T16:14:21Z
http://arxiv.org/abs/2309.07818v1
# Self-adjoint Momentum Operator for a Particle Confined in a Multi-Dimensional Cavity ###### Abstract Based on the recent construction of a self-adjoint momentum operator for a particle confined in a one-dimensional interval, we extend the construction to arbitrarily shaped regions in any number of dimensions. Different components of the momentum vector do not commute with each other unless very special conditions are met. As such, momentum measurements should be considered one direction at a time. We also extend other results, such as the Ehrenfest theorem and the interpretation of the Heisenberg uncertainty relation to higher dimensions. Introduction Momentum is a central concept in both classical and quantum physics, and is fundamentally related to translational invariance. It finds numerous applications in both finite and infinite systems where translational invariance is a symmetry. In finite regions of space, translational invariance is most commonly preserved by choosing appropriate boundary conditions such as periodic boundary conditions. However, other choices of boundary conditions which may arise in physical systems of interest (such as for example Dirichlet or Neumann boundary conditions) explicitly break translational invariance. In these systems, one might therefore expect momentum not to be a useful physical quantity. For a classical particle, in the bulk, away from the boundaries, one can ignore the effect of the boundary and thus still employ the usual notion of momentum, but this is more complicated in quantum mechanics. In this case, the boundary introduces strong ultraviolet effects which need to be properly understood. Consider a quantum mechanical particle that is confined to a finite region \(\Omega\in\mathbb{R}^{d}\). Then the operator \(-i\vec{\nabla}\) (in units where \(\hbar=1\)), which describes the momentum of a particle in the Hilbert space \(L^{2}(\mathbb{R}^{d})\) of square-integrable functions over \(\mathbb{R}^{d}\), is not self-adjoint in \(L^{2}(\Omega)\) (at least if only local physical boundary conditions are imposed). Self-adjointness (and not Hermiticity alone) is essential to ensure that an operator has a spectrum of real eigenvalues and a corresponding system of orthonormal eigenfunctions, at least in a generalized sense [1]. In addition to Hermiticity (which results if an operator \(A\) and its Hermitean conjugate \(A^{\dagger}\) act in the same way), self-adjointness requires that the corresponding domains coincide, \(D(A^{\dagger})=D(A)\)[2, 3, 4]. The domain of an operator is usually characterized by square-integrability conditions on derivatives of the wave function as well as by boundary conditions, which are characterized by a family of self-adjoint extension parameters [5, 6, 7]. The consistent interpretation of quantum mechanical measurements of an observable \(A\), which return one of its eigenvalues and collapse the wave function onto the corresponding eigenfunction, indeed requires that \(A\) is self-adjoint. Since the operator \(-i\vec{\nabla}\) is not self-adjoint in \(L^{2}(\Omega)\), the problem is usually considered in \(L^{2}(\mathbb{R}^{d})\). Then the unquantized eigenvalues \(\vec{k}\in\mathbb{R}^{d}\) form a continuous spectrum. Since the corresponding eigenstates are plane waves \(\exp(i\vec{k}\cdot\vec{x})\), which exist everywhere in space with the same probability density, a momentum measurement of this type transfers an infinite amount of energy to the particle and catapults it out of the finite region \(\Omega\). Recently, a self-adjoint momentum operator for a particle that is strictly confined to a finite 1-d interval \(\Omega=[-\frac{L}{2},\frac{L}{2}]\), even after a momentum measurement, has been constructed [8, 9]. In that case, the momentum eigenvalues are quantized. The key to this construction is the doubling of the Hilbert space to \(L^{2}(\Omega)\times\mathbb{C}^{2}\), which was originally motivated by an ultraviolet lattice regularization, and led to a resolution of this long-standing puzzle also directly in the continuum. While sharp impenetrable boundaries are a mathematical idealization, the physical model of the particle in a box and the new momentum concept may be applied as an effective description to many physical systems which are confined inside a limited region of space, such as ultra-cold atoms confined in an optical box trap [10], electrons in a quantum dot [11], domain wall fermions in a higher-dimensional space [12, 13] or the phenomenological MIT bag model [14, 15, 16] for confined quarks and gluons. Here we extend the construction of the new self-adjoint momentum to higher dimensions. First of all, we carefully consider the problem of the simultaneous measurement of different components of the momentum vector. We find that different components generally do not commute, unless very special conditions are met. In particular, this implies that in a bounded region of space, momentum measurements should only be considered one direction at a time. Moreover, we show higher-dimensional generalizations of the Ehrenfest theorem and the Heisenberg-Robertson-Schrodinger uncertainty relation [17, 18, 19, 20, 21, 22, 23] for the new momentum operator. ## 2 Boundary Conditions and the New Momentum Concept in Higher Dimensions In this section we discuss the boundary conditions which make the Hamiltonian \(H\) and the new momentum operator \(\vec{p}_{R}\) self-adjoint. This extends the discussion of previous works for the analogous one-dimensional operators [8, 23], to which we point the interested reader for further details. ### Self-adjoint Hamiltonian Let us consider the Hamiltonian \[H=-\tfrac{1}{2m}\Delta+V(\vec{x}) \tag{2.1}\] for \(\vec{x}\in\Omega\). We assume that \(V(\vec{x})\) is a non-singular potential. Performing two partial integrations, one obtains \[\langle H^{\dagger}\chi|\Psi\rangle =\langle\chi|H\Psi\rangle=\] \[=\langle H\chi|\Psi\rangle+\frac{1}{2m}\int_{\partial\Omega}d \vec{n}\cdot\left[\vec{\nabla}\chi(\vec{x})^{*}\Psi(\vec{x})-\chi(\vec{x})^{* }\vec{\nabla}\Psi(\vec{x})\right]. \tag{2.2}\] Hermiticity of \(H\) requires the boundary term to vanish. On the other hand, locality requires that the boundary conditions do not relate the values of the wave function (or its derivatives) at different points in space. We impose local Robin boundary conditions \[\gamma(\vec{x})\Psi(\vec{x})+\vec{n}(\vec{x})\cdot\vec{\nabla}\Psi(\vec{x})=0\,\quad\vec{x}\in\partial\Omega. \tag{2.3}\] Here \(\vec{n}(\vec{x})\) is the unit-vector normal to the boundary, which conventionally points outwards. Dirichlet boundary conditions, \(\Psi(\vec{x})=0\), correspond to \(\gamma(\vec{x})\to\infty\), and Neumann boundary condition, \(\vec{n}(\vec{x})\cdot\vec{\nabla}\Psi(\vec{x})=0\), correspond to \(\gamma(\vec{x})=0\). Wave functions that obey eq.(2.3) and whose Laplacian is square-integrable belong to the domain \(D(H)\). Inserting eq.(2.3) into the boundary term in eq.(2.2), the Hermiticity condition reads \[\int_{\partial\Omega}d^{d-1}x\left[\vec{n}(\vec{x})\cdot\vec{\nabla}\chi(\vec{ x})^{*}+\gamma(\vec{x})\chi(\vec{x})^{*}\right]\Psi(\vec{x})=0. \tag{2.4}\] Since \(\Psi(\vec{x})\) can still take arbitrary values, this implies \[\gamma(\vec{x})^{*}\chi(\vec{x})+\vec{n}(\vec{x})\cdot\vec{\nabla}\chi(\vec{x })=0. \tag{2.5}\] This characterizes the domain \(D(H^{\dagger})\) of \(H^{\dagger}\) (which acts on \(\chi\)). The domains \(D(H^{\dagger})\) and \(D(H)\) coincide only if \(\gamma(\vec{x})^{*}=\gamma(\vec{x})\in\mathbb{R}\). This defines a family of self-adjoint extensions of \(H\), which are characterized by a self-adjoint extension parameter \(\gamma(\vec{x})\) for each point \(\vec{x}\in\partial\Omega\) in the boundary of the region. The boundary conditions eq.(2.3) ensure that the probability current \[\vec{j}(\vec{x})=\frac{1}{2mi}[\Psi(\vec{x})^{*}\vec{\nabla}\Psi(\vec{x})- \vec{\nabla}\Psi(\vec{x})^{*}\Psi(\vec{x})]\, \tag{2.6}\] does not flow outside the region \(\Omega\). This follows because its perpendicular component \(\vec{n}(\vec{x})\cdot\vec{j}(\vec{x})\) obeys \[\vec{n}(\vec{x})\cdot\vec{j}(\vec{x})= \frac{1}{2mi}[\Psi(\vec{x})^{*}\vec{n}(\vec{x})\cdot\vec{\nabla} \Psi(\vec{x})-\vec{n}(\vec{x})\cdot\vec{\nabla}\Psi(\vec{x})^{*}\Psi(\vec{x} )]=\] \[=\frac{1}{2mi}[-\Psi(\vec{x})^{*}\gamma(\vec{x})\Psi(\vec{x})+ \gamma(\vec{x})^{*}\Psi(\vec{x})^{*}\Psi(\vec{x})]=0. \tag{2.7}\] Self-adjointness is hence directly responsible for ensuring that the particle remains confined in the finite region. ### Self-adjoint momentum While it is straightforward to construct self-adjoint extensions of the Hamiltonian, for the operator \(A=-i\vec{\nabla}\) this is not possible in a physically meaningful way. In fact, performing a partial integration, one obtains \[\langle(-i\vec{\nabla})^{\dagger}\chi|\Psi\rangle =\langle\chi|(-i\vec{\nabla})\Psi\rangle=\] \[=\langle(-i\vec{\nabla})\chi|\Psi\rangle-i\int_{\partial\Omega}d ^{d-1}x\,\vec{n}(\vec{x})\chi(\vec{x})^{*}\Psi(\vec{x}). \tag{2.8}\] Hermiticity requires that the boundary term vanishes. For physical reasons we again limit ourselves to local boundary conditions, which do not relate the wave function values at physically distinct points. Hermiticity then results for Dirichlet boundary conditions, \(\Psi(\vec{x})=0\), \(\vec{x}\in\partial\Omega\), which define the domain \(D(-i\vec{\nabla})\). However, as soon as \(\Psi(\vec{x})\) is fixed to zero at the boundary, \(\chi(\vec{x})\) can still take arbitrary values. As a consequence, the domain of \((-i\vec{\nabla})^{\dagger}\) (which acts on \(\chi(\vec{x})\)) remains unrestricted, \(D((-i\vec{\nabla})^{\dagger})\supset D(-i\vec{\nabla})\). Since the two domains do not coincide, although with Dirichlet boundary conditions \(-i\vec{\nabla}\) is Hermitean, it is not self-adjoint. Consequently, it does not qualify as a physically acceptable momentum operator in the Hilbert space \(L^{2}(\Omega)\). As pointed out recently [8, 9], a physically and mathematically satisfactory momentum operator can be defined in a doubled Hilbert space with 2-component wave functions \[\vec{p}_{R}=-i\left(\begin{array}{cc}0&\vec{\nabla}\\ \vec{\nabla}&0\end{array}\right)=-i\sigma_{1}\vec{\nabla}\,\qquad\Psi(\vec{x})= \left(\begin{array}{c}\Psi_{e}(\vec{x})\\ \Psi_{o}(\vec{x})\end{array}\right). \tag{2.9}\] Besides \(\vec{p}_{R}\), the momentum operator \(\vec{p}=\vec{p}_{R}+i\vec{p}_{I}\) also has an anti-Hermitean contribution \(i\vec{p}_{I}\), which we will discuss later. First we concentrate on the self-adjointness of \(\vec{p}_{R}\). Properly speaking \(\vec{p}_{R}\) is a tuple of operators, and only the momentum in a specific direction is an operator on the Hilbert space. Therefore we consider the momentum in direction \(\hat{k}\), that is the operator \(\hat{k}\cdot\vec{p}_{R}\), which is indeed a map \(D(\hat{k}\cdot\vec{p}_{R})\rightarrow{\cal H}\) from its domain \(D(\hat{k}\cdot\vec{p}_{R})\subset{\cal H}\) to the Hilbert space \({\cal H}\). By partial integration one obtains \[\langle(\hat{k}\cdot\vec{p}_{R})^{\dagger}\chi|\Psi\rangle=\langle \chi|(\hat{k}\cdot\vec{p}_{R})\Psi\rangle=\] \[=\langle(\hat{k}\cdot\vec{p}_{R})\chi|\Psi\rangle-i\int_{\partial \Omega}d^{d-1}x[\chi_{e}(\vec{x})^{*}\Psi_{o}(\vec{x})+\chi_{o}(\vec{x})^{*} \Psi_{e}(\vec{x})](\hat{k}\cdot\vec{n}(\vec{x})). \tag{2.10}\] If the operator \(\hat{k}\cdot\vec{p}_{R}\) is to be Hermitean, then the boundary term must vanish. Locality then implies that \[[\chi_{e}(\vec{x})^{*}\Psi_{o}(\vec{x})+\chi_{o}(\vec{x})^{*}\Psi_{e}(\vec{x} )](\hat{k}\cdot\vec{n}(\vec{x}))=0. \tag{2.11}\] As a consequence of eq.(2.11), any boundary condition need only be imposed on the set of points \(\vec{x}\) such that \(\hat{k}\cdot\vec{n}(\vec{x})\neq 0\). Thus we have two separate situations. If \(\hat{k}\cdot\vec{n}(\vec{x})=0\) only on a set of isolated points, then the condition eq.(2.11) is equivalent to \[\chi_{e}(\vec{x})^{*}\Psi_{o}(\vec{x})+\chi_{o}(\vec{x})^{*}\Psi_{e}(\vec{x})=0 \tag{2.12}\] everywhere on the boundary, and as such, we can now impose the boundary conditions \[\Psi_{o}(\vec{x})=\lambda(\vec{x})\Psi_{e}(\vec{x})\,\quad\vec{x}\in\partial \Omega\, \tag{2.13}\] which constrain the domain \(D(\hat{k}\cdot\vec{p}_{R})\). Inserting these relations in the square bracket in eq.(2.11), the Hermiticity condition takes the form \[[\chi_{e}(\vec{x})^{*}\lambda(\vec{x})+\chi_{o}(\vec{x})^{*}]\Psi_{e}(\vec{x}) =0\,\qquad\vec{x}\in\partial\Omega. \tag{2.14}\] Since \(\Psi_{e}(\vec{x})\) can take arbitrary values, this implies \[\chi_{o}(\vec{x})=-\lambda(\vec{x})^{*}\chi_{e}(\vec{x})\,\qquad\vec{x}\in \partial\Omega. \tag{2.15}\] The self-adjointness of \(\hat{k}\cdot\vec{p}_{R}\) requires \(D(\hat{k}\cdot\vec{p}_{R}^{\dagger})=D(\hat{k}\cdot\vec{p}_{R})\), which implies \(\lambda(\vec{x})=-\lambda(\vec{x})^{*}\) such that \(\lambda(\vec{x})\in i\mathbb{R}\). Hence, the self-adjoint extensions of \(\hat{k}\cdot\vec{p}_{R}\) are characterized by a purely imaginary parameter \(\lambda(\vec{x})\) at each point on the boundary. Note however that the parameter \(\lambda\) also depends on \(\hat{k}\), that is momentum operators in different directions may have different self-adjoint extension parameters. On the other hand, depending on the choice of domain \(\Omega\) and direction \(\hat{k}\), it is also possible that \(\hat{k}\cdot\vec{n}(\vec{x})=0\) on a subset of \(\partial\Omega\) of non-zero measure. This occurs whenever \(\partial\Omega\) includes hyperplanes of codimension 1, for example straight lines in \(2D\), planes in \(3D\), etc. When this occurs, for \(\hat{k}\cdot\vec{p}_{R}\) to be self-adjoint, both \(\Psi\) and \(\chi\) must satisfy the usual boundary conditions \(\Psi_{o}(\vec{x})=\lambda(\vec{x})\Psi_{e}(\vec{x})\) with \(\lambda(\vec{x})\in i\mathbb{R}\) on every point \(\vec{x}\in\partial\Omega\) such that \(\hat{k}\cdot\vec{n}(\vec{x})\neq 0\). On the other hand, no boundary conditions at all are imposed at those points \(\vec{x}\) where \(\hat{k}\cdot\vec{n}(\vec{x})=0\). As explained in previous work [8, 23] in order to recover the original physics from the doubled Hilbert space \(L^{2}(\Omega)\times\mathbb{C}^{2}\), the Hamiltonian needs to be appropriately modified so that only those states \(|\Psi\rangle\) with \(\Psi_{e}=\Psi_{o}\) should be considered as belonging to the physical, finite-energy subspace. ## 3 Momentum measurements in higher dimensions In this section we discuss the measurement of the new momentum operator \(\vec{p}_{R}\) in higher dimensions. The discussion is based on the one-dimensional case which was already considered in previous works [8, 9, 23]. We find that different components of the momentum are not simultaneously measurable, unless the region \(\Omega\) is a perfect parallelepiped and the self-adjoint extension parameters \(\lambda\) are constant on each face. For a general momentum measurement in higher dimensions, one therefore chooses a specific direction. The position operators in all transverse directions are then measured, and, as we will see in this section, the resulting momentum measurement reduces to the one-dimensional case. Before we start the discussion, we point out a subtlety of the theory of the simultaneous diagonalization of unbounded self-adjoint operators. In fact, in order for two self-adjoint operators \(A\) and \(B\) to be simultaneously diagonalizable, it turns out not to be enough that \([A,B]=0\) on a dense subset of the Hilbert space where the relation is well-defined [3, 4, 24]. For \(A\) and \(B\) to be simultaneously diagonalizable they must _strongly commute_, a condition which may be equivalently stated as either the commutation of all (bounded) projection operators occurring in their spectral decomposition [4], or the commutation of the respective families of one-parameter exponentials for all values of the parameters [24]. ### General structure of momentum measurements First of all we note that it is possible to simultaneously measure the position operator and the momentum operator in orthogonal directions. In fact one has \[[\hat{k}\cdot\vec{p}_{R},\hat{m}\cdot\vec{x}]=0\qquad\mbox{if}\qquad\hat{k}\cdot \hat{m}=0, \tag{3.1}\] and this relation is well-defined on the whole domain of \(\hat{k}\cdot\vec{p}_{R}\). This is because \(\hat{m}\cdot\vec{x}\) is a bounded operator defined on the whole Hilbert space and, moreover, if \(\Psi\) satisfies the boundary conditions eq.(2.13), so does \(\hat{m}\cdot\vec{x}\,\Psi\). Therefore, the general structure of a higher-dimensional momentum measurement in a certain direction \(\hat{k}\) involves first measuring the position operator \(\vec{x}\) in all directions orthogonal to \(\hat{k}\). This will single out a line in the \(d\)-dimensional space, which will pierce through the boundary \(\partial\Omega\) in a set of isolated points, as shown in Fig. 1 (note that the case when the line is parallel to the boundary does not cause problems, as in that case \(\vec{n}\cdot\hat{k}=0\) and the boundary condition eq.(2.13) does not apply). The simplest case is that of a convex domain \(\Omega\), in which the line will generically pierce through \(\partial\Omega\) in exactly two points. Then the measurement of the \(\hat{k}\) component of the momentum is the same as the measurement of the one-dimensional momentum operator, as described in previous work [8, 9], with values of \(\lambda_{\pm}\) equal to those of \(\lambda(\vec{x})\) at the two points \(\vec{x}_{\pm}\) pierced by the line. If, on the other hand, the domain is non-convex, then the intersection between the line and \(\Omega\) will be a number of disconnected one-dimensional intervals, on each of which the momentum may be measured as in the one-dimensional case. The spectrum will therefore be the union of the single-interval spectra. In both cases, the overall spectrum of \(\hat{k}\cdot\vec{p}_{R}\) will be the union of the spectra of the measurement \(\hat{k}\cdot\vec{p}_{R}\) for each possible choice of eigenvalues of the position operator \(\vec{x}\) in all directions orthogonal to \(\hat{k}\). Its spectrum will thus be _continuous_, even though it is discrete in each line. We will consider the explicit form of the eigenfunctions in the next section. Figure 1: A non-convex two-dimensional domain \(\Omega\) together with its intersection with two parallel lines. The left line is split into two intervals because \(\Omega\) is non-convex. ### Simultaneous measurement of the momentum in different directions In this section we consider the simultaneous measurement of the momentum operator in orthogonal directions. For simplicity we first consider the two-dimensional case, and rotate the axes so that the two directions coincide with the \(\hat{x}\) and \(\hat{y}\) axes, i.e. we consider the operators \(\hat{x}\cdot\vec{p}_{R}\) and \(\hat{y}\cdot\vec{p}_{R}\). In this case, there are no position operators orthogonal to both the \(x\) and \(y\) directions. We must consider several cases, in relation with the shape of the box and the choice of self-adjoint extension parameters \(\lambda\). First of all, consider an irregularly shaped domain \(\Omega\), which is one where \(\vec{n}\cdot\hat{k}=0\) only at a set of isolated points as \(\hat{k}\) ranges among a set of basis unit-vectors. In this case the momentum boundary conditions eq.(2.13) are imposed at every point of the boundary. We call \(\lambda_{x}(\vec{x})\) and \(\lambda_{y}(\vec{x})\) the self-adjoint extension parameters appearing in eq.(2.13) for \(\hat{x}\cdot\vec{p}_{R}\) and \(\hat{y}\cdot\vec{p}_{R}\) respectively, which may be different in principle. Then if \(\lambda_{x}(\vec{x})\neq\lambda_{y}(\vec{x})\), the boundary conditions for the two operators are incompatible and, as such, the commutator \([\hat{x}\cdot\vec{p}_{R},\hat{y}\cdot\vec{p}_{R}]\) is only defined on the zero vector. Therefore the two operators cannot be simultaneously diagonalized. If instead \(\lambda_{x}(\vec{x})=\lambda_{y}(\vec{x})\), still in an irregularly shaped \(\Omega\), then \(\hat{x}\cdot\vec{p}_{R}\) and \(\hat{y}\cdot\vec{p}_{R}\) are defined in the same domain, and therefore there might be hope that they could have a joint set of eigenfunctions. However, their commutator \([\hat{x}\cdot\vec{p}_{R},\hat{y}\cdot\vec{p}_{R}]\) is again only defined on the zero vector, because even if \(\Psi\) satisfies the boundary conditions eq.(2.13), generally \(\hat{k}\cdot\vec{p}_{R}\,\Psi\) will not. In fact eq.(2.13), being defined only on the boundary of \(\Omega\), can only imply conditions on derivatives of \(\Psi\) along directions tangent to \(\partial\Omega\) at each point. Hence knowledge of the values of \(\Psi\) at the boundary cannot restrict \(\hat{k}\cdot\vec{p}_{R}\,\Psi\) in this case since we assumed that \(\vec{n}\cdot\hat{k}\neq 0\). Therefore again it is not possible to simultaneously diagonalize the two operators. We can understand the situation more explicitly by considering the eigenvalue equation for \(\hat{x}\cdot\vec{p}_{R}\) in a two-dimensional region \(\Omega\). For simplicity we assume that \(\Omega\) is convex. We can solve \[(\hat{x}\cdot\vec{p}_{R})\,\Phi=\mu\Phi\, \tag{3.2}\] to obtain the generic eigenfunction \[\Phi(x,y)=A(y)\begin{pmatrix}e^{i\mu x}+\sigma(y)e^{-i\mu x}\\ e^{i\mu x}-\sigma(y)e^{-i\mu x}\end{pmatrix}\, \tag{3.3}\] which should be compared to the one-dimensional case [8]. Imposing the boundary conditions eq.(2.13), we see that we must have \[e^{2i\mu x}=\sigma(y)\frac{1+\lambda(x,y)}{1-\lambda(x,y)}\, \tag{3.4}\] at each point \((x,y)\in\partial\Omega\). Since we assumed that the surface is convex, each line of fixed \(y\) intersects \(\partial\Omega\) in exactly two points, which we call \((x_{-},y)\) and \((x_{+},y)\). Then the solution of the eigenvalue equation is similar to the one-dimensional case, whereby \[e^{2i\mu(x_{+}-x_{-})}=\frac{(1+\lambda(x_{+},y))(1-\lambda(x_{-},y))}{(1- \lambda(x_{+},y))(1+\lambda(x_{-},y))}\, \tag{3.5}\] which gives the spectrum of eigenvalues at each fixed \(y\). Since this would imply that the eigenvalue \(\mu\) is a function of \(y\), we must then choose \(A(y)\propto\delta(y-y_{0})\) where \(\delta\) is the Dirac delta function and \(y_{0}\) a generic value of \(y\). Therefore each eigenfunction of \((\hat{x}\cdot\vec{p}_{R})\) is labelled by an eigenvalue \(y_{0}\) of \(y\) and then by an integer \(n\), which indexes the discrete spectrum at each \(y_{0}\). These results agree with the general measurement prescription that was described at the beginning of the section. It is clear that the function eq.(3.3) cannot be an eigenfunction of \(\hat{y}\cdot\vec{p}_{R}\), the self-adjoint momentum operator in the \(y\) direction. In fact, as we see from condition (3.5), for eq.(3.3) to be an eigenfunction of \(\hat{y}\cdot\vec{p}_{R}\), it is necessary not only that \(\lambda\) be constant on the whole \(\partial\Omega\), but also that the distance \(x_{+}-x_{-}\) be independent of \(y\). Since this requirement applies not only for the distances \(x_{+}-x_{-}\) with respect to \(y\), but also for the distances \(y_{+}-y_{-}\), which would arise in an eigenvalue equation for \(\hat{y}\cdot\vec{p}_{R}\), this means that the region \(\Omega\) in this case must be the inside of a parallelogram in two dimensions, or a parallelepiped in \(d\)-dimensions. This, however, means that \(\vec{n}\cdot\hat{k}=0\) for some \(\hat{k}\) on the whole domain, a situation which we treat in the next paragraph. The final case to consider is when \(\vec{n}\cdot\hat{k}\) is allowed to vanish on part of \(\partial\Omega\). Again we consider for simplicity a two-dimensional region \(\Omega\) with the operators \((\hat{x}\cdot\vec{p}_{R})\) and \((\hat{y}\cdot\vec{p}_{R})\). We recall that on those points where \(\vec{n}\cdot\hat{k}=0\) no boundary conditions are imposed on \(\hat{k}\cdot\vec{p}_{R}\). From the previous discussion, it is clear that as soon as we must impose boundary conditions for both \((\hat{x}\cdot\vec{p}_{R})\) and \((\hat{y}\cdot\vec{p}_{R})\) on a subset of \(\partial\Omega\) of non-zero measure, then the domain of the commutator \([\hat{x}\cdot\vec{p}_{R},\hat{y}\cdot\vec{p}_{R}]\) is only the zero vector, and therefore the two operators cannot be simultaneously diagonalized. In general, this means that a necessary condition for the two operators to commute is that Figure 2: A two-dimensional rectangle, an example of a parallelepiped, where the boundary conditions of the self-adjoint momentum in orthogonal directions are applied on disjoint subsets. The segments where \(\hat{x}\cdot\vec{p}_{R}\) satisfies the boundary conditions eq.(2.13) are dashed (blue), while the segments where \(\hat{y}\cdot\vec{p}_{R}\) satisfies the same boundary conditions are solid (red). the domain \(\partial\Omega\) must be the union of pairwise parallel hyperplanes, so that \(\Omega\) is the interior of a \(d\)-dimensional parallelepiped. In two dimensions this means that \(\Omega\) must be a parallelogram in general, or, in the case of the operators \((\hat{x}\cdot\vec{p}_{R})\) and \((\hat{y}\cdot\vec{p}_{R})\), a rectangle, as shown in Fig. 2. This is the same conclusion that we reached by considering the eigenvalue equation for \((\hat{x}\cdot\vec{p}_{R})\) in the previous paragraph. It should be noted, however, that if the one or more of the corners of the parallelepiped are rounded off, then we would have to impose boundary conditions for multiple components of \(\vec{p}_{R}\) simultaneously. Their commutator would then again be defined only on the zero vector, so that even an infinitesimal rounding off of the corners of the square makes it impossible to simultaneously measure the two operators. We now investigate in more detail the case of the operators \((\hat{x}\cdot\vec{p}_{R})\) and \((\hat{y}\cdot\vec{p}_{R})\) on a two-dimensional rectangle, shown in Fig. 2. In this case the momentum in the \(x\) direction, \(\hat{x}\cdot\vec{p}_{R}\), only has boundary conditions on the line segments parallel to the \(y\) axis (for which \(\vec{n}\propto\hat{x}\), so that \(\vec{n}\cdot\hat{x}\neq 0\)), while \(\hat{y}\cdot\vec{p}_{R}\) only has boundary conditions on the line segments parallel to the \(x\) axis. As we've already seen, \(\lambda\) must be constant on each line segment of \(\partial\Omega\). The domain of the commutator \([\hat{x}\cdot\vec{p}_{R},\hat{y}\cdot\vec{p}_{R}]\) is now non-zero, and given by those square-integrable wave functions \(\Psi\) in the doubled Hilbert space with square-integrable derivatives such that \[\begin{cases}\Psi_{o}(\vec{x})=\lambda_{1,3}\Psi_{e}(\vec{x})\,&\text{on}\ \partial \Omega_{1,3}\,\\ \partial_{y}\Psi_{o}(\vec{x})=\lambda_{1,3}\partial_{y}\Psi_{e}(\vec{x})\,& \text{,}\\ \Psi_{o}(\vec{x})=\lambda_{2,4}\Psi_{e}(\vec{x})\,&\text{on}\ \partial \Omega_{2,4}\,\end{cases} \tag{3.6}\] where the labels \(1\) to \(4\) for the line segments making up the boundary \(\partial\Omega\) are given in Fig. 2. Note that we're allowed to differentiate the boundary condition eq.(2.13) in a tangent direction. The domain of \([\hat{x}\cdot\vec{p}_{R},\hat{y}\cdot\vec{p}_{R}]\) is therefore a dense subset of the Hilbert space, and moreover \([\hat{x}\cdot\vec{p}_{R},\hat{y}\cdot\vec{p}_{R}]=0\) on its whole domain of definition. However, as we have seen in the introduction to this section, this does not conclusively show that the two operators are simultaneously diagonalizable. However, we can show that this is the case by considering the simultaneous eigenvalue equations for both components of \(\vec{p}_{R}\), \[\begin{cases}(\hat{x}\cdot\vec{p}_{R})\,\Phi=\mu_{x}\Phi\,\\ (\hat{y}\cdot\vec{p}_{R})\,\Phi=\mu_{y}\Phi\.\end{cases} \tag{3.7}\] The general solutions of the two equations is given by \[\Phi(x,y)=\begin{pmatrix}Ae^{i\vec{\mu}\cdot\vec{x}}+Be^{-i\vec{\mu}\cdot\vec{ x}}\\ Ae^{i\vec{\mu}\cdot\vec{x}}-Be^{-i\vec{\mu}\cdot\vec{x}}\end{pmatrix}\, \tag{3.8}\] where \(A,B,\vec{\mu}=(\mu_{x},\mu_{y})\) are constants. The lengths of the sides of the rectangle are \(L_{3}=L_{1}=L_{x}\) and \(L_{4}=L_{2}=L_{y}\). The boundary conditions require that \[\Psi_{o}(x,0) =\lambda_{1}\Psi_{e}(x,0)\, \tag{3.9}\] \[\Psi_{o}(L_{x},y) =\lambda_{2}\Psi_{e}(L_{x},y)\,\] (3.10) \[\Psi_{o}(x,L_{y}) =\lambda_{3}\Psi_{e}(x,L_{y})\,\] (3.11) \[\Psi_{o}(0,y) =\lambda_{4}\Psi_{e}(0,y). \tag{3.12}\] In this case, all the position-dependent factors drop out and this system is therefore identical to two copies of one-dimensional self-adjoint momentum operators, one with overall length \(L_{x}\) and extension parameters \((\lambda_{2},\lambda_{4})\), the other one with length \(L_{y}\) and extension parameters \((\lambda_{1},\lambda_{3})\). The two operators may thus be simultaneously diagonalized. At the end of this section, we summarize our results about the simultaneous diagonalization of different components of the self-adjoint momentum. We found that in general this is not possible unless the region \(\Omega\) is a parallelepiped, in which case one may simultaneously measure the components of \(\vec{p}_{R}\) parallel to the boundaries. However this situation is unphysical, as even an infinitesimal rounding off of the corners of the parallelepiped would make it impossible to simultaneously measure both. The eigenstates of \(\hat{l}\cdot\vec{p}_{R}\) in a convex region are given by \[\Phi_{\vec{y}_{0},k}(x_{l},\vec{y})=\delta(\vec{y}-\vec{y}_{0})\frac{1}{2\sqrt {x_{l+}-x_{l-}}}\begin{pmatrix}e^{ix_{l}k}+\sigma_{\vec{y}_{0},k}e^{-ix_{l}k} \\ e^{ix_{l}k}-\sigma_{\vec{y}_{0},k}e^{-ix_{l}k}\end{pmatrix}\, \tag{3.13}\] where \(x_{l+}-x_{l-}\) is the length of the line of constant \(\vec{y}_{0}\) with \(\Omega\). The eigenvalues \(k\) are implicitly dependent on \(\vec{y}_{0}\) and they are given by \[k_{n}=\frac{\pi}{x_{l+}-x_{l-}}n+\theta\, \tag{3.14}\] where \(\theta\) is the solution of the equation \[\exp{(2i\theta)}=\frac{(1+\lambda_{+})(1-\lambda_{-})}{(1-\lambda_{+})(1+ \lambda_{-})}. \tag{3.15}\] Here \(\lambda_{\pm}\) are the values of the self-adjoint extension parameter \(\lambda(\vec{x})\) at the two points pierced by the line of constant \(\vec{y}_{0}\) and thus also depend implicitly on \(\vec{y}_{0}\). Since the wavefunction eq.(3.13) is an eigenfunction of the position operator in the transverse directions, it contains a non-normalizable \(\delta\)-function. This should be interpreted as a generalized eigenfunction, as usual for eigenfunctions of operators with a continuous spectrum such as the position operator. If, on the other hand, the finite region \(\Omega\) is non-convex, the eigenvectors will depend on the number of finite intervals identified by the intersection between \(\Omega\) and the line of constant \(\vec{y}_{0}\). If there is only one finite interval, then the eigenvectors and eigenvalues are again given by eq.(3.13) and eq.(3.14) respectively. Generally, if the intersection identifies \(n\) finite intervals, the eigenvectors of \(\hat{l}\cdot\vec{p}_{R}\) will consist of the union of all the functions which are equal to eq.(3.13) on exactly one of the intervals and zero on the other \(n-1\) intervals. The eigenvalues are thus the union of the eigenvalues eq.(3.14) on each of the \(n\) intervals, each with its appropriate \(\lambda_{\pm}\). If the eigenvalues are all non-degenerate, then a momentum measurement will only find the particle on one interval at a time. On the other hand, if the eigenvalues are degenerate (this can happen for example if the \(\theta\) parameters are equal in the different intervals, and their lenghts are integer multiples of each other), then it is also possible to find the particle in superposition of momentum eigenstates belonging to different intervals. It is important to note that our discussion shows that, in the higher-dimensional setting, it is only meaningful to consider the momentum in one specific direction only. Finally, we remark that while for simplicity we considered the case of a two-dimensional convex region, it is easy to see that the conclusions generalize to a generically shaped region in any number of dimensions. ## 4 Ehrenfest Theorem and Interpretation of the Heisenberg Uncertainty Relation In the case of a finite interval in one dimension, it has been shown [23] that, in the physical sector, the expectation value of the new momentum operator \(p_{R}\) can be related to that of the standard momentum via the relation \[\langle-i\partial_{x}\rangle=\langle p_{R}\rangle+i\langle p_{I}\rangle. \tag{4.1}\] This relation leads to the position-momentum Ehrenfest theorem, \[\frac{d\langle x\rangle}{dt}=\langle p_{R}\rangle. \tag{4.2}\] The new momentum operator also satisfies a version of the momentum-force Ehrenfest theorem, \[\frac{d\langle p_{R}\rangle}{dt}=-\langle V^{\prime}\rangle+\langle F_{B} \rangle\, \tag{4.3}\] where \(F_{B}\) is a force localized at the boundary of the finite interval. In this section, we generalize the Ehrenfest theorem for the new momentum to the higher-dimensional case. Moreover, we consider the Heinseberg uncertainty relation. In particular, since measuring the new momentum leads outside the physical sector, a new momentum measurement necessarily transfers an infinite amount of energy to the particle. As a consequence, the variance \(\Delta p_{R}\) is generally infinite [23]. Therefore the uncertainty relation for \(\vec{p}_{R}\) is not physically meaningful. On the other hand, the standard momentum \(-i\vec{\nabla}\) is not an observable and therefore its uncertainty relation is also not physically meaningful [22]. Still, a generalization of the Heisenberg-Robertson-Schrodinger uncertainty relation is also valid for non-Hermitean operator such as \(-i\vec{\nabla}\) and it can be used to derive a physically meaningful inequality where each term is physically measurable [23]. In this section, we also provide a generalization of this inequality to the higher-dimensional case. ### Relations between expectation values We now prove the higher-dimensional generalization of the relation \(\langle-i\partial_{x}\rangle=\langle p_{R}\rangle+i\langle p_{I}\rangle\), which has been proven in the one-dimensional case in [23]. We will use the relation several times in this section, to prove the Ehrenfest theorem as well as for the uncertainty relation. This relation will also serve as a basis to define \(\vec{p_{I}}\) in the higher-dimensional setting. Consider for simplicity a convex region \(\Omega\), so that each line which intersects \(\partial\Omega\) does so in exactly two points. We consider the expectation value of the operator \(-i\hat{m}\cdot\vec{\nabla}\) in an arbitrary finite-energy state \(\ket{\Psi}\). We split the position vector \(\vec{x}=(x_{m},\vec{y})\) into a component \(x_{m}\) parallel to \(\hat{m}\) and a basis of components \(\vec{y}\) orthogonal to \(\hat{m}\). In order to perform the computation, we expand in a basis \(\ket{\Phi_{\vec{y}_{0},k}}\) of eigenstates of \(\hat{m}\cdot\vec{p}_{R}\). These are characterised first by a choice \(\vec{y}_{0}\) of eigenvector of all the coordinates of the position operator orthogonal to \(\hat{m}\). This then defines a line in \(d\) dimensional space which is parallel to \(\hat{m}\). Due to the convexity assumption on \(\Omega\), the line intersects \(\partial\Omega\) in exactly two points, which reduces the problem to the one-dimensional case for each fixed \(\vec{y}_{0}\) and as such defines a pair of parameters \(\lambda_{\pm}\) and the discrete spectrum of eigenvalues \(\{k\}\). Expanding in a basis of such eigenstates, \[\langle-i\hat{m}\cdot\vec{\nabla}\rangle=\int d^{d-1}\vec{y}_{0}\,\sum_{k} \langle\Psi|\Phi_{\vec{y}_{0},k}\rangle\,\langle\Phi_{\vec{y}_{0},k}|\,(-i \hat{m}\cdot\vec{\nabla})\,|\Psi\rangle\enspace. \tag{4.4}\] Now, since the eigenstates \(\langle\Phi_{\vec{y}_{0},k}|\) are \(\delta\) functions in the directions orthogonal to \(\hat{m}\), both inner products involving \(\langle\Phi_{\vec{y}_{0},k}|\) reduce to one-dimensional integrals along the intersection between \(\Omega\) and the line defined by \(\vec{y}_{0}\). Hence each sum over \(k\) reduces to a one-dimensional problem which was already treated in [23]. Adapting the one-dimensional result to the present situation, we find that \[\sum_{k}\langle\Psi|\Phi_{\vec{y}_{0},k}\rangle\,\langle\Phi_{ \vec{y}_{0},k}|\,(-i\hat{m}\cdot\vec{\nabla})\,|\Psi\rangle=\sum_{k}k\langle \Psi|\Phi_{\vec{y}_{0},k}\rangle\langle\Phi_{\vec{y}_{0},k}|\Psi\rangle+\\ -\frac{i}{2}\left[|\Psi(x_{m+},\vec{y}_{0})|^{2}-|\Psi(x_{m-}, \vec{y}_{0})|^{2}\right]\enspace, \tag{4.5}\] where \(x_{m}\) is the coordinate parallel to \(\hat{m}\) and \(x_{m\pm}\) are the coordinates of the intersection between the line defined by \(\vec{y}_{0}\) and \(\partial\Omega\). Therefore \(x_{m\pm}\) implicitly depend on \(\vec{y}_{0}\). Integrating the first term over \(\vec{y}_{0}\) simply gives the spectral decomposition for \(\hat{m}\cdot\vec{p}_{R}\), so that overall \[\langle-i\hat{m}\cdot\vec{\nabla}\rangle=\langle\hat{m}\cdot\vec{p}_{R}\rangle- \frac{i}{2}\int d^{d-1}\vec{y}_{0}\,\left[|\Psi(x_{m+},\vec{y}_{0})|^{2}-|\Psi (x_{m-},\vec{y}_{0})|^{2}\right]. \tag{4.6}\] Both points \((x_{m\pm},\vec{y}_{0})\) belong to \(\partial\Omega\) by construction, and, as such, the last term in eq.(4.6) may be written as a difference of two integrals, each of which is performed over half of \(\partial\Omega\). In fact, the set of points in \(\partial\Omega\) which are parallel to \(\hat{m}\) form a \((d-2)\)-dimensional subset which partitions \(\partial\Omega\) into two disjoint subsets, \(\partial\Omega_{+}\) and \(\partial\Omega_{-}\) (as long as \(\Omega\) is a convex region), such that \(\partial\Omega=\partial\Omega_{+}\cup\partial\Omega_{-}\). Since the integration in eq.(4.6) is performed over lines parallel to \(\hat{m}\), when integrated over \(\partial\Omega\) the integrand will be proportional to \(\vec{n}\cdot\hat{m}\), so that \[\langle-i\hat{m}\cdot\vec{\nabla}\rangle=\langle\hat{m}\cdot\vec{p}_{R}\rangle -\frac{i}{2}\int_{\partial\Omega}d^{d-1}\vec{x}\left(\vec{n}\cdot\hat{m}\right) \left|\Psi\right|^{2}\, \tag{4.7}\] or, equivalently, \[\langle-i\vec{\nabla}\rangle=\langle\vec{p}_{R}\rangle-\frac{i}{2}\langle\vec {n}\rangle_{\partial\Omega}. \tag{4.8}\] If we interpret the right-hand side of eq.(4.7) as the expectation value of an operator \(\hat{m}\cdot\vec{p}_{I}\), then we can take \[\vec{p}_{I}=\lim_{\epsilon\to 0}\begin{pmatrix}-\vec{n}(\vec{x})\delta(\vec{x} \in\partial\Omega_{\epsilon})&0\\ 0&0\end{pmatrix}\, \tag{4.9}\] where \(\partial\Omega_{\epsilon}\) is a \((d-1)\)-dimensional subset of \(\Omega\) such that \(\lim_{\epsilon\to 0}\partial\Omega_{\epsilon}=\partial\Omega\). Note that the factor of \(1/2\) in eq.(4.8) is absent in eq.(4.9) as it comes from the normalization of the finite-energy state in the doubled Hilbert space. ### Ehrenfest theorem Using the relation eq.(4.7), we may therefore prove the position-momentum Ehrenfest theorem, similarly to what was done in [23] for the one-dimensional case. We start by computing \[\frac{d}{dt}\langle\hat{k}\cdot\vec{x}\rangle=i\left(\langle H\Psi|\left(\hat {k}\cdot\vec{x}\right)|\Psi\rangle-\langle\Psi|\left(\hat{k}\cdot\vec{x} \right)|H\Psi\rangle\right). \tag{4.10}\] Using Green's second identity, it is not hard to show that the right hand side of eq.(4.10) is given by \[-\frac{i}{m}\int_{\Omega}d^{d}\vec{x}\,\Psi^{*}(\hat{k}\cdot\vec{ \nabla})\Psi-\frac{i}{2m}\int_{\partial\Omega}d^{d-1}\vec{x}\left(\hat{k}\cdot \vec{x}\right)\vec{n}\cdot\left[\Psi\vec{\nabla}\Psi^{*}-\Psi^{*}\vec{\nabla }\Psi\right]+\\ +\frac{i}{2m}\int_{\partial\Omega}d^{d-1}\vec{x}\left(\vec{n} \cdot\hat{k}\right)\left|\Psi\right|^{2}. \tag{4.11}\] The second term in the first line vanishes because of the Robin boundary conditions eq.(2.3), so that overall \[m\frac{d}{dt}\langle\hat{k}\cdot\vec{x}\rangle=\langle-i\hat{k}\cdot\vec{\nabla} \rangle+\frac{i}{2}\int_{\partial\Omega}d^{d-1}\vec{x}\left(\vec{n}\cdot\hat{k} \right)\left|\Psi\right|^{2}. \tag{4.12}\] Using eq.(4.7) it is therefore easy to see that \[m\frac{d}{dt}\langle\hat{k}\cdot\vec{x}\rangle=\langle\hat{k}\cdot\vec{p}_{R} \rangle\, \tag{4.13}\] or, equivalently, \[m\frac{d}{dt}\langle\vec{x}\rangle=\langle\vec{p}_{R}\rangle\, \tag{4.14}\] as expected. This result reinforces our arguments that \(\vec{p}_{R}\) is the appropriate concept of momentum for a particle confined to a finite region. We now consider the momentum-force Ehrenfest theorem. In this case the calculation does not easily reduce to the one-dimensional case. Therefore we choose to carefully compute \(\frac{d}{dt}\langle-i\vec{\nabla}\rangle\) and then take the real part to extract \(\frac{d}{dt}\langle\vec{p}_{R}\rangle\) using eq.(4.7). We have, \[\frac{d}{dt}\langle-i\hat{k}\cdot\vec{\nabla}\rangle=-\frac{i}{ 2m}\left(\langle\Delta\Psi|\left(-i\hat{k}\cdot\vec{\nabla}\right)\left|\Psi \right\rangle-\langle\Psi|\left(-i\hat{k}\cdot\vec{\nabla}\right)\left|\Delta \Psi\right\rangle\right)+\\ +i\left(\langle V\Psi|\left(-i\hat{k}\cdot\vec{\nabla}\right) \left|\Psi\right\rangle-\langle\Psi|\left(-i\hat{k}\cdot\vec{\nabla}\right) \left|V\Psi\right\rangle\right)\, \tag{4.15}\] where \(\Delta\) is the Laplacian. The term in the second line involving the potential may be computed in a straightforward manner and reduces to \(-\langle\hat{k}\cdot\vec{\nabla}V\rangle\). The term on the right-hand side of the first line, however, is more complicated. In fact, while we may assume that \(\Psi\) is twice differentiable as it is in the domain of the Hamiltonian, it may not be thrice differentiable. Therefore in the inner product \(\langle\Psi|\left(-i\hat{k}\cdot\vec{\nabla}\right)\left|\Delta\Psi\right\rangle\), the operator \(\left(-i\hat{k}\cdot\vec{\nabla}\right)\) should be understood as acting on the left. We therefore perform a partial integration to show that \[\langle\Delta\Psi|\left(\hat{k}\cdot\vec{\nabla}\right)\left|\Psi \right\rangle-\langle\Psi|\left(\hat{k}\cdot\vec{\nabla}\right)\left|\Delta \Psi\right\rangle=\int_{\Omega}d^{d}\vec{x}\ \Big{[}\nabla^{2}\Psi^{*}(\hat{k}\cdot\vec{\nabla}\Psi)+\nabla^{2}\Psi( \hat{k}\cdot\vec{\nabla}\Psi^{*})\Big{]}+\\ -\int_{\partial\Omega}d^{d-1}\vec{x}\left(\vec{n}\cdot\hat{k} \right)\Psi^{*}\nabla^{2}\Psi. \tag{4.16}\] Now we would like to show that the remaining volume integral is a boundary term. By careful use of Green's identities, making sure that only two derivatives act on \(\Psi\) at any point, we find that \[\nabla^{2}\Psi^{*}(\hat{k}\cdot\vec{\nabla}\Psi)+\nabla^{2}\Psi(\hat{k}\cdot \vec{\nabla}\Psi^{*})=\vec{\nabla}\cdot\Big{[}(\hat{k}\cdot\vec{\nabla}\Psi^ {*})\vec{\nabla}\Psi+(\hat{k}\cdot\vec{\nabla}\Psi)\vec{\nabla}\Psi^{*}-\hat{ k}\vec{\nabla}\Psi^{*}\cdot\vec{\nabla}\Psi\Big{]}\, \tag{4.17}\] which therefore reduces to a boundary term once substituted back in the volume integral. Then, putting everything together and using the Robin boundary conditions eq.(2.3), we finally find \[\frac{d}{dt}\langle-i\hat{k}\cdot\vec{\nabla}\rangle=-\langle\hat{k }\cdot\vec{\nabla}V\rangle+\\ +\frac{1}{2m}\int_{\partial\Omega}d^{d-1}\vec{x}\,\left[\gamma\hat {k}\cdot\vec{\nabla}(\Psi\Psi^{*})+(\vec{n}\cdot\hat{k})(\vec{\nabla}\Psi^{* }\cdot\vec{\nabla}\Psi+\Psi^{*}\nabla^{2}\Psi)\right]. \tag{4.18}\] This expression provides a higher-dimensional generalization of a result first shown in [25]. The imaginary part of this expression gives the expectation value of \(\vec{p}_{I}\), \[\frac{d}{dt}\langle\hat{k}\cdot\vec{p}_{I}\rangle=\frac{1}{4mi}\int_{\partial \Omega}d^{d-1}\vec{x}\,\left[(\vec{n}\cdot\hat{k})(\Psi^{*}\nabla^{2}\Psi- \Psi\nabla^{2}\Psi^{*})\right]. \tag{4.19}\] This can be expressed in terms of the divergence of the probability current \(\vec{j}\), \[\frac{d}{dt}\langle\hat{k}\cdot\vec{p}_{I}\rangle=\frac{1}{2}\int_{\partial \Omega}d^{d-1}\vec{x}\,(\vec{n}\cdot\hat{k})\vec{\nabla}\cdot\vec{j}=-\frac{1} {2}\int_{\partial\Omega}d^{d-1}\vec{x}\,(\vec{n}\cdot\hat{k})\frac{\partial} {\partial t}\left|\Psi\right|^{2}\, \tag{4.20}\] where we used the continuity equation. This last equation can immediately be seen to be true from the definition of \(\vec{p}_{I}\), eq.(4.9). Finally, taking the real part instead, we find the momentum-force Ehrenfest theorem, \[\frac{d}{dt}\langle\hat{k}\cdot\vec{p}_{R}\rangle=-\langle\hat{k }\cdot\vec{\nabla}V\rangle+\\ +\frac{1}{2m}\int_{\partial\Omega}d^{d-1}\vec{x}\,\left[\gamma \hat{k}\cdot\vec{\nabla}(\Psi\Psi^{*})+(\vec{n}\cdot\hat{k})(\vec{\nabla}\Psi ^{*}\cdot\vec{\nabla}\Psi+\Psi^{*}\nabla^{2}\Psi+\Psi\nabla^{2}\Psi^{*}) \right]\, \tag{4.21}\] or, more simply, \[\frac{d}{dt}\langle\vec{p}_{R}\rangle=-\langle\vec{\nabla}V\rangle+\frac{1}{2 m}\int_{\partial\Omega}d^{d-1}\vec{x}\,\left[\gamma\vec{\nabla}(\Psi\Psi^{*})+ \vec{n}(\nabla^{2}(\Psi\Psi^{*})-\vec{\nabla}\Psi^{*}\cdot\vec{\nabla}\Psi) \right]. \tag{4.22}\] Therefore, apart from the usual force term coming from the expectation value of the derivative of the potential, we also have two terms associated with a force at the boundary of quantum mechanical origin. One of these is normal to the boundary and may be interpreted as a sort of quantum-mechanical pressure, while the other one is in the direction of the gradient of the probability density at the boundary and it may be interpreted as arising from the imposition of the Robin boundary conditions eq.(2.3). ### Interpretation of the Heisenberg Uncertainty Relation As shown in a previous work [23], in its most general form the Heisenberg-Robertson-Schrodinger uncertainty relation, valid for not necessarily Hermitean operators \(A\) and \(B\)[23], is given by \[\Delta A\Delta B\geq\left|\langle A^{\dagger}B\rangle-\langle A^{\dagger} \rangle\langle B\rangle\right|. \tag{4.23}\] In [23] eq.(4.23) was used to provide an interpretation for the Heisenberg uncertainty relation for the non-self-adjoint operator \(-i\partial_{x}\). Here we extend this result to the higher-dimensional case, and provide an interpretation for the uncertainty relation for the higher-dimensional operator \(-i\vec{\nabla}\). Setting \(A_{k}=-i\hat{k}\cdot\vec{\nabla}\), by partial integration one obtains \[\langle A_{k}^{\dagger}A_{k}\rangle = \int_{\Omega}d^{d}x(-i\hat{k}\cdot\vec{\nabla}\Psi(\vec{x}))^{*}( -i\hat{k}\cdot\vec{\nabla}\Psi(\vec{x})) \tag{4.24}\] \[= -\int_{\Omega}d^{d}x\ \Psi(\vec{x})^{*}(\hat{k}\cdot\vec{\nabla})^{2 }\Psi(\vec{x})+\int_{\partial\Omega}d^{d-1}x\,(\vec{n}\cdot\hat{k})\Psi(\vec{x })^{*}\hat{k}\cdot\vec{\nabla}\Psi(\vec{x})\] \[= \langle-(\hat{k}\cdot\vec{\nabla})^{2}\rangle+\langle(\vec{n} \cdot\hat{k})(\hat{k}\cdot\vec{\nabla})\rangle_{\partial\Omega}\.\] Therefore choosing a set of basis vectors \(\hat{k}\), we find, using the Robin boundary conditions eq.(2.3), \[\sum_{\hat{k}}\langle A_{k}^{\dagger}A_{k}\rangle=\langle-\Delta\rangle- \langle\gamma\rangle_{\partial\Omega}\, \tag{4.25}\] where \(\Delta\) is the Laplacian and \(\gamma\) the self-adjoint extension parameter for the Hamiltonian. Setting \(B_{m}=\hat{m}\cdot\vec{x}\), we can similarly compute \[\langle B_{m}^{\dagger}A_{k}\rangle = \int_{\Omega}d^{d}x\ \Psi(\vec{x})^{*}(\hat{m}\cdot\vec{x})(-i\hat{k} \cdot\vec{\nabla})\Psi(\vec{x}) \tag{4.26}\] \[= \int_{\Omega}d^{d}x\ (-i\hat{k}\cdot\vec{\nabla}\Psi(\vec{x}))^{*}( \hat{m}\cdot\vec{x})\Psi(\vec{x})+i(\hat{k}\cdot\hat{m})\int_{\Omega}d^{d}x\ \left|\Psi(\vec{x})\right|^{2}+\] \[\qquad\qquad-i\int_{\partial\Omega}d^{d-1}x\,(\vec{n}\cdot\hat{k })(\hat{m}\cdot\vec{x})\left|\Psi(\vec{x})\right|^{2}\] \[= \langle A_{k}^{\dagger}B_{m}\rangle+i(\hat{k}\cdot\hat{m})-i \langle(\vec{n}\cdot\hat{k})(\hat{m}\cdot\vec{x})\rangle_{\partial\Omega}\,\] Also, \[\langle A_{k}\rangle = \int_{\Omega}d^{d}x\ \Psi(\vec{x})^{*}(-i\hat{k}\cdot\vec{\nabla}) \Psi(\vec{x}) \tag{4.27}\] \[= \int_{\Omega}d^{d}x\ (-i\hat{k}\cdot\vec{\nabla}\Psi(\vec{x}))^{*} \Psi(\vec{x})-i\int_{\partial\Omega}d^{d-1}x\,(\vec{n}\cdot\hat{k})\ \left|\Psi(\vec{x})\right|^{2}\] \[= \langle A_{k}^{\dagger}\rangle-i\langle\vec{n}\cdot\hat{k} \rangle_{\partial\Omega}\,\] consistently with eq.(4.8). Moreover, \[(\Delta A_{k})^{2}=\langle A_{k}^{\dagger}A_{k}\rangle-\langle A_{k}^{\dagger }\rangle\langle A_{k}\rangle=2m\langle T_{k}\rangle+\langle(\vec{n}\cdot\hat {k})(\hat{k}\cdot\vec{\nabla})\rangle_{\partial\Omega}-\langle\hat{k}\cdot \vec{p}_{R}\rangle^{2}-\langle\hat{k}\cdot\vec{p}_{I}\rangle^{2}\, \tag{4.28}\] where \(T_{k}=-\frac{1}{2m}(\hat{k}\cdot\nabla)^{2}\) is the kinetic energy in the \(\hat{k}\)-direction, and we used eq.(4.7). We must finally express \(\langle B_{m}^{\dagger}A_{k}\rangle\) in terms of measurable quantities. To do so, we consider the expectation value \(\langle(\hat{m}\cdot\vec{x})(-i\hat{l}\cdot\vec{\nabla})\rangle\) and insert a complete basis of eigenstates of \(\hat{l}\cdot\vec{p}_{R}\) to find, \[\langle(\hat{m}\cdot\vec{x})(-i\hat{l}\cdot\vec{\nabla})\rangle=\int d^{d-1} \vec{y}_{0}\,\sum_{\mu}\langle\Psi|\Phi_{\vec{y}_{0},\mu}\rangle\,\langle\Phi_{ \vec{y}_{0},\mu}|\,(\hat{m}\cdot\vec{x})(-i\hat{l}\cdot\vec{\nabla})\left|\Psi \right\rangle. \tag{4.29}\] Here \(|\Phi_{\vec{y}_{0},\mu}\rangle\) is an eigenfunction of \(\hat{l}\cdot\vec{p}_{R}\) with eigenvalue \(\mu\), as defined in eqs.(3.13) and (3.14). Now we compute the second inner product explicitly, \[\langle\Phi_{\vec{y}_{0},\mu}|\,(\hat{m}\cdot\vec{x})(-i\hat{l}\cdot\vec{\nabla} )\,|\Psi\rangle=\int_{\Omega}d^{d}\vec{x}\,\Phi^{*}_{\vec{y}_{0},\mu,+}(\hat{m} \cdot\vec{x})(-i\hat{l}\cdot\vec{\nabla})\Psi\, \tag{4.30}\] where \(\Phi_{\vec{y}_{0},\mu,+}\) is the projection of \(\Phi_{\vec{y}_{0},\mu}\) onto the positive energy subspace. Since \(\Phi\) is an eigenstate of \(\hat{l}\cdot\vec{p}_{R}\), its projection satisfies \((-i\hat{l}\cdot\vec{\nabla})\Phi_{\vec{y}_{0},\mu,+}=\mu\Phi_{\vec{y}_{0},\mu,+}\). Since \(\Phi^{*}_{\vec{y}_{0},\mu,+}\) is only supported on a line where \(\vec{x}=(x_{l},\vec{y}_{0})\) and \(x_{l-}<x_{l}<x_{l+}\), where \(x_{l}=\hat{l}\cdot\vec{x}\) and \(x_{l\pm}\) are the points where the line intersects \(\partial\Omega\), we find that actually \[\langle\Phi_{\vec{y}_{0},\mu}|\,(\hat{m}\cdot\vec{x})(-i\hat{l}\cdot\vec{\nabla })\,|\Psi\rangle=\int_{x_{l-}}^{x_{l+}}dx_{l}\,\Phi^{*}_{\vec{y}_{0},\mu,+}(x_ {l})(\hat{m}\cdot\vec{x})(-i\partial_{l})\Psi(x_{l},\vec{y}_{0}). \tag{4.31}\] This last equation can be simplified by integrating by parts and then performing some straightforward manipulations. Putting everything together, we find that \[\langle(\hat{m}\cdot\vec{x})(-i\hat{l}\cdot\vec{\nabla})\rangle =\langle(\hat{l}\cdot\vec{p}_{R})(\hat{m}\cdot\vec{x})\rangle+i( \hat{l}\cdot\hat{m})+\] \[-i\int d^{d-1}\vec{y}_{0}\,(\hat{m}\cdot\vec{x})\Psi(x_{l},\vec{y }_{0})\sum_{\mu}\langle\Psi|\Phi_{\vec{y}_{0},\mu}\rangle\Phi^{*}_{\vec{y}_{0}, \mu,+}(x_{l})\big{|}_{x_{l-}}^{x_{l+}}. \tag{4.32}\] The sum over the eigenvalues \(\mu\) is then the same as in the one-dimensional case and may be evaluated as in that case. We see from eq.(3.13) that \[\Phi_{\vec{y}_{0},\mu,+}(x_{l})=\frac{e^{ix_{l}\mu}}{\sqrt{x_{l+}-x_{l-}}}. \tag{4.33}\] Therefore, \[\sum_{\mu}\langle\Psi|\Phi_{\vec{y}_{0},\mu}\rangle\Phi^{*}_{\vec{y}_{0},\mu, +}(x_{l})\bigg{|}_{x_{l-}}^{x_{l+}}=\int_{x_{l-}}^{x_{l+}}d\vec{x}_{l}\,\Psi^{* }(\vec{x}_{l},\vec{y}_{0})\frac{\sum_{\mu}e^{i(\vec{x}_{l}-x_{l})\mu}}{2(x_{l+ }-x_{l-})}\bigg{|}_{x_{l}=x_{l-}}^{x_{l+}}. \tag{4.34}\] The sums over \(\mu\) become \(\delta\)-functions at the boundary using the Poisson summation formula, exactly as in the one-dimensional case [23], \[\sum_{k}\int_{a}^{b}dx\,f(x)e^{ik(x-a)}=(b-a)f(a). \tag{4.35}\] Applying this to our case we see that \[\sum_{\mu}\langle\Psi|\Phi_{\vec{y}_{0},\mu}\rangle\Phi^{*}_{\vec{y}_{0},\mu, +}(x_{l})\bigg{|}_{x_{l-}}^{x_{l+}}=\frac{1}{2}\Psi^{*}(x_{l},\vec{y}_{0}) \bigg{|}_{x_{l}=x_{l-}}^{x_{l+}}. \tag{4.36}\] Therefore \[\langle(\hat{m}\cdot\vec{x})(-i\hat{l}\cdot\vec{\nabla})\rangle= \langle(\hat{l}\cdot\vec{p}_{R})(\hat{m}\cdot\vec{x})\rangle+i(\hat{l}\cdot \hat{m})\\ -\frac{i}{2}\int d^{d-1}\vec{y}_{0}\,(\hat{m}\cdot\vec{x})\Psi^{* }(x_{l},\vec{y}_{0})\Psi(x_{l},\vec{y}_{0})\bigg{|}_{x_{l}=x_{l-}}^{x_{l+}}. \tag{4.37}\] This last integral is again an integral over the surface \(\partial\Omega\) where we consider only the component parallel to \(\hat{l}\). This can therefore be written in its final form as, \[\langle(\hat{m}\cdot\vec{x})(-i\hat{l}\cdot\vec{\nabla})\rangle=\langle(\hat{l} \cdot\vec{p}_{R})(\hat{m}\cdot\vec{x})\rangle+i(\hat{l}\cdot\hat{m})-\frac{i}{2 }\int_{\partial\Omega}d^{d-1}\vec{x}\,(\vec{n}\cdot\hat{l})(\hat{m}\cdot\vec{x} )\,|\Psi|^{2}\, \tag{4.38}\] which is equivalent to \[\langle(\hat{m}\cdot\vec{x})(-i\hat{l}\cdot\vec{\nabla})\rangle=\langle(\hat{l }\cdot\vec{p}_{R})(\hat{m}\cdot\vec{x})\rangle+i(\hat{l}\cdot\hat{m})-\frac{i} {2}\langle(\hat{n}\cdot\hat{l})(\hat{m}\cdot\vec{x})\rangle_{\partial\Omega}. \tag{4.39}\] These correctly reproduce their one-dimensional versions. We're finally ready to plug everything into the generalized uncertainty relation eq.(4.23). Calling \((\Delta B_{m})^{2}\equiv(\Delta x_{m})^{2}\), we find \[2m\langle T_{k}\rangle\geq\frac{1}{(\Delta x_{m})^{2}}\left[ \tfrac{1}{2}\langle\{(\hat{k}\cdot\vec{p}_{R}),x_{m}\}\rangle-\langle x_{m} \rangle\langle\hat{k}\cdot\vec{p}_{R}\rangle\right]^{2}+\\ +\frac{1}{4(\Delta x_{m})^{2}}\left[(\hat{k}\cdot\hat{m})- \langle(\vec{n}\cdot\hat{k})(\hat{m}\cdot\vec{x})\rangle_{\partial\Omega}+ \langle x_{m}\rangle\langle\vec{n}\cdot\hat{k}\rangle_{\partial\Omega}\right]^ {2}+\\ +\langle(\vec{n}\cdot\hat{k})(\hat{k}\cdot\vec{\nabla})\rangle_{ \partial\Omega}+\langle\hat{k}\cdot\vec{p}_{R}\rangle^{2}+\langle\hat{k}\cdot \vec{p}_{I}\rangle^{2}\, \tag{4.40}\] which is again an inequality for the kinetic energy \(T_{k}\). However, it contains a term, \(\langle(\vec{n}\cdot\hat{k})(\hat{k}\cdot\vec{\nabla})\rangle_{\partial\Omega}\), which is not necessarily measurable. Hence we sum over a set of orthogonal directions \(k\), which leads to \[2m\langle T\rangle\geq\frac{1}{(\Delta x_{m})^{2}}\sum_{\hat{k} }\left[\tfrac{1}{2}\langle\{(\hat{k}\cdot\vec{p}_{R}),x_{m}\}\rangle-\langle x _{m}\rangle\langle\hat{k}\cdot\vec{p}_{R}\rangle\right]^{2}+\\ +\frac{1}{4(\Delta x_{m})^{2}}\sum_{\hat{k}}\left[(\hat{k}\cdot \hat{m})-\langle(\vec{n}\cdot\hat{k})(\hat{m}\cdot\vec{x})\rangle_{\partial \Omega}+\langle x_{m}\rangle\langle\vec{n}\cdot\hat{k}\rangle_{\partial\Omega} \right]^{2}+\\ +\langle\gamma\rangle_{\partial\Omega}+\langle\vec{p}_{R}\rangle ^{2}+\langle\vec{p}_{I}\rangle^{2}\, \tag{4.41}\] for any choice of direction \(\hat{m}\), where we used the boundary conditions eq.(2.3). Each term in eq.(4.41) is, in principle, measurable and therefore the inequality provides a physically meaningful interpretation of the Heisenberg uncertainty principle for the operator \(-i\vec{\nabla}\) as an inequality for the kinetic energy of the system. This is only possible because of the introduction of the new momentum concept \(\vec{p}=\vec{p_{R}}+i\vec{p_{I}}\), which is a measurable observable, and therefore reinforces the notion that the new momentum concept is the appropriate notion of momentum for a particle confined in a finite region of space. ## 5 Conclusions The usual momentum operator \(-i\vec{\nabla}\) for a particle confined in a finite region of space is not self-adjoint, and therefore does not qualify as a physically valid observable. Based on the construction of a self-adjoint momentum operator for a particle in a finite one-dimensional interval, first introduced in [8, 9], we extended the new momentum concept to a finite region in arbitrary dimension. The new momentum concept provides an observable momentum which may be used to perform momentum measurements and compute expectation values, which would not be possible with the usual formulation. We have extended several results first obtained in the one-dimensional case [23], such as the Ehrenfest theorem and the interpretation of the Heisenberg uncertainty relation. The introduction of the new momentum \(\vec{p}=\vec{p_{R}}+i\vec{p_{I}}\) provides an extension of the fundamental physical concept of momentum to the case of a quantum mechanical particle confined to a finite region of space. A central result of the present work is that, in a finite region, momentum should only be considered one direction at a time. The most striking manifestation of this fact is that different components of the momentum cannot in general be measured simultaneously. Several remaining questions deserve further attention. Among these is understanding the dependence on the ultraviolet details of the probabilities of measurement of momentum eigenvalues, in both one and higher dimensions. In fact, a momentum measurement transfers infinite energy to the particle and therefore leads it outside the physical space. As such, what happens to the particle after a measurement necessarily depends on the underlying ultraviolet details. On the other hand, one would hope that measurement probabilities only depend on the low-energy physics. Moreover, while the new momentum concept is in principle measurable, it would be interesting to construct a momentum measurement device, at least a theoretical one, along the lines first established by von Neumann [2]. This could be done, for example, via time-of-flight measurements [26]. It would then be especially interesting to look for experimental verification of both the quantum mechanical force at the boundary (which arises as part of the momentum-force Ehrenfest theorem (4.22)) and the interpretation of the Heisenberg uncertainty relation (4.41). ## Acknowledgments UJW thanks M. Al-Hashimi for his collaboration on the development of the new momentum concept in [8, 9]. The research leading to these results received funding from the Schweizerischer Nationalfonds (grant agreement number 200020_200424).
2309.08295
A Real-Time Active Speaker Detection System Integrating an Audio-Visual Signal with a Spatial Querying Mechanism
We introduce a distinctive real-time, causal, neural network-based active speaker detection system optimized for low-power edge computing. This system drives a virtual cinematography module and is deployed on a commercial device. The system uses data originating from a microphone array and a 360-degree camera. Our network requires only 127 MFLOPs per participant, for a meeting with 14 participants. Unlike previous work, we examine the error rate of our network when the computational budget is exhausted, and find that it exhibits graceful degradation, allowing the system to operate reasonably well even in this case. Departing from conventional DOA estimation approaches, our network learns to query the available acoustic data, considering the detected head locations. We train and evaluate our algorithm on a realistic meetings dataset featuring up to 14 participants in the same meeting, overlapped speech, and other challenging scenarios.
Ilya Gurvich, Ido Leichter, Dharmendar Reddy Palle, Yossi Asher, Alon Vinnikov, Igor Abramovski, Vishak Gopal, Ross Cutler, Eyal Krupka
2023-09-15T10:20:16Z
http://arxiv.org/abs/2309.08295v1
A Real-Time Active Speaker Detection System Integrating an Audio-Visual Signal with a Spatial Querying Mechanism ###### Abstract We introduce a distinctive real-time, causal, neural network-based active speaker detection system optimized for low-power edge computing. This system drives a virtual cinematography module and is deployed on a commercial device. The system uses data originating from a microphone array and a 360-degree camera. Our network requires only 127 MFLOPs per participant, for a meeting with 14 participants. Unlike previous work, we examine the error rate of our network when the computational budget is exhausted, and find that it exhibits graceful degradation, allowing the system to operate reasonably well even in this case. Departing from conventional DOA estimation approaches, our network learns to query the available acoustic data, considering the detected head locations. We train and evaluate our algorithm on a realistic meetings dataset featuring up to 14 participants in the same meeting, overlapped speech, and other challenging scenarios. Ilya Gurvich, Ido Leichter, Dharmendar Reddy Palle, Yossi Asher, Alon Vinnikov, Igor Abramovski, Vishak Gopal, Ross Cutler, Eyal Krupka Microsoft Corporation {ilyagu, idol, dharmendar.palle, yossiasher, alvinn, igorab, vishak.gopal, ross.cutler, eyalk}@microsoft.com Speaker detection, A/V fusion, deep learning ## 1 Introduction In the era of hybrid work environments, where teams combine in-person and remote collaboration, it is essential to ensure an equitable experience for all participants in meetings. This paper introduces and focuses on an active speaker detection (ASD) system running on a teleconferencing device placed on a table in a meeting room. The system's objective is to determine, in real-time, whether each participant in the meeting room is speaking or not. The ASD system then feeds into a virtual cinematographer that crops the speakers' faces, adjusts the virtual camera's angles, and switches between different participants based on their speaking activity, creating a seamless and engaging visual experience for remote participants. We present a novel real-time causal system that uses a horizontal circular microphone array in addition to a 360degcamera to accurately determine who's speaking in a meeting room. Our deep neural network runs on an edge device, powered by a lightweight Intel Movidius Myriad X vision processing unit (VPU) consuming only 2 Watts of power and that can concurrently handle up to 14 participants at a prediction rate of 7.5 predictions per second, requiring only \((123.5+43.5/K)\) MFLOPs to process a single participant's head per frame, where K is the total number of participants in that frame. We also demonstrate that our algorithm exhibits graceful degradation when the number of participants exceeds the available computational budget. While research that utilizes microphone array data is usually concerned with estimating the direction of arrival (DOA) of sounds, we present a distinctive approach for querying the available acoustic data given the location of the participant in question. Specifically, since we have video data available, we extract heads from it and then construct a representation that encodes the location of the participant we want to determine speech for. We also experiment with encoding the locations and sizes of the background participants as part of that query, to encourage the network to disregard possible interfering sources. This constitutes an end-to-end approach, rendering post-processing in the form of matching DOA estimates to participants' locations unnecessary. Moreover, unlike previous audio-only methods which attempt to regress the azimuth direction only, our approach explicitly accounts for both azimuth and altitude information in the audio signal. Another branch of our neural network models lip motion, which is correlated with the audio to accurately determine speech. We train our system and evaluate it using an extensive and realistic dataset of multi-participant meetings collected specifically for this purpose, which features up to 14 participants in the same meeting, overlapped speech, and other challenging scenarios (see Sec. 4.1). Our contributions include: (1) A real-time algorithm consisting of a head detector, a head tracker, an ASD deep neural network (DNN) model, and a virtual cinematography module running concurrently on a low-power edge device. (2) A low-latency neural network architecture that uses multi-channel audio and video feeds, in addition to spatial query data, to determine whether each participant is speaking. (3) A formulation of the ASD problem, which uses the participant's location as an input to the network to query the multi-channel audio data, and predict speech/silence class for it, in contrast to predicting a DOA. (4) Ablation studies to assess the contribution of input features and system components on the overall accuracy of the system. (5) A method to handle compute budget exhaustion and graceful degradation. ## 2 Related Work Over the past two decades, extensive research has been conducted on the ASD problem, starting with the pioneering work [1, 2]. This research can be characterized based on a whole range of factors, some of which we mention below. ModalitiesSeveral modalities were employed for active speaker detection, each with its own advantages and limitations. Work utilizing visual modality (e.g. [3]) relies on face detection and tracking techniques to feed classifiers that operate per face and detect facial cues, lip movements, or body language. However, relying solely on the video modality increases the likelihood of misinterpreting facial movements as certain actions are ambiguous. Furthermore, video is susceptible to limitations in cases such as unfavorable lighting conditions or when participants are occluded, face away from the camera, or located far from it. When microphone array multi-channel audio is available, it becomes possible to utilize sound source localization (SSL) algorithms to determine the DOA of incoming sounds. Classical approaches to this problem include generalized cross-correlation-based methods [4, 5], and subspace-based methods. Lately, there has been a surge in the prevalence of DNN-based approaches to this problem [6]. These include the direct usage of short-time Fourier transform (STFT)-based features, classical features, or their combinations. In spite of this considerable progress, SSL algorithms exhibit reduced robustness in situations where two or more speakers are situated in proximity, in cases of overlapped speech, or when background noise or reverberations are present. A considerable amount of research has lately been focused on single-channel A/V fusion models. This challenging problem got much attention following [7] and then [8], which introduced the AVA Active Speaker dataset. Most of that research builds on the premise that facial motion patterns can be correlated with the audio signal. Few works were published lately that used multi-channel audio in combination with video. In [9], the authors use the audio signals to estimate a 2D heat map of acoustic activity in the scene and then concatenate it with a video frame while aligning the spatial coordinates. The result is then processed by another network to yield a final speech activity map. This work didn't use temporal modeling (which was shown to be beneficial for ASD [10]), and the audio pipeline made predictions without knowing the locations of the faces in the scene and without correlating lip movement with the audio. In [11], audio-based features are combined with video-based features using a cross-modal attentive fusion mechanism proposed in the paper. However, video is used only to specify the locations of faces, without modeling information originating from the mouth region. The researchers of [12], whose work is most relevant to ours, proposed to incorporate lip features into a DOA estimation system. However, their system is not real-time, was trained on the MISP dataset which focuses on a scenario substantially different from ours (see discussion below), and requires postprocessing to associate DOA outputs to faces. Another difference from [12] is that the latter detects the location of the lips, in addition to the face, thus spending compute time on this operation, as well as restricting the operational envelope of their system to near frontal poses. Our model directly determines each participant's speech state, uses the knowledge of the locations of the heads to query the microphone array data, correlates lip movement with audio, and models long-term temporal relationships. It runs in real-time, doesn't require preprocessing in the form of detecting the lips, and is not limited to near frontal faces. _Inference run-time performance:_ Despite significant progress made in the field of efficient neural networks (e.g., [13, 14, 15]), the ASD problem received little attention from the community in this regard. Recently, [16] proposed to split 3D convolutions into 2D and 1D convolutions to improve expressiveness and latency similar to the "(2+1)D" decomposition described in [17]. Their offline system, guided by this and other design choices, yielded near-SoTA results on AVA-ActiveSpeaker while requiring 200 MFLOPs per candidate per frame. _Datasets:_ There are several datasets with some degree of relevance to our research. We list them below and outline the key factors that prompted us to eventually collect and conduct experiments with our own data. The _AMI corpus_[18] includes multi-channel A/V recordings of meetings. However, there is a maximum of 4 participants per meeting. It also doesn't include audio-visual registration data, which prevents its use in algorithms in which it's needed. The _MISP2021 challenge_[19] and its accompanying dataset [20], contains video and _linear_ microphone array recordings of up to 6 sitting participants (as described in [12]), and focuses specifically on conversations in home TV rooms in Chinese, a scenario which is remarkedly different from ours. Unfortunately, we were also not able to experiment with it due to its restrictive license agreement. The popular single-channel A/V _AVA Active Speaker_ dataset [8], despite containing challenging scenarios such as dubbing and complex scenes, is highly skewed towards the film industry, and does not reflect the "true" distribution of human activity, as noted by its curators. The ASD task on this dataset is made simpler by taking advantage of priors relating to cinematographic effects (e.g., the camera tends to focus on the speaker [21]). Furthermore, in 90% of the frames of this dataset, 3 or fewer participants are visible. This contrasts with our dataset, which contains between 4 to 7 people in 75% of the frames, and 8 participants or more in 14% of the frames. ## 3 Method Our network architecture is illustrated in Fig. 1. Using three backbone networks, it first creates short-term representations of the A/V modalities and the spatial query information. We've chosen to use SqueezeNet [13] as a backbone network as it provides a good accuracy/performance tradeoff on the VPU. Then these embeddings are fed into a sequential model to be fused together, and to take long-term temporal context into account. For the sequential model, we've opted to select a TCN due to its low computational burden and superiority in a wide range of tasks [22]. At inference time, we take a sliding window approach: we maintain a first-in-first-out queue that contains the short-term embeddings. Whenever new embeddings become available, they're concatenated and put in a queue. If the length of the queue exceeds the receptive field size of the TCN the oldest item is removed. This simple approach allows the network to consider long-term information while calculating only the most recent embeddings at each timestep. ### Head detection and tracking The head detection and tracking module provides the locations of all persons in the room in each frame captured by the camera. The unconstrained meeting scenario involves many challenges, including occlusions, extreme head pose, varying lighting conditions, low resolution due to device-to-person distance, and motion blur. Therefore, any individual frame may not contain the necessary information for detecting all the people in the room. The head tracking uses head detection and low-level tracking to maintain a set of tracklets, where each tracklet is defined as a sequence of heads in time that belongs to the same person. We use a method similar to that in [23] with several adaptations to our specific setting, such as exploiting the stationarity of the camera for detecting motion, performing the low-level tracking by color-based mean-shift instead of gray-level based normalized correlation, tuning the algorithm to minimize the risk of tracklet mergers (which in our context are destructive), etc. Special attention was paid to meet the requirement for the real-time tracking of many people in the 360\({}^{\circ}\)panorama video. This was achieved by dividing the head detection and tracking task between two processes Figure 1: Proposed network architecture diagram - one for searching for new heads to track and one for tracking the heads found by the former process. To avoid large latency, the detector of the first process is applied to each frame to part of the frame only, each time on a different part of it in a round-robin fashion. To increase the efficiency of the second process, motion detection is applied on all head regions being tracked. When there is no motion, the head location remains the same, so there is no need to apply tracking. Since motion detection is cheaper than tracking, and since most of the meeting heads are stationary, a lot of compute is saved. ### Visual encoder We use the output of the tracker to crop the image patches of the participants, resize them to a fixed size \(H\times W\), and convert them to grayscale. Prior to cropping, the bounding box undergoes a small adjustment in its location and size to make sure that the lip region is included in it when the face is visible (even when the participant is facing sideways). The parameters of this adjustment are found in an initial experimentation stage. These transformations of the bounding box allowed pixels that are more informative to be included in the network's receptive field. This simple approach makes the calculation of the landmarks (and specifically lips) unnecessary, thus saving computing time. To include movement information in the representation, we stack the last \(l\) facial patches as channels to the backbone network \(f_{v}\). Since \(l\) is expected to be small, and since we smooth the output of the tracker before extracting the patches, the lack of registration between facial landmarks is minimal. Formally, we encode the short-term facial representation at each time-step \(t\) for participant \(i\), given its facial crop \(v_{t,i}\in\mathbb{R}^{H\times W}\), using the backbone network \(f_{v}\) as follows: \(v_{t,i}^{e}=f_{v}(v_{t-l+1,i}\oplus v_{t-l+2,i}\oplus\ldots\oplus v_{t,i})\), where \(\oplus\) denotes concatenation. ### Audio encoder Our system uses \(T_{a}\) seconds, ending at frame \(t\), of the \(M\) microphones' waveform data, sampled at frequency \(\nu\) to encode the multi-channel audio signal \(a_{t}\in\mathbb{R}^{T_{a}\nu\times M}\). STFT is then applied on each microphone's \(m\in\{1,\ldots,M\}\) signal \(a_{t}[\cdot,m]\) separately. We then use the resulting spectrogram \(S^{m}\in\mathbb{R}^{T_{B}\times F}\), where \(T_{B}\times F\) are the dimensions of the time-frequency bin matrix, to extract simple logarithmic and phase features, concatenate them along the channels dimension, and feed them into the audio backbone network \(f_{a}\). Formally, the momentary audio representation is encoded as follows: \[a_{t}^{e}=f_{a}\left(\bigoplus_{m\in\{1,\ldots,M\}}log(|S^{m}|+\epsilon) \oplus arg(S^{m})\right)\!.\] The result of the concatenation has the dimensions \(2M\times T_{B}\times F\), where the first dimension indexes the norm and phase signals of the channels. This allows the 2D-CNN network to integrate data across all microphones for each time-frequency bin. ### Spatial query encoder We use the shared coordinate system of the 360\({}^{\circ}\)camera and the circular microphone array to encode a query containing spatial information of the reference and background participants. We first construct vectors \(\mathbf{v}\) containing the sine and cosine of the azimuth \(\lambda\) and the altitude \(\phi\) of the participant's head, the angular width of the head \(\theta\), and the spherical distance (on a unit sphere) \(\delta\) between the reference head and the current background head: \(\mathbf{v}=(\sin\lambda,\cos\lambda,\sin\phi,\cos\phi,\theta,\delta)\). We then sort background participants by their distance from the reference head and take \(N\) background heads that are closest to the reference head. Then, we encode the reference head's vector using a 2-layer fully connected (FC) network \(f_{\text{ref}}\). A separate network with an identical architecture \(f_{\text{bg}}\) is used to encode the background heads' vectors. We then take the element-wise mean of these background vectors' encodings, concatenate it with the encoding of the reference vector, and feed the result into another FC combiner network \(f_{\text{comb}}\): \(s_{t,i}^{e}=f_{\text{comb}}\left(f_{\text{ref}}\left(\mathbf{v}_{t,i}\right) \oplus\frac{1}{N}\sum_{j\in\{1,\ldots,N\}}f_{\text{bg}}(\mathbf{v}_{t,j})\right)\) This representation allows our network to reason about the spatial location of the reference participant, their distance from the device, and their interaction with possible distracting (background) participants. ### Fusion and temporal modeling We define a prediction head as \(f_{\text{pred}}(v)=\text{softmax}(\text{FC}(TCN(v)_{t}))\). The fully connected network FC, which serves as a bottleneck, is 3 layers deep, and is applied on \(TCN(v)_{t}\), the last timestep of the TCN's output. It is followed by a 2-class softmax operation for speech/silence classification. The concatenation of all three encodings is defined as \(e_{t,i}=v_{t,i}^{e}\oplus a_{t,i}^{e}\oplus s_{t,i}^{e}\). When training, a binary cross-entropy loss function \(L_{\text{ECE}}\) is applied to the result, yielding the primary loss \(L_{\text{max}}=L_{\text{ECE}}\left(f_{\text{pred}}(e_{t-K+1,i},\ldots,e_{t,i})\right)\), where \(K\) is TCN's receptive field. To encourage robustness to modality-specific corruptions and avoid cross-modality co-adaption, we apply auxiliary losses to separate groups of modalities \(L_{v}=L_{\text{BCE}}\left(f_{\text{pred}}(v_{t-K+1,i}^{e},\ldots,v_{t,i}^{e})\right)\) and \(L_{\text{as}}=L_{\text{BCE}}\left(f_{\text{pred}}(a_{t-K+1,i}^{e}\oplus s_{t-K+ 1,i}^{e},\ldots,a_{t,i}^{e}\oplus s_{t,i}^{e})\right).\) Note that the audio and the spatial query encodings must be concatenated together, as the latter contains the location of the reference participant. The final loss is \(L=L_{\text{vis}}+\lambda_{v}L_{v}+\lambda_{\text{as}}L_{\text{as}}\). ### Data augmentation We augment the video data by randomly rotating and translating the facial clips to simulate head rotations and head detector's jitter, respectively. Audio is augmented by audio channel swapping, similarly to [24], which simulates the rotation of the microphone array by \(\frac{360}{M}\cdot k\) degrees, where \(k\in\{0,\ldots,M-1\}\), by taking advantage of its circular symmetry. The azimuth input to the spatial query encoder is rolled in tandem to maintain consistency. ## 4 Experiments ### Dataset The commercial system we are building is intended to run on a device placed strategically in the center of a conference room table to enable capturing and transmitting the A/V signals of all meeting participants. The device's camera is located approximately at the height of meeting participants' faces similar to the RoundTable system [2], has a full panoramic resolution of \(1666\times 10000\) pixels (height \(\times\) width), and is accompanied by a circular microphone array with a radius of 4.25cm, which is situated slightly above the camera. The array comprises 6 MEMS microphones evenly spaced around its circumference, along with an additional microphone placed at its center. We have collected 110 meetings, each 30 minutes long. We partitioned the dataset into 71, 17, and 22 meetings for the train, validation, and test sets, respectively. To avoid overfitting to specific participants, we made sure that every participant appeared in only one partition: 29, 17, and 14 participants in the train, validation, and test sets, respectively. The meetings were conducted in English; Figure 2: A frame from our dataset, arranged in two rows. The faces were pixelated to preserve participants’ privacy. however, our participants came from diverse ethnic backgrounds and spoke with various accents. The dataset contains an equal proportion of males and females. The recorded meetings are not scripted but conducted in a natural manner so that each participant is free to speak and behave in a way that the feels appropriate. However, to kick-start each meeting we provide a topic for each discussion, e.g.: Pros/cons of working from home or office, etc. We also assign each participant a set of activities they need to perform such as white-boarding, walking around, gesturing, entering/exiting the room, etc. Though these and other topics and activities can be raised and occur naturally, similarly to [18] we choose to explicitly elicit them to promote good coverage of real-world meetings. The recordings are annotated at intervals of 200ms, specifying for each participant whether that participant is speaking or not. This rate is sufficient for our purpose of implementing a virtual cinematographer. ### Implementation details We reshape each patch to \(120\times 192\) pixels and set the number of frames \(l\) (defined in Sec. 3.2) for the short-term video representation to 3. Though our device captures video at a frame rate of 30 FPS, our ASD system operates at 7.5 FPS, as we found in initial experiments that this frame rate provides a good tradeoff between accuracy and compute time. The audio is sampled at a 16 kHz rate, and we use a window size of 512 samples, a hop length of 160 samples, and 512 FFT bins as STFT parameters. We set \(T_{a}\), the duration of the waveform we process at each timestep, to 300 milliseconds. Following initial experimentation, we configured the TCNs to have three 64-channel layers. Our network architecture is implemented in PyTorch [25]. For training, we use the SGD optimization algorithm with a Nesterov momentum of 0.9 and batch size of 64. Training is stopped if the validation error doesn't improve for 10 epochs. ### Evaluation metrics In SSL literature, the mean absolute DOA error is commonly used as an evaluation metric in contrast to binary classification metrics (e.g., F1, AUROC, etc.), which are used in the majority of ASD-related work. We find the latter to be more informative in scenarios such as ours, in which sparsely located candidates are first found in a preprocessing step. Object (and specifically: head) detectors don't usually suffer from spatial error, reinforcing our point. In [12] the authors calculated accuracy and F1 score by applying a threshold of 20\({}^{\circ}\)on the DOA error. In our scenario, however, we are interested in detecting the active speaker regardless of the azimuth difference from neighboring speakers. Thus, confusing who the active speaker is between two neighbors is considered an error even if they are at the same azimuth. We therefore calculate the equal error rate (EER) for all (frame, participant) pairs w.r.t. the ground truth annotations. ### Ablation studies In Table 1, we show different system configurations, with certain features being either activated or deactivated for each one. Configuration C1, which uses only the visual signal, achieves an EER of 9.31%, while C6, which uses the audio signal in combination with the full spatial query achieves an EER of 7.13%. Their combination, C8 reduces the error rate to 5.99%. However, when this A/V network is trained without auxiliary losses (C9) the error rate increases significantly to 7.00%, emphasizing its importance. Exploring the effects of ablating components of the query when only the audio signal is available, we find that excluding the background speakers' representation increases the error slightly from 7.13% (C6) to 7.36% (C5). However, when both A/V signals are used, the network achieves similar error rates regardless of background speakers' encoding (C7, C8), probably by using information available in the visual modality to achieve similar results. When the query is eliminated altogether (C2) the visual and the audio trunks work by correlating visemes resulting in an EER of 8.47%, which is better than the visual trunk alone (C1). The lack of any spatial information (query) obviates the need for multi-channel audio and indeed, a system that takes as inputs the visual signal with a single-channel audio signal (C3) achieves a similar accuracy. This result, together with previous ones, suggests that the audio trunk is needed not only to determine whether speech is coming from a certain direction but also to correlate lip motion with audio. A system that uses the audio modality but is not provided with the query (C4) yields a very high error rate, as expected. ### Inference time and graceful degradation Our full network requires 167 MFLOPs (13ms on our target VPU device) to make a single inference for a single participant, out of which 43.5 MFLOPs (4.2ms) are spent on the audio encoder. As the audio encoder's representation is participant independent, we reuse its result for all participants in a frame. Running at 7.5 predictions per second, our computational budget is 133.3ms, allowing us to support up to 14 participants in the room. If this limit is exceeded, our system chooses the participants to predict in a round-robin fashion. That is, for every timestep the participants are sorted according to their last prediction times. Those that are the oldest get predicted till the computational budget is exhausted. This algorithm results in a stride that is different from the one that was used during training, constituting out-of-distribution data. Moreover, the stride between consecutive invocations may not be constant. We simulate round-robin in evaluation time and report the error rate as a function of the average prediction rate in Fig. 3. We observe that the EER increases smoothly, up to 7.51% at an average prediction rate of 1.875 predictions/sec, which is still a reasonable figure for our purposes, especially given the fact that the model was not trained to handle this scenario. ## 5 Conclusions We have described a real-time efficient ASD system whose performance gracefully degrades under heavy computational constraints. We have departed from traditional DOA estimation methods by querying available acoustic data using detected head locations, eliminating the need for post-processing stages, thus taking another step towards end-to-end architectures for multi-channel ASD. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & **Vis** & **Aud** & **Query** & **Bg** & **Aux** & **EER (\%)** \\ \hline C1 & ✓ & & & & 9.31 \\ C2 & ✓ & ✓ & & & ✓ & 8.47 \\ C3 & ✓ & 1ch & & & & 8.27 \\ C4 & & ✓ & & & & 42.32 \\ C5 & & ✓ & ✓ & & 7.36 \\ C6 & & ✓ & ✓ & ✓ & & 7.13 \\ C7 & ✓ & ✓ & ✓ & & ✓ & 6.02 \\ C8 & ✓ & ✓ & ✓ & ✓ & ✓ & **5.99** \\ C9 & ✓ & ✓ & ✓ & ✓ & & 7.00 \\ \hline \hline \end{tabular} \end{table} Table 1: Ablated configurations and results. Bg denotes background speakers’ locations encoding. Aux denotes auxiliary loss. Figure 3: EER versus frame rate
2309.05941
Random Segmentation: New Traffic Obfuscation against Packet-Size-Based Side-Channel Attacks
Despite encryption, the packet size is still visible, enabling observers to infer private information in the Internet of Things (IoT) environment (e.g., IoT device identification). Packet padding obfuscates packet-length characteristics with a high data overhead because it relies on adding noise to the data. This paper proposes a more data-efficient approach that randomizes packet sizes without adding noise. We achieve this by splitting large TCP segments into random-sized chunks; hence, the packet length distribution is obfuscated without adding noise data. Our client-server implementation using TCP sockets demonstrates the feasibility of our approach at the application level. We realize our packet size control by adjusting two local socket-programming parameters. First, we enable the TCP_NODELAY option to send out each packet with our specified length. Second, we downsize the sending buffer to prevent the sender from pushing out more data than can be received, which could disable our control of the packet sizes. We simulate our defense on a network trace of four IoT devices and show a reduction in device classification accuracy from 98% to 63%, close to random guessing. Meanwhile, the real-world data transmission experiments show that the added latency is reasonable, less than 21%, while the added packet header overhead is only about 5%.
Mnassar Alyami, Abdulmajeed Alghamdi, Mohammed Alkhowaiter, Cliff Zou, Yan Solihin
2023-09-12T03:33:36Z
http://arxiv.org/abs/2309.05941v1
# Random Segmentation: New Traffic Obfuscation against Packet-Size-Based Side-Channel Attacks ###### Abstract Despite encryption, the packet size is still visible, enabling observers to infer private information in the Internet of Things (IoT) environment (e.g., IoT device identification). Packet padding obfuscates packet-length characteristics with a high data overhead because it relies on adding noise to the data. This paper proposes a more data-efficient approach that randomizes packet sizes without adding noise. We achieve this by splitting large TCP segments into randomized chunks; hence, the packet length distribution is obfuscated without adding noise data. Our client-server implementation using TCP sockets demonstrates the feasibility of our approach at the application level. We realize our packet size control by adjusting two local socket-programming parameters. First, we enable the TCP_NODELAY option to send out each packet with our specified length. Second, we downsize the sending buffer to prevent the sender from pushing out more data than can be received, which could disable our control of the packet sizes. We simulate our defense on a network trace of four IoT devices and show a reduction in device classification accuracy from 98% to 63%, close to random guessing. Meanwhile, the real-world data transmission experiments show that the added latency is reasonable, less than 21%, while the added packet header overhead is only about 5%. device fingerprinting: IoT privacy; traffic analysis countermeasure; traffic shaping + Footnote †: journal: Computer Science 0 Footnote 0: footnotetext: 0 Footnote 0: footnotetext: 0 Footnote 0: 0: footnotetext: 0 ## 1 Introduction The wide adoption of IoT devices comes with a privacy threat. Even with encryption, the metadata of encrypted traffic, such as the packet size, data volume, and packet inter-arrival time, can be utilized by passive observers to conduct _device fingerprinting_ (DF) attacks [1; 2]. These attacks enable observers to identify the presence of devices and their operational states, thereby allowing adversaries to infer privacy-sensitive information about user behaviors and activities. For example, Wang et al. [3] show that an observer can identify which command a user gives to a smart speaker using the packet length sequence and direction. DF becomes feasible due to correlated information in encrypted traffic associated with IoT devices. Several studies [4; 5] have validated that an observer can passively capture the network traffic and use features of packets' lengths and timing to build machine learning-based classifiers for device identification. Once a device is successfully fingerprinted, the adversary may monitor the fluctuations in the device traffic to detect network events (e.g., Nest Thermostat is in Active or Idle mode) [6]. Hence, protecting against device identification would not only prevent DF, but also hinder event detection (i.e., event-level adversaries must identify the device first and then monitor for status-indicating patterns). WiFi observers can passively capture network traffic transmitted over the WiFi channel without joining the network (see Section 3). Furthermore, secured WiFi encryption cannot hide the MAC-layer traffic metadata, including the frame size, observation timestamp, and signal strength. The signal strength was not found to be a useful attribute for DF [2]. Thus, fingerprinting defense approaches aim to mutate the lengths and/or transmission time. For example, to address the packet-size leakage, the current traffic shaping methods mainly pad packets with additional bytes to obscure the related characteristics [7]. Regarding the timing side-channel, which falls outside the scope of this paper, adding a random packet delay has been employed as a means to prevent such information leakage [8]. Obviously, both countermeasures introduce data and time overhead. There has been active research on improving privacy with minimum data overhead [9; 10]. These methods are typically centered on minimizing the injected noisy data to conceal IoT traffic. However, these improvements fail to balance privacy protection and overhead [11] (i.e., the attack accuracy or data overhead is high). Inspired by the principles of TCP segmentation [12], we propose an alternative approach to distort length-based patterns without adding noise, thus achieving anonymity with a significantly lower data overhead. Our defensive strategy randomizes the packet lengths by breaking the data stream into random-sized chunks instead of injecting noise data for packet-size obfuscation. We implement our approach at the application level using TCP socket programming, which makes our defense easier to deploy. In this way, an IoT device manufacturer needs a simple software update on its devices to deploy the proposed defense without changing the devices' operating system or low-level codes. We realize our packet-size control by adjusting two local socket-programming parameters. First, we enable the TCP_NODELAY option to force the operating system to push out each packet with our specified length without waiting for additional data. TCP_NODELAY is a TCP socket option that can be used to turn on/off Nagle's algorithm [13], which by default adds a small latency to improve the network efficiency. It minimizes the number of small TCP segments sent over the network by buffering the data and combining them into larger segments for transmission. Second, we downsize the sending buffer of the socket to prevent the sender from pushing out more data than can be received, which could disable our control of packet sizes. This paper argues that added noise traffic is needed only to mask data-volume-related features, which is non-discriminatory for devices with highly variable data rates (see Section 5.1.1). In fact, the data volume has been utilized for event detection of an already identified device, the step that our defense prevents in the first place. Thus, noise traffic is often unnecessary to hide device-level signatures, and hence, randomization can be achieved without noise, as proposed in this work. In summary, we present a new defense against packet-size leakage attacks with the following properties: * Data-efficient: Traditional countermeasures add noise traffic to hide packet-size-based signatures, resulting in a significant data overhead. Our defense thwarts such leakage without adding any noise; thus, it is much more efficient than noise-based solutions. * Adaptable: Effective techniques in the literature utilize a fixed dynamic for obfuscation (e.g., padding to the maximum transmission unit (MTU)), which poses a non-optimizable overhead. Our defense utilizes adjustable parameters within the application code, enabling greater flexibility and programmability in managing defense strength and overhead. The remainder of this paper is organized as follows: We examine the relevant literature and previous studies in Section 2. In Section 3, we present the threat model. Section 4 provides a detailed explanation of our approach for traffic obfuscation. We evaluate our technique and discuss our results in Sections 5 and 6, respectively. Finally, we conclude and discuss future work in Section 7. ## 2 Related Work Many studies have shown how packet-length information can be exploited to identify IoT devices [4] and specific events [3; 6]. The Onion Router (Tor), a well-known privacy-preserving system, addresses such side-channel leakage by sending data in a fixed packet length [14]. Nevertheless, adopting Tor increases the amount of received traffic and adds additional latency due to the multi-hop nature of Tor. Packet padding has an acceptable effectiveness but incurs a high data overhead. The authors of [7] reported that several packet-padding strategies could thwart the attackers' classification but increased the amount of data sent significantly (>500%). A lightweight solution presented by Pinheiro et al. [15] could reduce the accuracy of IoT device identification to higher than random guessing by 15%. Their mechanism inserts random bytes between 1 and the available space to fill the packet (i.e., to equal MTU). Still, the added noisy data (54%) can lead to an undesirable communication overhead. A closely related defense [16] can successfully defeat analytics based on WiFi eavesdropping. It uses dummy traffic to shape a pair of devices' traffic to be similar. The technique could spoof other devices' traffic by constructing the flow of dummy packets using prerecorded traces of the targeted device. Thus, an attacker cannot identify a specific device. Moreover, it incurs zero Internet bandwidth overhead by dropping the dummy packets at the access point (AP) before sending them to the Internet. The WiFi-based AP needs to be modified to drop dummy packets, which are flagged using the reserved bit flag on the IP header. However, its effectiveness diminishes when the attacker can monitor the IP-level traffic. Insiders accessing the IP header, such as rogue APs and network snoopers, can filter out flagged dummy packets. As a result, network-layer observers can recover the original/undefended traffic and overcome the defense. Our technique addresses the size-based leakage against both internal and external adversaries (i.e., IP- and MAC-level observers) because our segmentation occurs at the transport layer before the packet construction. Traffic splitting was initially introduced for multi-path routing [17], which splits the flow across different paths to prevent malicious intermediary nodes from recording the whole traffic. It presented two levels of defense. First, the network-layer defense applies a multipathing strategy within the Tor network to obscure the traffic patterns. The second application-layer defense follows the same concept. It decomposes HTTP requests into sub-requests in parallel over multiple paths or sends a single HTTP request for different web objects over different entry nodes in Tor circuits. Assuming partial data is insufficient to perform traffic analysis attacks, this defense is effective against remote observers. However, in this paper, we are also concerned about local eavesdroppers who are physically close to the device transmission range. No middleboxes are involved in this scenario; hence, the attacker can collect the complete capture to perform the attack. On the other hand, our defense considers all observers positioned in the link between the source and destination, remote and local observers alike. Traffic shaping has been introduced as a routing-optimization technique for vehicular ad hoc networks [18]. This approach employs reinforcement learning to enhance the efficiency of routing decisions, particularly in demanding and real-world situations marked by unstable connections, varying communication ranges, and rapid topology changes. This objective is achieved through distributed reinforcement learning, enabling the routing protocol to learn from vehicle experiences to make optimal decisions and adjust to the network's unpredictable fluctuations. This work cannot be extended to determine packet sizes, as its primary objective is to manage packet routing rather than packet sizes. Signal-jamming approaches can serve as a defense against traffic analysis attacks in wireless networks, all without the need for adding dummy packets or intentional delays. Generally, this method employs antennas to disrupt traffic at possible adversary positions, effectively elevating the noise level [19]. However, this tactic generates interference that impairs the performance of nearby networks and, furthermore, it is considered unlawful ([https://www.fcc.gov/general/jammer-enforcement](https://www.fcc.gov/general/jammer-enforcement) (accessed on 15 May 2023)). Packet-size randomization has previously been proposed to address the side-channel leakage in secure shell (SSH) communications [20; 21; 22], widely used for secure remote access and communication. However, their proposed modifications are specific to the SSH protocol. Many IoT devices often rely on lightweight messaging protocols to fulfill the IoT communication requirements [23], such as Hypertext Transfer Protocol Secure (HTTPS), Message Queuing Telemetry Transport (MQTT), Constrained Application Protocol (CoAP), Extensible Messaging and Presence Protocol (XMPP), and Data Distribution Service (DDS). Our work proposes a novel use of random segmentation at the transport layer, responsible for passing the data received from all the application layer communication protocols mentioned above. In this manner, our defense is less demanding for deployment and suitable for the IoT architecture. TCP segmentation was previously proposed to reduce the per-packet overhead on host processors for wired networks [12]. This approach delays segmenting the data into smaller units and sends it as a larger TCP segment to improve efficiency. Unlike this work, our technique makes the segmentation random and, thus, unpredictable to evade patterns on side-channel information that can be used to identify IoT devices. ## 3 Threat Model We consider two observation points an adversary can exploit to collect encrypted WiFi traffic. In both scenarios, the attacker is physically located within the signal range of the victim's WiFi router or AP. The attacker can be one of the following: _Active Observer:_ The attacker can set a rogue AP with the same network name as the victim's network, which may lure IoT devices to connect to the rogue AP instead of the legitimate one. In this case, the attacker can observe and analyze the IP-level traffic of the connected IoT devices. We assume the observer can inspect the header of IP packets but does not know the device or break the encryption. _Passive Eavesdropper:_ The attacker can listen to the wireless channel to capture the encrypted WiFi traffic using a WiFi card in monitor mode ([https://en.wikipedia.org/wiki/Monitor_mode](https://en.wikipedia.org/wiki/Monitor_mode) (accessed on 15 May 2023)). Eavesdroppers are not required to access or join the network. The attacker is, therefore, very hard to detect. We assume the attacker can access the same IoT devices as the victim's network. The attacker can collect the encrypted traffic to build a profile that can be used to identify IoT devices with similar traffic patterns. Our adversary aims to infer the device (e.g., doorbell, sleep monitor, etc.) and then monitor for network events based on traffic pattern changes, e.g., a surge in doorbell traffic indicates the arrival of a visitor, a surge in sleep monitor traffic indicates a user is awake, etc. We consider device-fingerprinting attacks that operate on packet lengths and directions. Timing information falls beyond this paper's scope and, therefore, is not considered by our defense. ## 4 Materials and Methods ### Noise-Free Randomization We defend against traffic-analysis attacks by enabling IoT applications to control their packets, where it is not typically controllable. The application layer delivers a message of byte stream to the transport layer, which appends its header information (e.g., port number) and passes the data to the network layer as a segment. The segment size is determined by the maximum segment size (MSS) and specific situations summarized below [24]: * If the application message \(\geq\) MSS, the TCP protocol sends the data in full segments equal to MSS for transmission and holds any portion of data surpassing MSS in an incomplete segment accumulating more bytes. * If the message \(<\) MSS (i.e., there is still space in the segment) and a previously sent packet has not been acknowledged yet, the protocol waits for some time (\(+/-200\) ms [13]) to accumulate more bytes. This time delay allows for collecting more data to optimize network usage. * If no additional data arrives within the timer period, the protocol dispatches the available data for transmission. The network layer, which is responsible for sending the data over the network, receives the segments and breaks them into as few full packets as possible by the MTU restriction. Indeed, the MSS value at the transport layer depends on the underlying network MTU to ensure TCP segments can be properly encapsulated within network packets. MSS parameters are exchanged during the TCP-handshake phase, and the network interface card's device driver provides the TCP/IP stack with the MTU value. With our defense, the packet size is indistinguishable due to the randomness. That is, it consistently breaks thet received messages into arbitrarily sized segments. Suppose an IoT application sends a message \(m\) of \(n\) bytes. As stated earlier, the system, by default, will transmit \(m\) in a single segment if \(n=\) MSS or pass the small message (\(n<\) MSS) as it is after the timeout period (\(+/-200\) ms). If \(n>\) MSS, \(m\) will be divided into \(\lceil n/\)MS\(\rceil\) segments, all equal to MSS except the last segment if \(n\) is not an exact multiple of MSS (i.e., \(n\%\)MSS\(>0\)); then, the last segment will hold the remaining bytes (\(n\%\)MSS). For example, assuming MSS is 1500, an application message of 3500 bytes will be transmitted in three segments (1500, 1500, 500). Unlike this dynamic, our defense will divide \(m\) into a random number of segments, and each segment's length will also be random. As a result, the data pattern from the length features is not predictable. Figure 1 depicts two communication scenarios between a cloud server and an IoT device; the top represents the regular/undefended traffic, and the bottom shows the shaped/defended one. In both examples, we assume the server sends the same message two times. We here focus on the incoming traffic from the server side to demonstrate the concept of our defense. However, we expect both endpoints (i.e., server and client IoT) to implement our defense when communicating with each other. Thus, the bidirectional traffic is fully obfuscated. In the first scenario, the server with no defense sends data in a typical packetized flow, leaking exploitable signatures for fingerprinting attacks. In contrast, the second defended scenario shows that randomization occurs at the transport layer. Specifically, our technique passes random-sized segments to the network layer. Consequently, the traffic pattern generated by our shaping technique is challenging to classify. ### Algorithm Algorithm 1 shows the pseudo-code processes of the proposed defense. We assume cloud servers and IoT devices run our program when sending packets. Figure 1: Two communication scenarios between a cloud server and a client IoT device. The top shows the original traffic without any defense, and the bottom depicts the obfuscated traffic after utilizing our proposed defense. ``` 1:\(Data[]\) -- Byte array holding the application message. 2:Min,Max -- Select the minimum and maximum segment sizes. 3:\(Prob\) -- Select the segmentation probability. 4:if length of \(Data\geq Min\)andrandom.random0 \(\leq Prob\)then 5:start \(\leftarrow\) 0 6:end -- length of \(Data-1\) 7:whilestart \(\leq\) enddo 8:\(RandLen\) -- random(Min, Max) 9:ifstart + RandLen \(\geq\) endthen 10: index -- end 11:else 12:index -- start + RandLen 13:endif 14:\(Seg\) -- Data[start : index] 15:send \(Seg\) 16:start \(\leftarrow\) index + + 17:endwhile 18:else 19:Data are left to be handled by the operating system without segmentation. 20:endif ``` **Algorithm 1** Random Segmentation of Application Messages The program stores the received message from the application layer in a byte array \(Data\). Provided that the message is long enough to perform random segmentation (such that the length of \(Data\geq\) the minimum segment size \(Min\)), our algorithm randomly decides whether to split the message but with a specific probability threshold \(Prob\), such that 0 \(\leq Prob\leq 1\). If yes, the main loop loads a random chunk of \(Data\) into an individual segment \(Seg\) for transmission and loads another random chunk from \(Data\) in the next iteration process until the array becomes empty. The size of each segment is determined randomly by \(RandLen\), but does not surpass MTU to avoid fragmentation. However, the upper and lower bound of \(RandLen\) (i.e., \(Min\) and \(Max\)) is adjustable to suit the device traffic pattern. For example, for a device that sends light traffic with a maximum of 300 bytes in length, the upper bound of \(RandLen\) should be less than 300 to achieve randomness. Otherwise, the program will send the whole payload in \(Data\) if \(RandLen\) exceeds the message's size. ### Multi-Level Segmentation Due to the significant differences in the packet size range of many IoT devices operating in different modes, choosing the appropriate range for length randomization (i.e., \(Min\) and \(Max\)) is challenging. For example, our preliminary analysis of our camera traffic shows that 93% of the packets are below 150 bytes when the camera is idle. On the other hand, when the camera becomes active, 58% of packets are above 1000 bytes. Hence, utilizing one range to mask all functional scenarios, such as splitting all segments into chunks between 100 and 150 bytes, will obfuscate the whole traffic but lead to excessive segmentation and increase the overhead. Given the limitation of the one-level segmentation, it is necessary to make our defense adapt to the change in traffic volume. Thus, we adopt a multi-level segmentation to enable our algorithm to use a suitable range based on the traffic intensity. For example, we use three levels for the high-bandwidth devices. Level 1 splits messages \(\leq\)200 bytes into random chunks between 20 and 40. Other messages above 200 and \(\leq\)500 are randomized using random lengths between 100 and 300 in level 2. Larger streams in level 3 that are above 500 are obfuscated using a random size between 500 and 1000. Obviously, that creates non-overlapping bands where the observer can recognize the corresponding band, but it is still insufficient to perform the attack due to the randomness within each band. ### Practical Considerations To change the standard packet sizing enforced by underlying protocols controlled by the operating system, we initially considered changing the MTU. Dynamic modification of the MTU value will directly impact the packet size, breaking larger packets into smaller ones to fit within the new limit. However, MTU is a system-wide parameter, and such modification will not affect our program but the entire system, which is undesirable. In addition, smaller MTUs increase packet fragmentation, leading to adverse consequences for the system's performance. Assembling fragments at the destination adds extra burden and reduces the overall efficiency. Further delay can also occur when a fragment is missed or corrupted. In this case, the receiver cannot read partial data, and the whole data frame must be retransmitted. On the other hand, our approach does not fragment packets but divides TCP segments into separate IP packets. Hence, we avoid the drawbacks associated with packet fragmentation. The main challenge in applying our idea is that applications do not have direct packet abstraction to control packet lengths. We could work around this technical obstacle by modifying two parameters on a per-socket basis, thus not affecting other programs. First, we disable Nagle's algorithm [13] using the TCP_NODELAY option. Nagle's algorithm avoids sending small TCP segments by introducing a time delay to collect more data so that it sends full segments. Since we aim to send data in predetermined sizes, this data aggregation contradicts our purpose and needs to be disabled. Second, in the case of intensive traffic, we limit the amount of data that can be pushed out of a socket at the initial stage of the communication. TCP begins with a small congestion window to assess the network condition and find the optimal window size. As we turn off Nagle's algorithm, many small packets can be sent immediately. As a result, the operating system overrides our program and accumulates the data into larger packets to improve efficiency, rendering our defense ineffective. To avoid this problem, we decrease the send socket buffer to less than the receive buffer. As stated, our traffic-shaping technique masks the packet length without adding noisy data. Thus, the data rate remains unchanged (i.e., the amount of transmitted data is the same). Therefore, we added another module to select the amount of covered traffic as needed. In our approach, we inject a certain amount of traffic to make one device similar to another in terms of the data rate. ### Implementation We developed a server-client implementation using socket programming in Python. We use the TCP protocol to send packets due to its reliability and expect our obfuscation methodology to apply to UDP as well. Our source code is publicly available at GitHub ([https://github.com/MmassarAlyami/Random-TCP-Segmentation.git](https://github.com/MmassarAlyami/Random-TCP-Segmentation.git) (accessed on 29 July 2023)). We assume the defender has access to the targeted IoT devices and servers to install our program as a patch used by a hook that intercepts every send command from the application layer. That is, it will replace the standard send code with our patch to randomize the packet size. ## 5 Evaluation and Results In this section, we first evaluate our defense against traffic classification of IoT devices based on packet length. We compare the performance of our noise-free randomization with an analogous noise-based mechanism that uses random packet padding [15]. Second, we quantify the impact of our technique on communication performance through real-world experiments. Below, we discuss each aspect and present our results. ### Effectiveness To evaluate the randomness in the packet length introduced in our technique, we developed a program to simulate our defense on a WiFi trace of four IoT devices: doorbell, camera, light bulb, and smart plug. We captured the encrypted traffic for one hour in different operating modes (e.g., ON-OFF). Our program reads the pcap file of each device and produces the obfuscated traffic to test our defense against DF attacks. Table 1 outlines our configuration for the adjustable system parameters introduced in Sections 4.2 and 4.3. The probability was manually chosen with the goal of minimizing the overhead (i.e., reducing the number of segmented packets) while maintaining a lower accuracy. The initial value of 0.6 resulted in a high accuracy (>80). Consequently, we increased the probability to 0.7, and the accuracy remained consistently high. Subsequently, when we further elevated the probability to 0.8, a reduction in accuracy was achieved. We used the same defense parameters for each group of devices categorized based on the traffic intensity because different parameters will likely create new patterns to distinguish the devices. Furthermore, we run another simulation to obfuscate our captured trace using the random padding introduced in [15] to compare the effectiveness of a traditional noise-based solution with our defense. The attacker's profiling classifier is trained using the training data from the original trace. To assess the attack's performance without defense, we test the classifier using the testing data derived from the original trace. Likewise, we evaluate the defense approach by initially creating a modified trace using our defense program, relying on the original trace as a foundation. Subsequently, we proceed to train the attack classifier using the training data from the modified trace and then test the classifier using the corresponding test data in the modified trace. We use Random Forest for our classification due to its outperformance on similar IoT device-identification attacks compared with several other ML algorithms [2; 4]. We randomly divided our dataset into 70% for training and 30% for testing and quantified the performance of our classifier using the following metrics: accuracy, precision, recall, and F1 score. The specific computation of these metrics can be found in the Appendix A. Note that we show the indistinguishability of devices by applying our technique to training and testing data; the closer the accuracy to random guessing (50%), the more effective our defense is in confusing the classifier. Random guessing attains an accuracy rate of \(1/k\), where \(k\) is the number of labels/devices. As we confuse the attacker between two devices of similar traffic intensity, then \(k=2\). We evaluate against an attacker who exploits the packet size and direction only. Thus, we construct our dataset using packet sizes in a binary format to represent directions; a positive size represents incoming packets, and a negative size, outgoing ones. Similar to [2; 16], we break the trace into sequences observed within a 30 s time window for classification. #### 5.1.1 Preliminary Data Analysis Figure 2 presents a sample traffic flow observed over a period of 10 min before and after implementing our defense. Before obfuscating the traffic, we observed variable traffic patterns in the light bulb and smart plug. Specifically, the bulb's traffic was higher than the plug's in seven instances, lower in two instances, and similar in one case (Figure 1(a)). This inconsistency in traffic poses a challenge for reliable device profiling based on data-volume-related features. The impact of this variability is evident in the spike in bulb traffic (Figure 1(a), traffic period 5), which was misclassified as camera traffic. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**r**arameter} & \multicolumn{2}{c}{**Low-Bandwidth Devices**} & \multicolumn{2}{c}{**High-Bandwidth Devices**} \\ \cline{2-5} & **Bulb and Plug** & \multicolumn{2}{c}{**Doorbell and Camera**} \\ \hline _Prob_ & 0.8 & & 0.8 & \\ _Min_ & 5 & L1 = 20 & L2 = 100 & L3 = 500 \\ _Max_ & 20 & L1 = 40 & L2 = 300 & L3 = 1000 \\ \hline \hline \end{tabular} * L1, L2, and L3 refer to the three levels of segmentation introduced in Section 4.3. \end{table} Table 1: Our experimental setup of the system parameters *. On the contrary, the camera and doorbell exhibit stable traffic patterns while operating in a fixed working mode, such as recording videos (Figure 1(c)). However, the camera's higher resolution results in significantly larger traffic compared to the doorbell. Therefore, to account for the variance in data volume between the camera and doorbell, we introduced covered bytes to the doorbell to compensate for the variance in data volume (Figure 1(d)). Dummy/covered packets can be labeled and discarded at the receiving side. We assume the defender can leverage the header field of the Traffic Flow Confidentiality (TFC) mechanism [25]. This mechanism provides a tool to inject dummy packets using a wrapped header field that is encrypted and cannot be observed by network observers. The injection statistics are summarized in Table 2, revealing that no dummy packets were injected for the bulb and plug since the data rate does not serve as a suitable representative for those two devices. Conversely, the doorbell necessitates a greater incorporation of covered bytes to achieve parity with the camera in order to introduce confusion within the classifier's discrimination between the two devices. Additionally, a marginal proportion of covered bytes (0.7%) was introduced to the camera to mask its disparities from the doorbell, particularly during periods of inactivity. It is important to highlight that we incorporated a 20% time delay (refer to Equation (2) for the specific computation) in our simulation to align it with the findings in Section 5.3. As a result, the overall traffic attributes were affected. For instance, the event surge observed in the light bulb's traffic before applying our defense (Figure 1(a), traffic period 5) can be \begin{table} \begin{tabular}{c c} \hline \hline **Device** & **Covered Bytes (\%)** \\ \hline Bulb & 0 \\ Plug & 0 \\ Camera & 0.7 \\ Doorbell & 340 \\ \hline \hline \end{tabular} \end{table} Table 2: Injected covered bytes to hide data-rate features. Figure 2: Traffic flow of four IoT devices over 10 min. seen in the subsequent observation time window in the defended traffic (Figure 1(b), traffic period 6). Figure 3 illustrates the impact of our approach on the packet size. We chose the bulb and plug for demonstration as they rely entirely on random segmentation for packet-size obfuscation (i.e., no covered bytes were injected). Before implementing our defense, we can notice in Figure 2(a) a steady average size (nearly 130 bytes) sent by the plug versus fluctuating value by the light bulb. (After subtracting the frame header (82 bytes), a data frame of 130 bytes means there are 48 (i.e., 130-82) bytes in the payload for the segmentation.) We can observe similar behavior in the return traffic represented by negative values in Figure 2(c). After obfuscating the traffic (Figure 2(b),d), the described range of each device starts to overlap, introducing uncertainty in the learning process for device classification. ### Efficiency In this section, we evaluate the byte overhead \(B\) and the time taken \(T\) by our algorithm. We calculate \(B\) as: \[B=\begin{array}{c}\frac{D}{b}-\frac{W}{b}\\ \mathcal{W}_{b}\end{array} \tag{1}\] where \(D_{b}\) is the total amount of bytes transferred when implementing our defense, and \(W_{b}\) is the total amount of bytes transferred without implementing our defense. As we utilize the covered bytes presented in Table 2 solely for concealing data rate features, we deduct them from this calculation in the context of packet-size obfuscation. For the second aspect (\(T\)), we set up a remote virtual server and let our local machine send a large file of 10 MB, with and without our defense. Thus, we calculate the added latency by our methodology compared with the standard/undefended transmission scenario. We define \(T\) as: \[T=\frac{D_{t}-W_{t}}{W_{t}} \tag{2}\] Figure 3: Average packet size before and after obfuscating the bidirectional traffic of two devices over time. where \(D_{t}\) is the time span to send the file when implementing our defense, and \(W_{t}\) is the time span to send the same file without implementing our defense. Furthermore, we implemented two randomization levels to analyze the impact of the obfuscation intensity (i.e., range of random values) on \(T\). The wider the range of random lengths, the more packets are required to carry the payload. Consequently, more packets might take a longer time to deliver. For instance, sending a large array of data using random-sized packets ranging from 100 to MTU will result in significantly more packets than using a range of larger lengths between 1200 and MTU. For this experiment, we define two randomization levels (\(Rand_{(low)}\) and \(Rand_{(high)}\)) with an upper bound of a common maximum length of 1400 bytes, whereas the lower bound of each level varies as follows: \(Rand_{(low)}=1200\) and \(Rand_{(high)}=100\). We run ten sets of experiments and report the average result in the following section. ### Results As shown in Table 3, all the classification metrics used to evaluate the randomness in the shaped traffic are close to the baseline of random guessing. The values in the last column are bolded to represent the best-performing results. The result demonstrates the reduction in classification accuracy from 98% by the attack to 63% by our defense. Also, our technique achieves a better obfuscation (lower accuracy) than random padding. The attack accuracy under our defense is 8% lower. Moreover, Table 4 compares the byte overhead \(B\) of our defense with random padding. The bolded values in the last column represent the most favorable outcomes. Our defense incurs a significantly lower overhead for all devices, which saves nearly 47% of the total overhead. (From Table 4, \(B\) with random padding is 54% more than with random segmentation, which achieves 7%, resulting in a savings of 47% (54-7).) Last, we report the latency results from our large file transmission experiments between our client machine and a remote server. As shown in Figure 4, our defense comes with an average time overhead of 20.5%. Compared with random padding, our technique underperforms by only 0.7%, as [15] reported a 19.8% latency. The same figure (Figure 4) also shows a stable \(T\), regardless of whether we perform low or high splitting (using \(Rand_{(low)}\) and \(Rand_{(high)}\)). Although \(B\) has increased by 4.5% due to the intensive splitting using \(Rand_{(high)}\), more splitting seems not to introduce noticeable latency. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Metric**} & **No Obfuscation** & **Random Padding**[15] & **Random Segmentation** \\ \cline{2-5} & **\(W_{b}\) (MB)** & \(D_{b}\) (MB)** & \(B\) (\%) & \(D_{b}\) (MB) & \(B\) (\%) \\ \hline Bulb & 0.0348 & 0.1936 & 456 & 0.1091 & **214** \\ Plug & 0.0287 & 0.1626 & 467 & 0.0887 & **209** \\ Camera & 549.2 & 766.7 & 40 & 589 & **7** \\ Doorbell & 155.3 & 317.6 & 105 & 168 & **8** \\ \hline Total & 704.6 & 1084.7 & 54 & 757.2 & **7** \\ \hline \hline \end{tabular} * Refer to Equation (1) for details regarding the definition of \(W_{b}\), \(D_{b}\) and the specific computation of \(B\). \end{table} Table 4: Byte overhead \(B\) of two defenses *. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Metric**} & **No Obfuscation** & **Random Padding**[15] & **Random Segmentation** \\ & **(\%)** & **(\%)** & **(\%)** \\ \hline Accuracy & 98 & 71 & 63 \\ Precision & 98 & 77 & 67 \\ Recall & 98 & 71 & 63 \\ F1 & 98 & 71 & 63 \\ \hline \hline \end{tabular} \end{table} Table 3: Classification accuracy, precision, recall and F1 score of two defenses. ## 6 Discussion Effectiveness: Our results validate that our approach can disrupt packet length features to protect against traffic-analysis attacks. By simulating our defense on a trace from real IoT devices, we show that the obfuscated traffic resulted in a baseless classifier comparable to random guessing. This is because our defense sends data packets in random sizes, which prevents the classifier from learning length-based fingerprints for profiling. Our technique does not consider timing characteristics, such as the interarrival time. However, the issue of timing leakage has been addressed by delaying the data packets to obscure the related patterns [8]. This approach can be integrated with our defense to effectively conceal both the timing and the length features. Efficiency: Our randomization technique is noise-free, and hence, more data-efficient than other noise-based approaches like packet padding. However, there is a case where the header overhead of our defense is higher than padding. If the size of the transmitted flow is relatively small compared to the packet headers (54 bytes), then every split with our approach adds more data than the payload itself. For example, if we have a device that sends a small packet of 100 bytes per second, it becomes expensive to split the packet into multiple chunks, as the header overhead from generating an additional packet becomes 54% (54/100). Nevertheless, such small traffic would be marginal to the total bandwidth in the network. In terms of time overhead, our countermeasure incurs a reasonable latency (20.5%), which many IoT devices can tolerate, such as sleep monitors and smart plugs [26]. However, high-bandwidth devices like cameras may experience degradation due to the need for greater bandwidth to support video streaming. One interesting insight we observe in Figure 4 is that intensive obfuscation (i.e., a higher degree of randomness) does not increase the latency. Although the number of packets is higher with our mechanism, it does not add a noticeable latency due to the immediate transmission enabled by the TCP_NODELAY option. The factor that led to the increase in transmission time was the limited send buffer, which puts the socket on hold from pushing more data until the buffer is empty. It is evident from Table 5 that smaller buffer sizes increase the time overhead significantly. Note that we are not claiming that the deactivation of Nagle's algorithm is an efficient solution, as sending small packets can result in additional header and processing overhead. However, turning off Nagle's algorithm has been introduced in prior studies as an effective technique with no adverse effect on performance, such as preventing many deadlock situations [24]. Similarly, our proof-of-concept implementation on consumer-grade laptops shows that our defense can mitigate privacy leakage but may introduce some latency, as the sender needs to send the data stream in a larger number of packets. With that being said, further research is needed to investigate the impact of our technique on devices with limited Figure 4: Latency (\(T\)) and packet header overhead (\(B\)) using the two randomization levels: \(Rand_{Qeak}\) and \(Rand_{Qeak}\)). processing capacity and storage, similar to IoT devices. We plan further investigations in this direction as future work. Compatibility and Deployment Challenges: While our current implementation showcases the feasibility of the approach through a client-server setup using TCP sockets, we fully acknowledge that IoT environments present unique challenges. For instance, adapting our proposed technique for communication with generic browsers in server roles may necessitate updates on server-side software. Hence, there could be technical issues in accommodating all existing devices, necessitating further investigation in future research. Vulnerability Analysis: It is vital to address potential vulnerabilities and ensure the robustness of our approach. However, our technique only enables IoT applications to change the packet length without any other modification of the entire IoT device's communication. For example, it does not affect WiFi encryption, IPsec, or SSL implementation, etc. Hence, we see no immediate and evident vulnerabilities in our proposed method. Adversarial Attack: We assume the attacker knows how our defense works and, hence, can try to merge the length of consecutive packets to overcome our defense. However, there is no basis for the attacker to retrieve packet-size patterns. If the attacker combines all consecutive packets, the attacker will merge packets that were not initially split because it is customary to observe a series of small-sized packets, such as mouse flow, when there is no defense implemented. In addition, our mechanism performs random segmentation for randomly selected messages. Hence, there is no fixed rule for our splitting to perform adversarial de-splitting with high accuracy. ## 7 Conclusions and Future Work In this paper, we have shown how random segmentation can obfuscate packet-size patterns without introducing additional noise into the packets themselves, as is the case with packet padding. The proposed approach enables network devices to send application messages through random-sized segments and pass them to the network layer for immediate transmission. Therefore, the observed traffic at and above the network layer is randomized, defending against both in-network and out-network observers (i.e., IP- and MAC-level observation). The technique has been tested on a client machine connected to a remote server, and the results demonstrate the effectiveness of our defense with a reasonable time overhead (<21%). For future work, we seek to make our defense accommodate the heterogeneity in the IoT environment. The adjustable parameters in our code allow the defender to choose the suitable obfuscation level to achieve sufficient randomness with fewer splitting operations. Thus, our defense system lacks an adaptive functionality to adjust its parameters based on the device traffic intensity and specific hardware. To this end, we aim to present an optimization model to enable our system to dynamically choose the optimum parameters that yield astonishingly less overhead with maximum privacy protection. **Author Contributions:** Conceptualization, M.A., C.Z., and Y.S.; methodology, M.A. and C.Z.; software, M.A.; validation, M.A., C.Z., and Y.S.; formal analysis, M.A., C.Z., and Y.S.; investigation, M.A., C.Z., and Y.S.; resources, M.A., C.Z., and Y.S.; data curation, M.A.; writing\(-\)original draft preparation, M.A.; writing\(-\)review and editing, M.A., A.A., M.A., C.Z., and Y.S.; visualization, M.A.; supervision, C.Z. and Y.S.; project administration, C.Z. and Y.S.; funding acquisition, C.Z. and Y.S. All authors have read and agreed to the published version of the manuscript. \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Buffer Size**} & \multicolumn{3}{c}{**Time Overhead (\%)**} \\ \cline{2-3} & \(\mathbf{Rand_{(Low)}}\) & \(\mathbf{Rand_{(High)}}\) \\ \hline \(2^{15}\) Bytes & 132.53 & 128.67 \\ \(2^{16}\) Bytes & 26.1 & 26.78 \\ \hline \hline \end{tabular} \end{table} Table 5: Send buffer size impact on transmission time using two obfuscation levels. **Funding:** This research was sponsored by the U.S. National Science Foundation (NSF) under Grant DGE-2325452. **Data Availability Statement:** Data are available upon request from the corresponding author. **Conflicts of Interest:** The authors declare no conflicts of interest. ## Abbreviations The following abbreviations are used in this manuscript: \begin{tabular}{l l} IoT & Internet of Things \\ DF & Device Fingerprinting \\ Tor & The Onion Router \\ AP & Access Point \\ MTU & Maximum Transmission Unit \\ MSS & Maximum Segment Size \\ \end{tabular} ## Appendix A We employed four metrics to evaluate the effectiveness of our defense: accuracy, precision, recall, and F1 score. These metrics can be computed using the following formulas: \[Accuracy=\frac{T}{T+F} \tag{1}\] where \(T\) is the number of instances that are correctly classified and \(F\) denotes the instances that are incorrectly classified by the model. \[Precision=\frac{TP}{TP+FP} \tag{2}\] \[Recall=\frac{TP}{TP+FN} \tag{3}\] where \(TP\) is true positives, \(TN\) is true negatives, \(FP\) is false positives, and \(FN\) is false negatives. The F1 score calculates the harmonic mean between the precision and recall as: \[F1=2\times\frac{Precision\times Recall}{Precision+Recall} \tag{4}\]
2309.06290
Orbital perspective on high-harmonic generation from solids
High-harmonic generation in solids allows probing and controlling electron dynamics in crystals on few femtosecond timescales, paving the way to lightwave electronics. In the spatial domain, recent advances in the real-space interpretation of high-harmonic emission in solids allows imaging the field-free, static, potential of the valence electrons with picometer resolution. The combination of such extreme spatial and temporal resolutions to measure and control strong-field dynamics in solids at the atomic scale is poised to unlock a new frontier of lightwave electronics. Here, we report a strong intensity-dependent anisotropy in the high-harmonic generation from ReS$_2$ that we attribute to angle-dependent interference of currents from the different atoms in the unit cell. Furthermore, we demonstrate how the laser parameters control the relative contribution of these atoms to the high-harmonic emission. Our findings provide an unprecedented atomic perspective on strong-field dynamics in crystals and suggest that crystals with a large number of atoms in the unit cell are not necessarily more efficient harmonic emitters than those with fewer atoms.
Á. Jiménez-Galán, C. Bossaer, G. Ernotte, A. M. Parks, R. E. F. Silva, D. M. Villeneuve, A. Staudte, T. Brabec, A. Luican-Mayer, G. Vampa
2023-09-12T14:51:13Z
http://arxiv.org/abs/2309.06290v1
# Orbital perspective on high-harmonic generation from solids ###### Abstract High-harmonic generation in solids allows probing and controlling electron dynamics in crystals on few femtosecond timescales, paving the way to lightwave electronics. In the spatial domain, recent advances in the real-space interpretation of high-harmonic emission in solids allows imaging the field-free, static, potential of the valence electrons with picometer resolution. The combination of such extreme spatial and temporal resolutions to measure and control strong-field dynamics in solids at the atomic scale is poised to unlock a new frontier of lightwave electronics. Here, we report a strong intensity-dependent anisotropy in the high-harmonic generation from \(\mathrm{ReS}_{2}\) that we attribute to angle-dependent interference of currents from the different atoms in the unit cell. Furthermore, we demonstrate how the laser parameters control the relative contribution of these atoms to the high-harmonic emission. Our findings provide an unprecedented atomic perspective on strong-field dynamics in crystals and suggest that crystals with a large number of atoms in the unit cell are not necessarily more efficient harmonic emitters than those with fewer atoms.** The foundational concept underpinning attosecond physics, and high-harmonic generation in gas-phase atoms and molecules in particular, is the energetic recollision of an electron ionized and accelerated by a strong laser field with the parent ion [1, 2, 3, 4]. This dynamic real-space framework is instrumental to link the characteristics of the emitted harmonic radiation (amplitudes, phases and polarization) to sub-laser-cycle dynamics of atomic and molecular orbitals [5, 6, 7, 8, 9]. In solids, high-harmonic generation (HHG) is understood using a similar framework, albeit exchanging the real-space perspective for one in reciprocal space, where electron-hole pairs accelerate and recombine across energy bands in the Brillouin zone of the crystal [10, 11, 12, 13]. This reciprocal-space approach has been paramount in virtually all investigations of solid-state high-harmonics: from revealing the role of electron-hole recollisions in the emission process [14], to reconstructing the band structure of a ZnO crystal [15], to explaining the multiple plateaus observed in the HHG spectrum [16] and to map regions of crystal momenta where the electron-hole velocity vanishes [17], among others [18, 19, 20, 21, 22, 23, 24]. Despite the success of the reciprocal-space picture, a real-space approach offers a more intuitive framework, in particular in complex materials with many narrowly spaced and overlapping bands. The advantages of using a real-space perspective to understand HHG from solids are quickly starting to become apparent [25, 26, 27], for example, in interpreting spatially-displaced electron hole recollision processes [27, 28] or as means to directly reconstruct the field-free (static) valence electron potential at the picometer scale [29]. The possibility to link features of the high-harmonic spectrum to _dynamics_ occurring at specific orbitals in the lattice remains, however, largely unexplored. Here, we demonstrate this possibility through angle-resolved measurements of HHG in \(\mathrm{ReS}_{2}\). We measure a strong, intensity-dependent anisotropy of the HHG emission and trace it back to the interplay between the currents generated by each individual atom in the unit cell. Simulating the laser-matter interaction using a basis constructed from maximally-localized Wannier orbitals, we show that by changing the laser parameters (intensity and polarization), one can activate or suppress the contribution of specific atoms to the HHG emission and interfere the atomic currents differently, increasing or decreasing the high-harmonic emission efficiency. \(\mathrm{ReS}_{2}\) is a layered semiconductor that crystallizes in a distorted octahedral (T) phase [30, 32, 33, 34]. Figure **1a** illustrates the unit cell of the monolayer, formed by 4 rhenium atoms and 8 sulfur atoms. The 4 Re clusters are linked in a chain oriented along \(\theta=120^{\circ}\) (see panel a). While the anisotropy of the crystal structure is clear (the crystal symmetry group is _P-1_), the band structure is similar along different angles and is very dense (see Figure **1b** and Supplementary Note 1), and with a density of states near the Fermi energy significantly higher than other prototypical materials used in HHG spectroscopy, such as MgO or ZnO [35]. Going from the monolayer limit to bulk, these features remain, and the band structure changes only slightly [30, 31]. In such dense band diagram, associating an individual harmonic with reciprocal space trajectories of charge carriers in a particular set of bands, according to the reciprocal-space method, is hardly straightforward (see circular markers in Fig. **1b-d**), and is unlikely to provide much insight into the carrier dynamics. On the other hand, Figure 1: **Monolayer ReS\({}_{2}\)**. (a) Crystal structure, composed of 4 rhenium atoms and 8 sulfur atoms in a distorted octohedral structure. The unit cell is delimited by the parallelogram. (b-d) Band structure of the monolayer along (a) \(\theta=0^{\circ}\), (b) \(\theta=60^{\circ}\) and (c) \(\theta=120^{\circ}\) (see panel a for definition of \(\theta\)). Circular markers across the bands in panels (b-d) highlight vertical transitions resonant with the 11th harmonic (H11). Monolayer and bulk (not shown) forms of ReS\({}_{2}\) are both inversion symmetric and display a very similar electronic band structure, with a nearly identical direct band gap of 1.4eV at the \(\Gamma\) point [30, 31]. the small bandwidth indicates that the electrons are very localized in the individual atoms of the lattice, making it ideally suited for a real-space or orbital-based framework. The first question we want to address is if high harmonics generated from ReS\({}_{2}\) reflect the strong anisotropy apparent in real space or rather the weak angular dependence of its band structure. We generate high harmonics from bulk ReS\({}_{2}\) with a linearly-polarized mid-infrared pulse with a duration of 80 fs and a center wavelength of 3.5 \(\mu\)m (see Methods). Figure **2a** shows the high-harmonic spectrum measured for a laser intensity of 0.64 TW/cm\({}^{2}\) and polarization along \(\theta=120^{\circ}\) (see inset). We observe odd harmonics extending up to the 13\({}^{\rm th}\) order, while even harmonics are absent as expected from the inversion symmetry of ReS\({}_{2}\). We measure the orientation dependence of the harmonics by rotating the polarization of the linear pulse with respect to the crystal. The results, shown in Figure **2b-d**, display a clear anisotropy for all harmonic orders. Furthermore, the anisotropy depends strongly on the laser intensity. In order to understand the origin of this anisotropy, we perform time-dependent simulations in a basis constructed from 44 maximally-localized Wannier orbitals (see Methods for details) [36]. The similarity between the monolayer and bulk forms in the case of ReS\({}_{2}\)[30, 31], allows us to reduce the computational complexity and simulate the monolayer system. The orientation dependence of H9 and H11 obtained from the numerical simulations is shown in Figure **3a,b**. While the uncertainty of the experimental intensities and the differences between the monolayer and bulk forms do not allow for a quantitative experiment-theory comparison (e.g., of the exact position of the harmonic maxima), our simulations clearly display the strong intensity-dependent anisotropy observed in the experiment. As a result, the simulations in monolayer \(\text{ReS}_{2}\) can provide valuable insight for the origin of this effect. The high-harmonic spectrum is given by the Fourier components of the time-dependent current that is generated by the laser-induced oscillating dipole of the medium (see Methods), \[I(\omega)=\sum_{\alpha}\left|\mathcal{F}[J_{\alpha}(t)](\omega)\right|^{2}, \tag{1}\] where \(J_{\alpha}(t)\) is the total current along direction \(\alpha=(\|,\bot)\), corresponding to the components parallel and perpendicular to the electric field, respectively. The total current can be expressed as a sum of currents from all the orbitals in the lattice, \(J_{\alpha}(t)=\sum_{n}^{N_{\text{sub}}}J_{n,\alpha}^{\text{(W)}}(t)\), where \(J_{n,\alpha}^{\text{(W)}}(t)\) represents the contribution to the total current of the changing population of orbital \(n\) and its coherence with all other orbitals (see Methods). The subscript (W) indicates that such orbital currents are defined Figure 2: **Measured orientation-dependent HHG from ReS\({}_{2}\)**. (a) High-harmonic spectrum for a laser intensity of 0.64 TW/cm\({}^{2}\) and polarization along \(\theta=120^{\circ}\) (parallel to the rhenium chains). The inset shows an optical micrograph of the bulk \(\text{ReS}_{2}\) flake using a CMOS camera and white-light illumination, with the longest edge corresponding to the rhenium chains. (b-d) Orientation dependence of (b) H9, (c) H11 and (d) H13 for three different intensities: 0.25 TW/cm\({}^{2}\) (blue), 0.64 TW/cm\({}^{2}\) (green) and 0.76 TW/cm\({}^{2}\) (red). in the Wannier gauge and, even if they are not observable, provide a unique real-space perspective into the HHG process. Expressed in terms of the individual orbital currents, the high-harmonic spectrum is \[I_{\alpha}(\omega)=\left|\sum_{n}^{N_{\text{sub}}}\mathcal{F}[J_{n,\alpha}^{(W)} (t)](\omega)\right|^{2}=\sum_{n}^{N_{\text{sub}}}\left[|A_{n,\alpha}(\omega)|^{ 2}+|A_{n,\alpha}(\omega)|\sum_{m\neq n}|A_{m,\alpha}(\omega)|\cos(\varphi_{m, \alpha}(\omega)-\varphi_{n,\alpha}(\omega))\right], \tag{2}\] where \(A_{n,\alpha}\) and \(\varphi_{n,\alpha}\) are, respectively, the spectral amplitude and phase of the current of orbital \(n\) along direction \(\alpha\). Equation 2 allows us to distinguish features that arise from the interference of different orbital currents. The incoherent sum of the individual currents, \(I_{\alpha}^{\text{incoh}}(\omega)=\sum_{n}^{N_{\text{sub}}}\left|\mathcal{F}[J_ {n,\alpha}^{(W)}(t)](\omega)\right|^{2}\), will be absent of such interference. In Figure **3c** we compare the angle-dependent harmonic yield of Figure 3: **Calculated orientation-dependent HHG from ReS\({}_{2}\). (a,b) Full calculation of harmonics (a) H9 and (b) H11 for different intensities: 0.1 TW/cm\({}^{2}\) (blue), 0.5 TW/cm\({}^{2}\) (green) and 0.6 TW/cm\({}^{2}\) (red). (c) Calculation neglecting the Fourier phase (solid lines) \(\varphi_{n}\) of the orbital current for H11 for 0.1 TW/cm\({}^{2}\) (blue) and 0.6 TW/cm\({}^{2}\) (red). For comparison, the full calculation curves of panel (b) are shown in (c) with dashed, faint lines.** H11 for \(I_{\alpha}^{\rm incoh}\) (solid lines) and the observable signal \(I_{\alpha}\) (faint dashed lines). A similar analysis for H9 is made in Supplementary Note 2. The angular variation is stronger for \(I_{\alpha}\), with near-complete suppression of various secondary maxima that are present in \(I_{\alpha}^{\rm incoh}\) (most notably near 60\({}^{\circ}\)), strongly modifying the orientation dependence. Thus, orbital phase interference is an important factor determining the orientation dependence. Since the electrons are well localized on each atomic site (see Extended Data), we can group together the currents of the \(m\) orbitals belonging to the same atom \(A\) into an atomic current, \(J_{A,\alpha}^{\rm(W)}(t)=\sum_{m}J_{m,\alpha}^{\rm(W)}(t)\). Furthermore, due to the inversion symmetry of \(\rm{ReS}_{2}\), each atom is related to one other by an inversion operation, for example, \(\rm{Re}_{1}\) and \(\rm{Re}_{3}\) or \(\rm{S}_{1}\) and \(\rm{S}_{6}\) (see Fig. **1a**). Both of the atoms in the pair give rise to the same Fourier amplitudes and phases, so that the total harmonic spectrum in Eq. 2 reduces to the sum of the Fourier amplitudes and phases of six atomic (inversion-related) pairs. Figures **4a,b** show the Fourier amplitudes and phases of the six atomic pairs, indicated with different colors, for H11 along \(\alpha=\parallel\) and for two intensities: 0.1 TW/cm\({}^{2}\) and 0.6 TW/cm\({}^{2}\). At both low and high intensities (Figure **4a-b** respectively) emission is spread over a wide range of phases at any given angle. For the lowest intensity (Figure **4a**), every atomic pair contributes a similar amplitude to the emission near \(\theta=40\) - \(60^{\circ}\), but their phases are spread equally over \(\pi\) rad, thus leading to the near-perfect destructive interference seen in Fig. **3b** (blue curve) at these angles. On the other hand, for angles close to \(\theta=100^{\circ}\), the Fourier phases from the different atomic sites are similar, leading to constructive interference in Eq. 2 and therefore to the peak observed in Figure 4: **Atomic contributions to harmonic emission in ReS\({}_{2}\). The circle colors represent the six different atomic pairs, the size of the circle is proportional to the Fourier amplitude \(|A_{n}|\) and the Fourier phase \(\varphi_{n}\) is given in the vertical axis. The panels display the Fourier amplitudes \(|A_{n}|\) and phases \(\varphi_{n}\) of the six atomic pairs (inversion-symmetric partners) as a function of the laser polarization angle for H11. Two driver intensities are shown: (a) 0.1TW/cm\({}^{2}\) and (b) 0.6 TW/cm\({}^{2}\). The results shown are for the harmonic polarization \(\alpha\) that is parallel to the electric field.** Figure **3b** (blue curve). At \(\theta=100^{\circ}\), the atomic pair Re\({}_{2}\)-Re\({}_{4}\), which contributes the most to H11 at low intensity, is largely suppressed at large intensity (compare size of orange circle in Figure **4a,b**). This analysis shows that atoms that do not contribute to the generation of a particular harmonic order for one driver intensity, can be activated for other intensities, and vice versa, suggesting that laser intensity could be used as a mechanism to control the relative weight of atomic orbitals in HHG. An analogous analysis can be made for the rest of harmonic orders, along both \(\alpha=\parallel,\perp\) directions (see Supplementary Note 2), where we observe a larger spread of the Fourier phases for increasing harmonic orders. This leads to sharper changes in the angle-resolved spectrum for higher orders, as also seen in the experiment. In conclusion, we identify how the nonlinear currents residing on each of the twelve atoms in the unit cell of a ReS\({}_{2}\) crystal are responsible for the strongly anisotropic and intensity-dependent emission of high-order harmonics. Our orbital analysis based on maximally localized Wannier functions reveals that each atomic contribution depends strongly on the polarization angle and intensity of the driving field, paving the way to characterizing and controlling electron dynamics at the picometer-scale in solids on sub-laser-cycle timescales. Moreover, we show that interference between atoms in the unit cell of a crystal is key to determine the macroscopic high-harmonic emission, a critical factor to consider in the route towards developing efficient harmonic emitters. ## Methods ### Experimental methods \(\mathrm{Re}\mathrm{S}_{2}\) flakes were mechanically exfoliated from an extracted section of a bulk sample using tape, then dispersed across the tape by folding over itself to reduce thickness and produce generally flat flakes. The \(\mathrm{Re}\mathrm{S}_{2}\) crystal was transferred from the tape to a PDMS stamp, then transferred from the stamp to the substrate at 80\({}^{\circ}\). The substrate consists of a two-side polished, 10x10x0.5mm, (100)-cut MgO single crystal that is cleaned with acetone and isopropanol. The PDMS stamp was peeled off to leave the bulk \(\mathrm{Re}\mathrm{S}_{2}\) flakes on the MgO substrate. The sample is imaged in-situ with a white-light source as well as the laser source, allowing sample areas of interest to be located and crystallographic orientation to be measured. The laser source consists of a YB:KGW laser (LightConversion Carbide CB3) delivering 200fs pulses at a center wavelength of 1030nm with a repetition rate of 100kHz and average power of 80W. A portion of this power (60W) pumps a commercial optical parametric amplifier (LightConversion Orpheus-MIR), generating 60fs mid-infrared pulses at a wavelength of 3.5\(\mu\)m. An Ag off-axis parabolic mirror focuses the mid-infrared beam onto the sample, producing high harmonics. The generated high harmonics are collected in transmission geometry and focused on the input slit of a Princeton Instruments IsoPlane spectrometer with an Al spherical mirror of 15cm focal length. A half-wave plate is positioned between the parabolic mirror and the sample to rotate the linear laser polarization with respect to the crystal axis. #### Numerical methods The field-free Hamiltonian and dipole couplings of monolayer \(\mathrm{ReS}_{2}\) were calculated with the electronic structure code Quantum Espresso [37] on a Monkhorst-Pack (MP) grid of 12x12x1 points using a norm-conserving Perdew-Burke-Ernzerhof (PBE) exchange correlation functional. The field-free Hamiltonian used in the time-dependent propagation was constructed by projecting the Bloch states onto a set of maximally-localized Wannier functions using the Wannier90 code [36]. In particular, we projected onto the \(d\) orbitals of the four rhenium atoms and the \(p\) orbitals of the six sulfur atoms, totalling 44 bands. The Hamiltonian in the basis constructed from Wannier functions was then propagated in the presence of the electric field using the density matrix formalism with the code described in Ref. [26]. The large size of the unit cell allowed us to obtain convergence with a modest MP grid of 50x50 \(k\)-points along the \(b_{1}\) and \(b_{2}\) reciprocal lattice vectors. The time step was set to 0.2 a.u. and the dephasing time was chosen to be \(T_{2}=10\) fs. The time-dependent current along direction \(\alpha\), used to extract the high harmonic spectrum, is defined as \[J_{\alpha}(t)=-\frac{|e|}{N_{k}}\sum_{\mathbf{k}}\mathrm{Tr}\left[\hat{\mathbf{ v}}_{\alpha}(\mathbf{k})\cdot\rho(\mathbf{k},t)\right]. \tag{3}\] Above, \(e\) is the electron charge, \(N_{k}\) is the number of crystal momenta included in the calculation, \(\hat{\mathbf{v}}\) is the velocity operator, and \(\rho\) is the density matrix. In the Wannier gauge, the density matrix \(\rho^{\text{(W)}}\) contains the orbital populations and coherences in its diagonal and off-diagonal terms, respectively. In the Wannier gauge, we may define a (real) current from an individual orbital \(n\) along direction \(\alpha\) as \[J_{n,\alpha}^{\text{(W)}}(t)=-\frac{|e|}{N_{k}}\operatorname{Re}\{\sum_{ \mathbf{k}}\sum_{m}^{N_{\text{sub}}}\left[\hat{v}_{nm,\alpha}^{\text{(W)}} \cdot\rho_{mn}^{\text{(W)}}\right]\}, \tag{4}\] such that the sum of the currents from all orbitals equals the total current, \[J_{\alpha}(t)=\sum_{n}^{N_{\text{orb}}}J_{n,\alpha}^{\text{(W)}}(t). \tag{5}\] For clarity, we give an example for a two-orbital model, although we point out that our analysis is only relevant for multi-orbital crystals as the one presented in this work. In the two-orbital case, \[\begin{split} J_{1,\alpha}(t)&=\text{Re}(\{v_{11, \alpha}\rho_{11}+v_{12,\alpha}\rho_{21}\}\\ J_{2,\alpha}(t)&=\text{Re}\{v_{22,\alpha}\rho_{22}+v _{21,\alpha}\rho_{12}\},\end{split} \tag{6}\] where the subscripts \(1\) and \(2\) identify the orbital and \(\alpha=\parallel,\perp\) the direction of current emission. Since both the velocity and density matrices are hermitian, the current of an individual orbital is composed of a term associated to the population change of that orbital, plus exactly half of the contribution of the coherence between that orbital and the rest. Thus, this approach offers a way of quantifying the contribution of individual orbitals, and their interference, to the high-harmonic generation. ## Data availability The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
2306.00064
Multiphoton Spectroscopy of a Dynamical Axion Insulator
The unusual magnetoelectric transport present in Weyl semimetals can be compactly understood as manifestations of an underlying axion field, which itself is determined by the microscopic band structure. The axion couples nonlinearly to electric and magnetic fields and possesses a signature topological magnetoelectric response, leading to a modified form of Maxwell's equations known as axion electrodynamics. Axions are naturally present in Weyl semimetals relating to the separation of the Weyl nodes in energy and in crystal momentum. In the presence of strong interactions, charge density-wave (CDW) order may develop which serves to gap the Weyl nodes and introduces corresponding collective excitations. When the inherent chiral symmetry of Weyl semimetals is spontaneously broken by the formation of CDW order, the resultant chiral condensate is endowed with intrinsic dynamics which yields a dynamical contribution to the axion field. However, unambiguous identification of this dynamical axion mode is challenging due to its inherent nonlinear coupling to electromagnetic fields. Therefore, we propose an all-optical protocol for verifying and characterizing dynamical axion collective modes in Weyl semimetals with CDW order. First, we show that axion collective mode can be excited using two-photon excitation schemes. Following excitation, the collective axion oscillations are then diagnosed by measuring the time-resolved Kerr rotation. Our results demonstrate a pathway towards utilizing multi-photon and entangled pair spectroscopies to identify new correlated phases in quantum matter.
Olivia Liebman, Jonathan Curtis, Ioannis Petrides, Prineha Narang
2023-05-31T18:00:03Z
http://arxiv.org/abs/2306.00064v1
# Multiphoton Spectroscopy of a Dynamical Axion Insulator ###### Abstract The unusual magnetoelectric transport present in Weyl semimetals can be compactly understood as manifestations of an underlying axion field, which itself is determined by the microscopic band structure. The axion couples nonlinearly to electric and magnetic fields and possesses a signature topological magnetoelectric response, leading to a modified form of Maxwell's equations known as axion electrodynamics. Axions are naturally present in Weyl semimetals relating to the separation of the Weyl nodes in energy and in crystal momentum. In the presence of strong interactions, charge density-wave (CDW) order may develop which serves to gap the Weyl nodes and introduces corresponding collective excitations. When the inherent chiral symmetry of Weyl semimetals is spontaneously broken by the formation of CDW order, the resultant chiral condensate is endowed with intrinsic dynamics which yields a dynamical contribution to the axion field. However, unambiguous identification of this dynamical axion mode is challenging due to its inherent nonlinear coupling to electromagnetic fields. Therefore, we propose an all-optical protocol for verifying and characterizing dynamical axion collective modes in Weyl semimetals with CDW order. First, we show that axion collective mode can be excited using two-photon excitation schemes. Following excitation, the collective axion oscillations are then diagnosed by measuring the time-resolved Kerr rotation. Our results demonstrate a pathway towards utilizing multi-photon and entangled pair spectroscopies to identify new correlated phases in quantum matter. _Introduction_--Axionic particles were originally proposed in high-energy physics in order to solve the charge-parity problem as the Nambu-Goldstone bosons associated with a new global axial U(1) symmetry [1; 2]. Today, axions are a leading dark matter candidate and could help explain the matter-antimatter asymmetry present in the universe [3]. While the axion has yet to be discovered in particle physics, its condensed matter [4] counterpart has been theorized to exist in certain three-dimensional materials [5; 6; 7; 8; 9]. A characteristic identifier of the presence of an axion is a signature magnetoelectric response, leading to a modified form of electrodynamics known as axion electrodynamics [10; 11; 12]. It turns out Weyl semimetals naturally have an axion term that is related to the separation in k-space and in energy of the Weyl nodes. What's more, when the Weyl system undergoes lattice translation symmetry breaking due to the emergence of charge density wave (CDW) order [13; 14], the collective motion associated to the phase of the CDW in turn leads to a dynamical axion effect and to a _nonlinear_ magnetoelectric effect on top of the response due to band structure effects [15; 16; 17; 18]. In the presence of uncompensated carrier densities this may even lead to spatially inhomogeneous textures due to softening of the axionic collective mode [19]. In addition to being of fundamental interest, there have also been proposals that the magnetoelectric coupling mediated by the axion response in materials may be of practical value, e.g., in nonreciprocal thermal emitters or rectifiers [20; 21]. However, while some progress has been made, techniques for the unambiguous identification of dynamical axion quasiparticles are challenging, and often rely on studying linear perturbations in the presence of a strong magnetic field [5; 22; 23; 24]. Here we lay out an all-optical, contactless protocol for identification of nonlinear signatures of dynamical axions in a Weyl semimetal with CDW order. This protocol breaks down into two parts; first a dynamical axion mode is nonlinearly excited through two-photon processes, which carries a distinct dependence on the incident angle, polarization, and frequencies of the two beams. Following excitation, the induced axion dynamics are then detected through their manifestation in the Kerr angle rotation of a third reflected probe beam, which acts as a faithful proxy for the dynamical evolution of the axion in the material [25; 26]. More generally, our work demonstrates the potential of using entanglement pairs to identify correlated phases in quantum materials. _Axion electrodynamics_--We now describe the excitation protocol, starting from a model for dynamical axion collective modes in a Weyl semimetal with CDW order. The simplest form to discuss Weyl physics is when time-reversal symmetry is broken but inversion symmetry is persevered, thus there are minimally two nodes of equal energy. The CDW Weyl Hamiltonian expanded near the degeneracy points is \[\mathbf{\mathcal{H}}(\mathbf{k})=\left(\begin{array}{cc}v_{F}\mathbf{\sigma}\cdot\mathbf{k}& \Delta\\ \bar{\Delta}&-v_{F}\mathbf{\sigma}\cdot\mathbf{k}\end{array}\right) \tag{1}\] where \(v_{F}\) is the Fermi velocity, \(\mathbf{\sigma}\) is the triplet of Pauli matrices, \(\mathbf{k}\) is the crystal momentum, and \(\Delta=|\Delta|e^{i\theta}\) is the CDW term and is analogous to a complex mass term, also represented in terms of the gap amplitude \(|\Delta|\) and phase \(\theta\). In an uncorrelated Weyl semimetal with no CDW \(\Delta=0\) the nodes are gapless. The chirality of the Weyl nodes is identical to the topological charge associated to each band crossing, given by the Chern flux \(C=+1/2\) (\(C=-1/2\)) for the right- (left-) handed Weyl particles [27]. This chiral charge can be understood as magnetic monopoles, or sources and sinks, of the Berry curvature which is singular at the nodes and acts as an effective magnetic field in reciprocal space [28]. In the presence of sufficiently strong interactions, a transition to CDW ordered phase with complex order parameter \(\Delta\) can occur. If the order is incommensurate, by Goldstone's theorem this will have a soft sliding mode associated with the phase of the CDW order, \(\theta(x)\). Due to the chiral anomaly, it is known that the phase of the complex CDW can be identified with the axion response [6] of the material. This serves to modify the electrodynamics of the medium, with Lagrangian \[\mathcal{L}_{\text{EM}}=\frac{1}{2}(\epsilon\mathbf{E}^{2}-\frac{1}{\mu}\mathbf{B}^{2 })+g\theta\mathbf{E}\cdot\mathbf{B}. \tag{2}\] The first two terms are the sourceless Maxwell Lagrangian in terms of the electric (magnetic) fields \(\mathbf{E}\) (\(\mathbf{B}\)), and \(\epsilon\) (\(\mu\)) is the static dielectric (permeability) tensor (we use units with \(\epsilon_{0}=\mu_{0}=1\)). The last term proportional to \(\mathbf{E}\cdot\mathbf{B}\) is the so-called axion response that leads to a magnetoelectric coupling mediated by the CDW phase \(\theta\), with coupling constant \(g=\alpha/\pi\). The total CDW phase \(\theta\) is given by \[\theta(\mathbf{r},t)=\mathbf{Q}\cdot\mathbf{r}+\delta\theta(\mathbf{r},t)\,, \tag{3}\] where the first term is dubbed the static axion field as it depends only on the separation in momentum-space \(\mathbf{Q}\) between the Weyl nodes, and is determined by the band-structure of the material [29]. The second term \(\delta\theta(\mathbf{r},t)\) is the **dynamical axion field**, which can be understood as the collective mode associated to the sliding phase mode of the CDW. The electromagnetic Lagrangian (2) is supplemented by the Nambu-Goldstone equations of motion of the collective dynamical axion mode, namely \[\mathcal{L}_{\text{NG}}=\frac{\kappa}{2}[(\partial_{t}\delta\theta)^{2}-v^{2} (\mathbf{\nabla}\delta\theta)^{2}-\Omega_{0}^{2}\delta\theta^{2}] \tag{4}\] The first term in the above equation is the kinetic energy with \(\kappa\) the chiral compressibility, which is proportional to the density of states at the Fermi level; the second term describes the spatial dispersion with \(v\) the speed of sound in the medium; and the final term is the pinning-induced gap term with \(\Omega_{0}\) the pinning frequency. The equations of motion of the gauge potential derived from Eq. (4) are given by \[\partial_{t}^{2}\mathbf{A}-\mathbf{\nabla}\times\mathbf{\nabla}\times\mathbf{A}=g(\partial_{t }\theta\mathbf{\nabla}\times\mathbf{A}-\mathbf{\nabla}\theta\times\partial_{t}\mathbf{A}) \tag{5}\] where \(\mathbf{B}=\mathbf{\nabla}\times\mathbf{A}\) and \(\mathbf{E}=\partial_{t}\mathbf{A}\) are the magnetic and electric fields, respectively, given in terms of the gauge potential with the Weyl gauge selected \(\mathbf{\nabla}\phi=0\) for scalar potential \(\phi\). The current response of the axion insulator is hence \[\mathbf{J}=g\mathbf{\nabla}\theta\times\mathbf{E}+g\hat{\theta}\mathbf{B}. \tag{6}\] In particular, the first term is equivalent to an anomalous Hall response proportional to the spatial gradient of the axion field, while the second term is known as the chiral magnetic effect and is only relevant in the presence of time-dependent axion fields. The equations of motion for the dynamical axion \(\delta\theta\) are \[\kappa(\partial_{t}^{2}+\gamma\partial_{t}-v^{2}\nabla^{2}+\Omega_{0}^{2}) \delta\theta=g\mathbf{E}\cdot\mathbf{B} \tag{7}\] Here the second term introduces a phenomenological damping parameter \(\gamma\). These equations qualitatively describe massive sound waves driven by an effective force \(g\mathbf{E}\cdot\mathbf{B}\). Thus as long as the electric and magnetic fields have some non-orthogonal component the collective dynamical axion field can be excited. _Exciting the dynamical axion mode_--As seen in Eq. (7), exciting a dynamical axion mode in a material is a nonlinear optical process requiring at least two electromagnetic fields. In general, the intensity of the axion response depends on the frequency, momenta, polarization, and angle of the incident beams, see Fig. 1 for proposed protocol. We consider the following plane-wave ansatz \(\mathbf{A}(q)=\mathbf{A}_{1}\delta_{q,q_{1}}+\mathbf{A}_{2}\delta_{q,\mathbf{q}_{2}}\), where \(q_{i}=(\omega_{i},\mathbf{q}_{i})\) is the four-momentum with \(\omega_{i}\) the frequency and \(\mathbf{q}_{i}\) the momentum. The field amplitudes \(\mathbf{A}_{i}=A_{i}\mathbf{\hat{\varepsilon}}_{i}\) are related to the polarizations \(\mathbf{\hat{\varepsilon}}_{i}\) which obey \(\mathbf{q}_{i}\cdot\mathbf{\hat{\varepsilon}}_{i}=0\). We further assume that the momenta satisfy the free-space dispersion relations so that \(|\mathbf{q}_{i}|=|\omega_{i}|\). The response of the dynamical axion to the application of the vector potential \(\mathbf{A}(q)\) is given by \[\delta\theta(\omega_{1},\omega_{2},\mathbf{q}_{1},\mathbf{q}_{2})=\] \[\frac{(g/\kappa)(\omega_{2}\mathbf{q}_{1}-\omega_{1}\mathbf{q}_{2})\cdot( \mathbf{\hat{\varepsilon}_{1}}\times\mathbf{\hat{\varepsilon}_{2}})}{(\omega_{1}+ \omega_{2})^{2}+i(\omega_{1}+\omega_{2})\gamma-\Omega_{0}^{2}}A_{1}A_{2} \tag{8}\] Figure 1: Schematic of the general excitation and detection set-up. Two incident beams come in at a relative angle of \(\pi-2\alpha\) with polarizations \(\mathbf{\hat{\varepsilon}}_{1},\mathbf{\hat{\varepsilon}}_{2}\); frequencies \(\omega_{1},\omega_{2}\); and wave vectors \(\mathbf{q}_{1},\mathbf{q}_{2}\); and are used to induce finite \(\mathbf{E}\cdot\mathbf{B}\), exciting an axionic collective mode using a multiphoton absorption event. In order to verify the creation of the axionic mode, time-resolved Kerr rotation can be used. For a probe beam at normal incidence (assumed to be collinear with the static axion response along \(\mathbf{Q}\)), the reflected beam’s polarization will be elliptically polarized at an angle that oscillates in phase with the induced axion oscillations. here the term proportional to \(v^{2}\nabla^{2}\) of Eq. (7) has been dropped since \(v\ll c\). Importantly, the appearance of the vector triple product constrains the amplitude of the dynamical axion mode to the geometric volume defined by the two polarization vectors \(\mathbf{\hat{e}}_{i}\) and the relative four-momenta \(q_{i}\) of the incident electromagnetic fields; as a result, this provides a route towards smoking-gun identification of the axion collective mode. In order to excite the axion, the polarizations must be non-collinear and also must be non-collinear with the relative four-momentum \(\omega_{1}\mathbf{q}_{2}-\omega_{2}\mathbf{q}_{1}\). In Fig. 2 we explore the efficacy of this two-photon excitation protocol for a variety of different incident beam angles \(\alpha\) and frequencies \(\omega_{i}\) in the maximally-crossed polarization channel, c.f., Fig. 1. Depending on the relative angle \(\alpha\) there are two two-photon channels which can be used to excite the dynamical axion mode. Two-photon absorption can be used when the beams are close to head-on, i.e., \(\alpha=0\). For parallel-propagating collinear beams, with \(\alpha=\pi/2\) this channel is closed, and instead stimulated Raman excitation can be used to excite the dynamical axion mode by inelastic scattering of light. At intermediate incident angles, e.g., \(\alpha=\pi/4\), both channels are active. Fig. 2 shows clearly how varying degrees of freedom of the incident beam will induce changes to axion excitation intensity, with the inelastic stimulated Raman channel for collinear beams resulting in a stronger response relative to the two-photon absorption process. _Axion detection via Kerr angle modulation--_Now that the protocol for exciting an axion collective mode has been established, we move on demonstrate how it can be detected via an optical signature. The key idea is to use a linearly polarized detection beam of known frequency at normal incidence, to allow for optical detection of the axion field by its effect on the reflected light; a technique well established for the spectroscopy of static axion re Figure 2: Induced axion amplitude \(|\delta\theta|^{2}\) as a function of incident photon frequencies \(\omega_{1},\omega_{2}\) for different incident angles \(\alpha\) in the maximally-crossed polarization channel. (a) Excitation amplitude for \(\alpha=0\) (beams anti-parallel head on). We see a pronounced two-photon absorption feature when \(\omega_{2}+\omega_{1}=\Omega_{0}\). (b) Orthogonal incidence when \(\alpha=\pi/4\). We see that in addition to the two-photon absorption channel, we also see the emergence of a much stronger inelastic (stimulated Raman) excitation channel. Note the \(y=0\) line separates the two half planes, as well as note the separate scale bars for each. This was done to resolve the axion response in the upper half plane, which is significantly weaker as compared to the response shown in the lower half plane. (c) Parallel collinear incidence (\(\alpha=\pi/2\)). In this case, the two-photon absorption channel has been completely suppressed and instead the Raman excitation channel is maximal. Note in this case the relative momentum of the two beams is zero and therefore there must be a finite relative-frequency. sponses [25]. The proposed pump-probe style protocol is illustrated in Fig. 1, where the pump beam excites the dynamical axion mode in the bulk and is bounded above by a vacuum with relative permittivity \(\epsilon=1\). For simplicity we assume the material is nonmagnetic so that \(\mu=\mu_{0}=1\). Also, by assuming the probe incident beam wave vector along the z-axis, it is sufficient to consider the bulk dielectric permittivity tensor only in the plane orthogonal to the direction of propagation of the wave \[\epsilon(\omega)=\begin{pmatrix}\epsilon_{xx}&\epsilon_{xy}\\ \epsilon_{yx}&\epsilon_{yy}\end{pmatrix}=\begin{pmatrix}\epsilon_{1}&i \epsilon_{2}\\ -i\epsilon_{2}&\epsilon_{1}\end{pmatrix} \tag{9}\] where the dielectric tensor includes Hall effect contributions given as the complex off-diagonal matrix elements. Furthermore, we assume that the axion field's spatial variation is small compared to the wavelength of the incident light, such that \(\mathbf{\nabla}\delta\theta\ll\mathbf{q}\delta\theta\) everywhere except close to the surface where we treat \(\nabla\delta\theta\) as a sharp \(\delta\)-function singularity. We also take the adiabatic approximation where \(\partial_{t}\delta\theta\ll\omega\delta\theta\) such that contributions from the time derivative are sufficiently small so as to be taken to be zero, with the justification being that the axion frequency varies slowly as compared to the frequency of light. This will be later relaxed to allow for dynamical axion modes. Under these assumptions Eq. (5) reduces to \(\mathbf{\nabla}\times\mathbf{\nabla}\times\mathbf{A}=-\epsilon(\mathbf{r})\partial_{t}^{2}\bm {A}\) in the medium. Solving for the gauge potential \(\mathbf{A}\) naturally yields the chiral basis set \(\mathbf{A}_{j}^{\pm}=A_{j}^{\pm}\frac{1}{\sqrt{2}}\left(1,\pm i\right)^{T}\) where \(A_{j}^{\pm}\) is an amplitude prefactor, and corresponding eigenvalues \(q^{\pm}=\omega n^{\pm}\) with \(n_{\pm}=\sqrt{\epsilon_{1}\pm\epsilon_{2}}\). The reflectance and transmittance amplitudes can be obtained via the continuity of electric and magnetic fields at the interface, given as \(\mathbf{E}_{i}+\mathbf{E}_{r}=\mathbf{E}_{t}\), and \(\mathbf{B}_{i}+\mathbf{B}_{r}=\mathbf{B}_{t}+g\delta\theta\mathbf{E}_{t}\). We are most interested in nonlinear optics effects which may be detected using spectroscopic techniques. In turn, the coefficients of most use are the reflection coefficients, given in terms of the incident beam as \[R^{\pm}=\pm\frac{1\mp(n^{\pm}\mp ig\delta\theta)}{1\pm(n^{\pm}\mp ig\delta \theta)}I^{\pm} \tag{10}\] Here \(I^{\pm}\) denotes the incident light written in the chiral basis. The plus/minus reflection coefficients are, in general, different for left- and right-circularly polarized light. To promote \(\delta\theta\) to time-dependence, the partial differential equation of Eq. (7) are solved with the initial condition \(\delta\theta(t\ll t_{pump})=0\) as the axion field is not excited before the onset of the pump at time \(t_{pump}\). Following excitation, the damping term \(\gamma\) causes a ring down of the axion field, see Fig. 3. The key result is the reflection amplitude's direct dependence on the axion field. This will manifest as a modulation of the reflected beam Kerr angle, and serves as our smoking gun optical signature to unambiguously deduce the presence of a dynamical axion field in the material. We now do an order-of-magnitude estimate of the size of this effect. The change in the polarization rotation depends on the size of the induced \(g\delta\theta\) (recall \(g=\alpha/\pi\) is a universal constant). Applying the equations of motion, we find that this will scale as \(g\delta\theta\sim(g^{2}/\kappa)(E/\Omega)^{2}/c\), as \(B\sim E/c\) in these units, and \(\Omega\) is the dynamical axion resonance frequency which we take to be 10GHz. Up to numerical factors we take \(\kappa=\nu(E_{F})\sim k_{F}^{2}/v_{F}\) for a semimetal with Fermi velocity \(v_{F}\sim 10^{-3}c\). \(k_{F}\) in turn is set by the carrier density, which we take to be \(k_{F}\sim.1\AA^{-1}\). For a large but physically realizable electric field of order 1MV/cm this gives a response of \(g\delta\theta\sim g^{2}v_{F}/cE^{2}/(\Omega k_{F})^{2}\sim 1.2\), which is a small but observable response. _Conclusion_--Beginning with a Weyl semimetal with charge density wave order, we have shown how the separation of the Weyl nodes in tandem with the CDW-induced phase fluctuations can give rise to a dynamical axion field. To model the dynamical axion mode and determine a novel optical signature we solved the nonlinear partial differential equations describing its dynamics and established a two-step, pump-probe style protocol which excites this mode with a two-photon excitation scheme, and subsequently detects it with a third probe beam. The intensity of the axion excitation is stronger for parallel collinear beams relative to other beam configurations, as can be understood from Eq. (11) and is shown in Fig. 2(c). Our key result is reflected probe beam's Kerr Figure 3: Manifestations of excited axion collective mode in transient Kerr angle rotation. Using protocol established here the axion is pumped by a finite \(\mathbf{E}\cdot\mathbf{B}\) profile, which is tuned to be in two-photon resonance with the dynamical axion. This induces collective oscillations of the axion angle shown above as \(\delta\theta(t)\). These oscillations then manifest in the time-dependence of the Kerr-angle \(\Theta_{K}(t)\) which approximately track the axion field. Parameters used here are: axion resonant frequency \(\Omega_{0}=1\), axion damping \(\gamma=0.2\), constant \(g=0.1\), chiral compressibility \(\kappa=1\), probe frequency \(\omega=1\), and dielectric matrix elements \(\epsilon_{1}=10\) and \(\epsilon_{2}=0.7\). Note the amplitudes for each response are not to scale. The pump amplitude is an order of magnitude greater than the \(\delta\theta\) amplitude, which is itself an order of magnitude greater than the \(\Theta_{K}\) amplitude. angle is modulated such that it approximately tracks the axion field oscillation. A motivation of this work is to provide theoretical predictions that can drive future experimental work, where potential material candidates could include axion insulators MnBi\({}_{2}\)Te\({}_{4}\)[30] and (TaSe\({}_{4}\))\({}_{2}\)I [31; 22; 32]. This paper has demonstrated the potential for multidimensional [33; 34; 35] and multiphoton spectroscopy in the study of correlated topological materials and their dynamics [36]. The Quantum Chromodynamics axion, possibly within reach of experimental verification by current experiments, represents one of the most compelling pathways beyond the Standard Model, by providing at the same time an elegant solution to the strong CP problem and an excellent dark matter candidate [3; 37]. Meanwhile, condensed matter axions arising in novel, topological materials has lead to a more accessible playground for studying the exotic phenomena associated with these particles that may provide insight when mapped back to high energy physics and impact experimental prospects of an axion discovery via axion-mediated forces. Our work also has implications for condensed matter axion systems that are being used to detect their high energy counterparts in direct detection dark matter experiments [38; 39]. ###### Acknowledgements. The authors would like to acknowledge fruitful discussions with Aaron Chou, Fahad Mahmood, Soyeun Kim, Ankit Disa, and Yikun Wang. This work is entirely supported by the Quantum Science Center (QSC), a National Quantum Information Science Research Center of the U.S. Department of Energy (DOE). P.N. gratefully acknowledges support from the Gordon and Betty Moore Foundation grant number #8048 and from the John Simon Guggenheim Memorial Foundation (Guggenheim Fellowship). ## Appendix A Derivation of axion response function Here we provide more detailed derivations of the results presented in the main work. We begin by outlining the steps to excite the axion mode via two externally applied light beams, and arrive at the axion response function. To start, we show how the gauge field and axion equations of motion may be derived from the full axion modified electromagnetic Lagrangian \[\mathcal{L}=\frac{1}{2}(\epsilon\mathbf{E}^{2}-\frac{1}{\mu}\mathbf{B}^{ 2})+g\theta\mathbf{E}\cdot\mathbf{B}+\] \[\frac{\kappa}{2}[(\partial_{t}\delta\theta)^{2}-v^{2}(\nabla \delta\theta)^{2}-\Omega_{0}^{2}\delta\theta^{2}] \tag{10}\] by utilizing the Euler-Lagrange formalism. For \(\mathbf{A}\) this is \[\frac{\partial}{\partial t}\left(\frac{\partial L}{\partial(\partial_{t}\mathbf{ A})}\right)+\nabla\cdot\left(\frac{\partial L}{\partial(\nabla\mathbf{A})}\right)- \frac{\partial L}{\partial\mathbf{A}}=0 \tag{11}\] Varying with respect to \(\mathbf{A}\) yields its equation of motion \[\partial_{t}^{2}\mathbf{A}-\nabla\times\nabla\times\mathbf{A}=g(\partial_{t}\delta \theta\nabla\times\mathbf{A}-\nabla\delta\theta\times\partial_{t}\mathbf{A}) \tag{12}\] Similarly for the axion field \[\frac{\partial}{\partial t}\left(\frac{\partial L}{\partial(\partial_{t}\theta )}\right)+\nabla\cdot\left(\frac{\partial L}{\partial(\nabla\theta)}\right)- \frac{\partial L}{\partial\theta}=0 \tag{13}\] and the corresponding \(\theta\) equation of motion \[\kappa(\partial_{t}^{2}+\gamma\partial_{t}-v^{2}\nabla^{2}+\Omega_{0}^{2}) \delta\theta=g[\partial_{t}\mathbf{A}(\nabla\times\mathbf{A})] \tag{14}\] As stated in the main text, \(v^{2}\nabla^{2}\theta\) may be dropped by observing \(v\ll c\). The goal is to solve for the axion response to an externally applied electromagnetic field. This is first carried out by perturbatively expanding for \(\theta\) and \(\mathbf{A}\) with small parameter \(\varepsilon\) \[\mathbf{A}(\mathbf{x},t)=\epsilon\mathbf{A}_{1}+\epsilon^{2}\mathbf{A}_{2}+ \epsilon^{3}\mathbf{A}_{3}+... \tag{15}\] \[\theta(\mathbf{x},t)=\epsilon\theta_{1}+\epsilon^{2}\theta_{2}+ \epsilon^{3}\theta_{3}+... \tag{16}\] Performing this expansion and plugging into the gauge field and axion equations of motion, respectively, determines the axion field must be minimally excited by two gauge fields. With this in mind, we solve for \(\delta\theta\) by Fourier transforming Eq. (14). Note the vector potential Fourier transforms as \(\mathbf{A}(x_{i})=\int_{q}e^{iq_{i}\cdot\mathbf{x}}\mathbf{A}(q_{i})\); and we denote \(\int_{q}=\int\frac{d^{3}q}{(2\pi)^{3}}\int\frac{d\Omega}{2\pi}\), and \(q\cdot x=\mathbf{q}\cdot\mathbf{x}-\Omega t\) in the exponential. \[\kappa\int d^{4}xe^{-iq\cdot x}(\partial_{t}^{2}+\gamma\partial_ {t}+\Omega_{0}^{2})\delta\theta(x)=\] \[\kappa(-\Omega^{2}+i\Omega+\Omega_{0}^{2})\int d^{4}xe^{-iq\cdot x }\delta\theta(x)=\] \[\kappa(-\Omega^{2}+i\Omega+\Omega_{0}^{2})\delta\theta(q) \tag{17}\] Inverting the matrix \((-\Omega^{2}+i\Omega+\Omega_{0}^{2})\mathbb{I}\) to isolate \(\delta\theta(q)\) on the left-hand side: \[\delta\theta(q,\omega)=[-\Omega^{2}+i\Omega+\Omega_{0}^{2}]^{-1} (\frac{g}{\kappa})\times\] \[\int d^{4}xe^{-iq\cdot x}[\partial_{t}\mathbf{A}(x_{1})\cdot\nabla \times\mathbf{A}(x_{1})]\] \[=[-\Omega^{2}+i\Omega+\Omega_{0}^{2}]^{-1}(\frac{g}{\kappa})\times\] \[\int d^{4}xe^{-iq\cdot x}[\partial_{t}\int e^{iq_{1}\cdot x}\mathbf{A} _{1}(q_{1})\cdot\nabla\times\int e^{iq_{2}\cdot x}\mathbf{A}_{1}(q_{2})]\] \[=[-\Omega^{2}+i\Omega+\Omega_{0}^{2}]^{-1}(\frac{g}{\kappa}) \int_{q_{1},q_{2}}\int_{q}d^{4}x\times\] \[e^{-i(q\cdot x-q_{1}\cdot x-q_{2}\cdot x)}(-i\Omega_{1}\mathbf{A}_{1} (q_{1}))(i\mathbf{q_{2}}\times\mathbf{A}_{1}(q_{2})) \tag{18}\] Making use of the exponential representation as the Dirac delta function \(\int d^{4}xe^{-i(q\cdot x-q_{1}\cdot x-q_{2}\cdot x)}=\delta_{q_{1}+q_{2},q}\) simplifies this to \[\theta_{2}(q)=[-\Omega^{2}+\Omega_{0}^{2}]^{-1}(\frac{g}{\kappa})\times\] \[\int_{q_{1},q_{2}}\delta_{q_{1}+q_{2},q}\Omega_{1}\mathbf{A}_{1}(q_{1 })\cdot(\mathbf{q_{2}}\times\mathbf{A}_{1}(q_{2})). \tag{19}\] Recall the field is minimally composed of two beams whose frequencies must add up to axion resonance \(\Omega=\omega_{1}+\omega_{2}\) in order to excite the mode. Utilizing a plane-wave ansatz \(\mathbf{A}(q)=\mathbf{A}_{1}\delta_{q,q_{1}}+\mathbf{A}_{2}\delta_{q,q_{2}}\), with four-momenta \(q_{i}=(\omega_{i},\mathbf{q}_{i})\) for frequency \(\omega_{i}\) and momentum \(\mathbf{q}_{i}\); the solution for the axion field is \[\delta\theta(\omega_{1},\omega_{2},\mathbf{q}_{1},\mathbf{q}_{2})=\] \[\frac{(g/\kappa)(\omega_{2}\mathbf{q}_{1}-\omega_{1}\mathbf{q}_{2})\cdot (\mathbf{\hat{E}_{1}}\times\mathbf{\hat{E}_{2}})}{(\omega_{1}+\omega_{2})^{2}+i(\omega _{1}+\omega_{2})\gamma-\Omega_{0}^{2}}A_{1}A_{2} \tag{10}\] ## Appendix B Axion modified electrodynamics Next consider the interface of a dynamical axion insulator (DAI) and a vacuum. The side of the vacuum is governed by the sourceless Maxwell's equations; while the other side is the DAI governed by axion electrodynamics with modified constitutive equations \[\mathbf{D}=\epsilon\mathbf{E}-g\theta\mathbf{B} \tag{11}\] \[\mathbf{H}=\frac{1}{\mu}\mathbf{B}+g\theta\mathbf{E} \tag{12}\] where \(\epsilon=\epsilon(\mathbf{r},t)\) is the complex dielectric tensor and \(\mu=\mu(\mathbf{r},t)\) is the magnetic permeability tensor which we set to 1 for simplicity, as is the case in nonmagnetic materials. In the medium, modified Ampere's law gives \[\nabla\times\mathbf{H}=\mathbf{J}+\partial_{t}\mathbf{D}\rightarrow \tag{13}\] \[\nabla\times(\frac{1}{\mu_{0}}\mathbf{B}+g\theta\mathbf{E})=\mathbf{J}+ \partial_{t}(\epsilon\mathbf{E}-g\theta\mathbf{B}) \tag{14}\] where \(\mathbf{J}=\sigma\mathbf{E}\) and we include in the conductivity \(\sigma\) the contribution from the Hall conductivity. Substituting back in for the vector potential gives and simplifying \(\frac{1}{\mu}\nabla\times\nabla\times\mathbf{A}=\epsilon(\mathbf{r})\partial_{t}^{2} \mathbf{A}\) and so far it is assumed \(\theta\) does not spatially vary. As stated in the main text, we take the adiabatic approximation for now and let \(\dot{\theta}=0\). Later \(\theta\) will be promoted to be time-dependent. Fourier-transforming equation this expression \[\frac{1}{\mu}(i\mathbf{k}\times i\mathbf{k}\times\mathbf{A})=\epsilon(\omega) \omega^{2}\mathbf{A}\rightarrow \tag{15}\] \[\frac{1}{\mu}(\mathbf{k}\cdot\mathbf{k}\mathbb{I}-\mathbf{k}\otimes\mathbf{k}) \mathbf{A}=\epsilon(\omega)\omega^{2}\mathbf{A} \tag{16}\] To simplify this slightly, let the incident and reflected light propagate in the \(\hat{z}\)-direction and let this be orthogonal to the interface whose surface lies in the \(xy\)-plane. Then due to boundary conditions the wave-vector of the transmitted light will also be along the \(\hat{z}\)-direction so that \(\mathbf{k}=(0,0,k_{z})\). This simplifies the above matrix equation as \[\begin{pmatrix}k_{z}\omega^{2}&0\\ 0&k_{z}\omega^{2}\end{pmatrix}\begin{pmatrix}A_{x}\\ A_{y}\end{pmatrix}=\omega^{2}\begin{pmatrix}\epsilon_{xx}&\epsilon_{xy}\\ \epsilon_{yx}&\epsilon_{yy}\end{pmatrix}\begin{pmatrix}A_{x}\\ A_{y}\end{pmatrix} \tag{17}\] Let the permittivity tensor have the form for a gyroelectric medium at frequency \(\omega\) \[\epsilon(\omega)=\begin{pmatrix}\epsilon_{1}(\omega)&i\epsilon_{2}(\omega)&0 \\ -i\epsilon_{2}(\omega)&\epsilon_{1}(\omega)&0\\ 0&0&\epsilon_{3}(\omega)\end{pmatrix} \tag{18}\] Where we can ignore the z-component \(\epsilon_{3}(\omega)\) which is zero for a plane-wave propagating in the z-direction. Solving the matrix equation 17 the eigenvalues correspond to \(\mathbf{k}^{\pm}=\omega\sqrt{\epsilon_{1}\pm\epsilon_{2}}\hat{e}^{\pm}\) and the eigenvectors form a chiral basis \(\mathbf{A}_{j}^{\pm}=A_{j}^{\pm}\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ \pm i\end{pmatrix}\). The incident field may be rewritten in the new chiral basis \[\mathbf{A}_{i}=\mathbf{A}_{i}^{+}\hat{e}_{+}+\mathbf{A}_{i}^{-}\hat{e}_{-} \tag{19}\] similarly for reflected and transmitted light. Now let's promote \(\theta\) to be time-dependent. Solving the nonlinear partial differential equation given in Eq. (15) for time varying electromagnetic fields yields a dynamical contribution \(\delta\theta\) to the axion field. This requires an expression for the pump beam \(p(t)\propto\mathbf{E}\cdot\mathbf{B}\) to drive the axion mode, taken as a Gaussian wavepacket \(p(t)=e^{(t/\sigma)^{2}}cos(\Omega_{0}t)\), where \(\sigma\) tunes the spread of the Gaussian, and \(\Omega_{0}\) denotes the axion resonant frequency. Thus the axion term will be a dynamical fluctuating field, which we later use to demonstrate how such a dynamical axion field generates a time-dependent modulation of the Kerr rotation. ## Appendix C Boundary conditions and Kerr rotation angle The boundary conditions for electric and magnetic fields at an interface with no surface charge or current are \[\mathbf{E}_{i}+\mathbf{E}_{r}=\mathbf{E}_{t} \tag{10}\] \[\mathbf{B}_{i}+\mathbf{B}_{r}=\mathbf{B}_{t}+g\theta\mathbf{E}_{t} \tag{11}\] where the left-hand-side of the equals sign is the vacuum and the right-hand-side is the DAI. In the vacuum \(\mathbf{k}=\omega\hat{z}\) and in the DAI \(\mathbf{k}=k^{\pm}\hat{z}\) as above. The boundary condition for electric field gives the simple condition \(\mathbf{A}_{i}+\mathbf{A}_{r}=\mathbf{A}_{t}\). While the boundary condition for magnetic field must be considered further. Substituting back in for the vector potential \[\sum_{\alpha=\pm}-i\mathbf{k}_{i}\times\mathbf{A}_{i}^{\alpha}+i\mathbf{k}_{r} \times\mathbf{A}_{r}^{\alpha}=-i\mathbf{k}_{t}^{\alpha}\times\mathbf{A}_{t}^{\alpha}-ig \theta\omega\mathbf{A}_{t}^{\alpha}\quad\rightarrow \tag{12}\] \[\sum_{\alpha}-i\omega\hat{z}\times\mathbf{A}_{i}^{\alpha}+i\omega\hat {z}\times\mathbf{A}_{r}^{\alpha}=-ik^{\alpha}\hat{z}\times\mathbf{A}_{t}^{\alpha}-ig \theta\omega\mathbf{A}_{t}^{\alpha} \tag{13}\] here let \(\alpha=\pm\). Plugging in for the vector potentials in the chiral basis and noting \(\hat{z}\times\hat{e}_{\pm}=\pm i\hat{e}_{\pm}\) \[\hat{e}_{+} :\omega I^{+}-\omega R^{+}=(k^{+}-ig\theta\omega)T^{+} \tag{10}\] \[\hat{e}_{-} :\omega I^{-}-\omega R^{-}=(k^{-}+ig\theta\omega)T^{-} \tag{11}\] Using \(I^{+}+R^{+}=T^{+}\) and \(I^{-}+R^{-}=T^{-}\) and solving for the \(R^{+}\) gives: \[R^{+}=\frac{\omega-(k^{+}-ig\theta\omega)}{\omega+(k^{+}-ig\theta\omega)}I^{+} \tag{12}\] Similarly for \(R^{-}\): \[R^{-}=\frac{-\omega+(k^{-}+ig\theta\omega)}{-\omega-(k^{-}+ig\theta\omega)}I^{-} \tag{13}\] recall \(k^{\pm}=\omega\sqrt{\epsilon_{1}\pm\epsilon_{2}}\). The same can be done for the transmission coefficients: \[T^{+}=\frac{2\omega}{\omega+(k^{+}-ig\theta\omega)}I^{+} \tag{14}\] \[T^{-}=\frac{2\omega}{\omega+(k^{-}+ig\theta\omega)}I^{-} \tag{15}\] Finally, the Kerr rotation angle is related to the argument of the complex reflection amplitudes\(R^{\pm}=|R^{\pm}|e^{i\Delta^{\pm}}\) as \[\Theta_{K}=-\frac{1}{2}(\Delta^{+}-\Delta^{-}) \tag{16}\]
2309.12583
Using ChatGPT in HCI Research -- A Trioethnography
This paper explores the lived experience of using ChatGPT in HCI research through a month-long trioethnography. Our approach combines the expertise of three HCI researchers with diverse research interests to reflect on our daily experience of living and working with ChatGPT. Our findings are presented as three provocations grounded in our collective experiences and HCI theories. Specifically, we examine (1) the emotional impact of using ChatGPT, with a focus on frustration and embarrassment, (2) the absence of accountability and consideration of future implications in design, and raise (3) questions around bias from a Global South perspective. Our work aims to inspire critical discussions about utilizing ChatGPT in HCI research and advance equitable and inclusive technological development.
Smit Desai, Tanusree Sharma, Pratyasha Saha
2023-09-22T02:23:44Z
http://arxiv.org/abs/2309.12583v1
# Using ChatGPT in HCI Research--A Trioethnography ###### Abstract This paper explores the lived experience of using ChatGPT in HCI research through a month-long trioethnography. Our approach combines the expertise of three HCI researchers with diverse research interests to reflect on our daily experience of living and working with ChatGPT. Our findings are presented as three provocations grounded in our collective experiences and HCI theories. Specifically, we examine (1) the emotional impact of using ChatGPT, with a focus on frustration and embarrassment, (2) the absence of accountability and consideration of future implications in design and raise (3) questions around bias from a Global South perspective. Our work aims to inspire critical discussions about utilizing ChatGPT in HCI research and advance equitable and inclusive technological development. Triaethnography, ChatGPT, Large Language Models (LLMs), Situated XAI 1 Footnote 1: [https://www.cds.org/](https://www.cds.org/) ## 1 Introduction Large Language Models (LLMs) have demonstrated remarkable success in a diverse array of natural language tasks, such as machine translation, question answering, and automatic summarization, owing to their state-of-the-art transformer architecture and two-stage training pipeline [55]. The transformer architecture enables LLMs to understand complex relationships between input elements, while their two-stage training process allows them to leverage knowledge acquired during pretraining on large amounts of unannotated data. Among the most prominent LLMs is the ChatGPT [62], an OpenAI-developed conversational artificial intelligence system, boasting more than 175 billion parameters and possessing a multitude of advanced capabilities. While ChatGPT attracted the attention of researchers and various stakeholders for its potential applications in various fields, the use of ChatGPT in education, research, and healthcare for tasks such as scientific writing, optimizing healthcare workflows, and augmenting personalized learning, necessitates a prudent approach. Several studies indicated concerns regarding the potential limitations and biases, including ethical, transparency, legal, as well as risk of bias [30]. For example, concerns regarding ChatGPT use in healthcare include inaccurate content, cybersecurity issues, and the risk of infodemic [8]. Similarly, the possibility of ChatGPT's misuse in medicine and research may accelerate the production of fake evidence and materials that have a high level of plausibility, leading to fraudulent use. As such, the future use of ChatGPT across diverse domains is a subject of current debates, and it is crucial to critically evaluate limitations and potential biases. Despite their meteoric rise in popularity and their widespread adoption in a range of domains, there is a lack of qualitative studies exploring the experiential use of LLMs by experts. Addressing this critical gap in the literature, we conducted a rigorous trioethnography to elucidate our own experiences with ChatGPT. Drawing on our collective insights, we endeavor to chart a new trajectory for the use of ChatGPT in Human-Computer Interaction (HCI) research, articulating a set of thought-provoking implications that have the potential to catalyze further innovation in this rapidly evolving field. ## 2 Background In HCI, there is a growing emphasis on using first-person methods to gain a deeper understanding of the role of technology in everyday life by incorporating the cultural meanings of technology use in context [35]. One such method is autoethnography, which has gained significant attention as a valuable approach to comprehending how technology is utilized in daily life from the perspectives of both developers and researchers. By utilizing lived experiences, autoethnography provides a nuanced and insightful examination of the situated use of technology (e.g., [9, 28, 33, 34, 36]). In contrast to autoethnography, duoethnography and trioethnography focus on the "dialogical" relationships between researchers, with the goal of juxtaposing their experiences to find similarities and differences and construct meaning based on shared realities [48]. We chose trioethnography because we wanted to bring together early-stage researchers with different kinds of HCI expertise and cultural backgrounds to understand the use of ChatGPT. This enabled us to connect as researchers and confront uncomfortable emotions and experiences that might have otherwise gone unexplored or unacknowledged. ## 3 Trioethnographic process & positionality Our approach to the trioethnography is inspired by [27]. To initiate the project, Smit, with expertise in conversational AI, sought to form a team that could offer diverse viewpoints on using LLMs in HCI research. The goal was to write a provocation paper for CUI 2023. Smit connected with Tanusree on social media because of her experience in usable security and governance tooling and shared interest in LLMs. The two researchers then contacted Pratyasha, who specializes in HCI for development (HCI4D) and Explainable AI (XAI), to bring a different perspective to the team. We began our trioethnography on March 6th, 2023. The scope of the trioethnography was limited to using ChatGPT for HCI research or work-related purposes. All other interactions were outside the scope of this study and were not to be recorded. We used a shared Google Doc to journal all our interactions with ChatGPT using annotated screenshots, notes, and reflections. To discuss and juxtapose our experiences, we commented on each other's notes and met weekly for reflective discussions. We recorded and transcribed these meetings and used the transcriptions to write memos about what stood out. We concluded our data collection on April 6th, 2023. Our collective notes, memos, screenshots, and reflections amounted to 10,127 words and 95 pages. We utilized ChatGPT for a variety of purposes, including writing, mundane tasks (such as transcription and image description), information retrieval, and coding. To ground our trioethnography, we share our positionality. Smit was born in Urban India and migrated to the U.S. for a doctorate. His research prioritizes designing accessible conversational interfaces for older adults. Tanusree was born in rural Bangladesh and migrated to the U.S. for a doctorate. Her research involves building frameworks and user-facing tooling for emerging technologies. Pratyasha was raised in and works in Bangladesh as a postgraduate researcher studying social justice, sustainability, and policy design for the Global South. All three authors are HCI researchers with experience in publishing at SIGCHI venues and identify as cisgender BIPOC. ## 4 Provocations As is typical in trioethnographies, we present our individual reflections and direct quotes in a first-person narrative and supplement our analysis by integrating relevant HCI literature to contextualize our experiences and stimulate discussions. The provocations presented in this section serve to accentuate the divergent and unique perspectives of the authors and should not be construed as a unified or synthesized viewpoint. Rather, they are intended as a conflation of discussion and findings to incite further contemplation, inquiry, and, ideally, debates. After studying the primary data, each author proposed several provocations in a meeting after the end of data collection. We selected three based on their ability to spark discussions and ideas among us, their potential relevance to the CUI community, and the theme of CUI 2023 of designing inclusive conversation. ### Cold as Ice, Sharp as Knife: The Emotional Paradox of ChatGPT _Smit: After writing a review for a conference paper, I felt some of my critiques could sound a bit harsh. So, I decided to check with ChatGPT if my review was rude. To my chagrin, ChatGPT called my review "insulting." I did not think it was insulting (at all). I asked ChatGPT to make my review less rude and more constructive. Its output did change the tone of the review but adulterated the meaning. The ChatGPT revised review sounded generic and unactionable. When I told this to ChatGPT, it apologized. But that did not matter as I felt equally embarrassed and frustrated. I decided to submit my original review with some minor edits, but this interaction left a sour taste. Nevertheless, after that, I wrote three more reviews and, despite my previous experience, used ChatGPT to assess the rudeness of them all. I felt I was seeking ChatGPT's approval, even though I understood its limitations._ In response to this reflection, Tanusree and Pratyasha echoed the sentiment and discussed how embarrassment could create friction between a user and ChatGPT. Although notoriously understudied in HCI [15], it is hypothesized that embarrassment stems from the heightened perceived social presence of the 'other' and is considered an anthropomorphizing behavior [10, 12, 42], typically experienced in public settings [13]. Despite all the unfounded hype [45] and fallacious claims [53], all three of us agreed that ChatGPT has no agency--nonetheless, embarrassment persisted. Interestingly, Smit continued using ChatGPT to assess the tone of his reviews. This contradiction became an ongoing joke in future meetings. On retrospective reflection, Smit speculated that his reliance on ChatGPT's approval might stem from feelings of impostor syndrome triggered by the Al's challenge to their expertise. This sentiment is likely exacerbated by OpenAI using standardized examination scores to evaluate the performance of language models like ChatGPT [44]. All three researchers in this study scored less than GPT-4 on Graduate Record Examination (GRE)--a prerequisite (at the time) for applying to graduate schools in the U.S. Seeking ChatGPT's approval perhaps is merely refuge society's implicit (or explicit) tendencies to assess human value using numbers (e.g., [56]). It is not impossible to imagine a future where ChatGPT's h-index would be higher than all human researchers resulting in academics confronting similar insecurities. Another emotion frequently mentioned in reflections and weekly meetings was frustration. Unlike embarrassment, frustration is a commonly experienced and widely studied emotion in HCI [24]. Frustration is experienced when a computing system prevents users from attaining their goals [24]. Tanusree experienced frustration when she was trying to perform an image description task, and ChatGPT kept generating nonsensical descriptions. Similarly, Pratyasha found ChatGPT's spurious response to "literature on digital sex work in Bangladesh" frustrating and misleading. In both cases, the frustration was intensified when ChatGPT failed to take responsibility for its inaccuracies. Its default response, "I am an Al language model...I cannot guarantee the accuracy or relevance of the information provided," is insufficient, as Smit pointed out in the third weekly meeting, "When it's giving you information, there are no such disclaimers. But when you call it out, it quickly falls back and apologizes." This incoherent shift from an omniscient LLM to just an LLM is the most frustrating part of using ChatGPT. The natural tendency of all three researchers in these scenarios was to argue about the veracity of the responses (exemplified in SS4.2). However, these interactions are often pointless and highlight the inscrutable computational infrastructure that users depend on, leading to feelings of alienation and strangeness [50]. Smit explained this feeling using the existential concept of "absurd" [7], emphasizing the frustration of using and arguing with ChatGPT. Although LLMs do not have real emotions, they have abilities to evoke emotions. The consequences of using a system with such abilities could be extreme--evidenced by a man reportedly committing suicide after interactions with an emotive chatbot based on an open-source alternative to the GPT model [59]. The appropriateness of using a "humanness" metaphor [14, 46] as a design tool is a topic of great interest in the CUI community. However, the implications of this deliberate design decision are manifesting expeditiously with LLMs. Even if ChatGPT never identifies itself as a human and denies having the ability to think, feel, or judge, it heightens its social presence using deceptive patterns [37], such as using the pronoun "I" and mimicking the effect of a person typing. However, these patterns can be offset by designing interfaces that are more honest about their capabilities and reduced reliance on anthropomorphism. One effective strategy is incorporating confidence scores, commonly used in Al-assisted decision-making [60]. While such approaches could compromise the glitzy human aspect of ChatGPT, they would lead to a more realistic and reliable use of LLMs, serving as a sounding board rather than an expert. A Self-Agrandizing, Loveable Rogue Al that Just Can't Admit When It's Wrong (and apologizes profusely) _Tanusree: I decided to use ChatGPT as a search engine for literature curation. Though the list of papers curated by ChatGPT seemed legit, my initial excitement turned into frustration when 1 found eight of ten papers nowhere to be found on popular databases and even had either incorrect titles or authors. For example, ChatGPT suggested that the paper "Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review" was by S. Kiranyaz et al. (2019) (halluncinated result), where authors name was incorrect. It even provided me with a paragraph that was supposed to be the abstract and an invalid URL for this elusive paper. I tried to give ChatGPT the benefit of the doubt by asking, "if the paper was legit." It kept feeding me with incorrect responses, even after I gave a hint that "author names and title paper don't match." This back-and-forth continues seven times! It was like talking to a stubborn toddler who refused to accept that the sky was blue. Finally, I provided information that "authors are Rawat, W., & Wang, Z. (2017)", instantly ChatGPT, being the humble AI, apologized and said, "You are right."_ These events evoked a "sentimental" reaction that conveyed a negative connotation [19], compelling Tanusree to elicit an admission of guilt from ChatGPT. In addition, the first few mistakes shaped Tanusree with a negative cognitive and emotional trust towards ChatGPT [21] which diverges from previous literature that posits a low level of trust trajectory followed by a positive development over time [54]. Tanusree was determined to hold ChatGPT accountable and coerce it to utter the phrase "I was wrong" [29]. It further complicates Tanusree's feelings about whether she was seeking empathy (make ChatGPT understand her intention) or accountability [51]. Whereas conceptualizing computer systems as autonomous moral (accountable) agents are debatable; thus, ChatGPT cannot be deemed entirely autonomous (thus, fully accountable) as it operates under the direction of external forces (such as algorithms and data fed into it) [58]. Moreover, ChatGPT's abrupt transition to admitting mistakes on the eighth attempt signaled two concerning indications--(1) its inclination to prioritize users' happiness by providing information, even if inaccurate; and (2) the transition was suggestive of a shift from an unyielding stance to a heightened susceptibility to persuasion, similar to that of a naive child. This parallel was also evident in Pratyasha's experience, highlighting ChatGPT's inadequacy in curating literature for sex workers in the Global South, where ChatGPT primarily directs resources through a moralist lens, potentially leading to bias. Similarly, Smit found ChatGPT's alt-text generation for box plot figures unsatisfactory, as it offered good tips on how to do it but failed to deliver in practice as Smit said: "It talks the talk but does not walk the walk." These experiences highlight the importance of accountability in the development of language models such as ChatGPT, and the need for responsible AI thinking [1]. It raises questions about what the future of language models will look like, especially when, in our case, ChatGPT is inclined to exhibit contrition and amend its prior response when confronted with even a modicum of evidence that "you were incorrect, and this is the correct answer" as Tanusree experienced. Correspondingly, Smit perceives ChatGPT as a potential co-writer, whereas numerous entities have abruptly integrated ChatGPT as a constituent of their existing products. Such considerations raise issues of whether the future constraints ought to be treated as a rubric of legitimation or deception. Furthermore, there could be multiple futures for various stakeholders with design and legitimacy constraints [3], as Garcia's temporal model has shown that designing without considering the future is impossible [20]. In the next 10 years, if we were to adopt ChatGPT in personalization for blind users where they confirm the absence of nudity in an image before posting it on social which fails to identify correctly, it could lead to psychological distress as Tanusree found a completely incorrect image description of a picture within ChatGPT. The ecological survival of human-nonhuman relations requires resilient modes of framing, particularly when used for vulnerable groups (i.e., blind users), political and unrepresentative values exploration, etc. The Futurama exhibit at the 1939 New York World's Fair is an example of how a utopian image of the future (America in 25 years) can eventually impact society in unforeseen ways. In particular, Bel Geddes's General Motors' internal combustion engine failed to account for complex societal consequences, including insurance fraud, decline of automobile-dependent cities in addition to environmental pollution, road rage, and accidents [61]. Although the possibilities that ChatGPT offers may be exhilarating, it is imperative that we consider the potential points of failure and the measures we can take to minimize risks, which are the foundation of accountability and responsible AI thinking. Moreover, it is essential to increase awareness of how different stakeholders negotiate the future as a resource for various purposes as well as embrace multiple futures and design with resilience. ### Situated XAI: The Odyssey of the Marginalized and the Global South _Pratyasha: "We've detected suspicious behavior from phone numbers similar to yours. Please try again later or contact us through our help center at help.openai.com." 1 got five of these warnings when 1 tried to sign up for ChatGPT with my local phone number. 1 attempted a few more times before 1 decided to try again with my mother's contact, but it kept showing the same alerts. 1 was unable to create an account despite repeated attempts, wondering if 1 could make a contribution to the paper I had intended to collaborate on. As the last option, I considered using my sibling's UK phone number to register. He gave me the OTP, and to my surprise, I was in within seconds!_ In contrast to Smit and Tanusree, Pratyasha's initial interaction with ChatGPT was marked by a unique concern regarding the platform's potential for harboring xenophobic tendencies. Considering the AI field's focus on research, development, and design is predominantly centered on the "West" or "Global North" [47], AI systems such as ChatGPT have a disproportionate effect on the marginalized segments of the society [6]. Pratyasha was concerned about being stigmatized as "suspicious" due to her geographical origin, particularly when juxtaposed with a contact from the West. This unique experience was further amplified in the additional usage of ChatGPT within the larger context of Global South when the instances of transgressions against Western cultural norms pertaining to slavery and discrimination were flagged with a red-box warning; however, the reference to the Brahmin caste as the superior, and Shudra as "Chamar" (an extremely offensive term for menial workers in this region) did not elicit any warning regarding policy violations. Even the short biography provided by ChatGPT on world leaders included critiques of Modi, Hasina, or Xi Jinping, yet no criticism was attributed to Biden. In addition to its innate bias against individuals from diverse strata, ChatGPT provoked bias in regular conversations fostering additional prejudicial views regarding gender like other AI systems [31]. While Pratyasha's language holds a gender-neutral pronoun for individuals, the translation service by GPT assigned gender to specific tasks within the translated sentences. (Ex: "He reads," "He earns money," "She cooks," and "She cleans the house", etc.) Social stereotypes and discrimination may be the outcome when the data used to train a language model contains biased representations of particular groups of people. For marginalized groups, this lack of fairness can prevent them from trusting these models and result in inaccurate or biased predictions about such populations [32, 57]. The flawed stance demonstrated by ChatGPT towards content pertaining to the context of the South was further substantiated by the false and misleading classification of sex work as "illegal" in Pratyasha's country while searching for relevant literature on digital sex work. For both research and design purposes within the context of this region, Pratyasha encountered difficulties in relying on ChatGPT without manual verification. Introducing transparency, and XAI here could provide end users explanations to foster greater interaction by enabling them to act out and retrace AI/ML results, for instance, to check the correctness of the results [26]. The more critical use of ChatGPT revealed crucial concerns about how the society's most vulnerable, impoverished, and underprivileged groups can suffer unfavorably from AI systems [6, 18]. In a comprehensive dialogue with ChatGPT regarding the design of a platform cooperative [49] that is democratically governed and caters to the needs of marginalized domestic workers in Pratyasha's locality, the algorithm consistently advocates for a business model similar to gig job platforms, claiming to empower workers through secure employment opportunities. This model, managed by third parties instead of the workers themselves, has been proven to be facilitating severe exploitation and abuse of the intended beneficiaries [2, 23, 25], which helps reinforce the capitalist ideology and stands in stark opposition to the notion of "cooperative." This provocation deliberately questions the fairness, transparency, and trustworthiness of ChatGPT in the context of Global South, and advocates for an explainable system situated in the context of its users. The potential of AI to address significant issues in the Global South, such as healthcare, poverty, agriculture, education, and other high-stakes sectors, has attracted increasing interest from governments, businesses, and academia in recent years [4, 22, 38, 39, 40, 52]. Evidently, AI has been known to aggravate and promote prevalent social issues like bias and prejudice [5, 41]. Therefore, it is important to take steps to ensure that AI systems are accessible and easy to comprehend for the individuals who will use them, especially those from underrepresented groups. Since scale and complexity are what currently enable successful AI, the inevitable calls for transparency led by the Black Box problem will be difficult to address. Ensuring explainability could be considered in this regard; nonetheless, this is terribly understudied how it can be efficient for the marginalized groups in contrast to the relatively techno-literate users of the Global North [43]. The integration of XAI systems in the Global South countries can be beset by the detrimental reflection of contextually inaccurate data and a lack of situated awareness during implementation, which may result in misplaced trust and over-estimation of AI capabilities [17, 43]. Here is where the term "situated XAI" [16] can come into play, concentrating on what explainability, autonomy, control, and trust essentially mean to individuals from diverse backgrounds instead of looking at how people interact with technologies. The establishment of an AI global governance entity [11] with scrupulous attention to the context of Global South is vital for effectively addressing the social, economic, and political disruptions that exceed the purview of individual governments, corporations, and academic or civil society groups. Instead of following the North-centric notion of XAI, in order to understand how explainability can be fostered more effectively within these communities, we advocate for further research and study into the implications of socially situated XAI. ## 5 Conclusion In this paper, we use a month-long trioethnography to reflect on the use of ChatGPT in HCI research. Using our collective experiences, we reflect on (1) the impact of ChatGPT on our emotional states with an emphasis on frustration and embarrassment, (2) the lack of accountability and consideration of the future as a design rubric and raise (3) questions about bias from a Global South perspective. We hope these provocations serve as a call to action for the CUI community and direct focus on the need for further research, including design guidelines and governance, to address the diverse range of uses and users in this rapidly expanding field.
2309.13851
DISeR: Designing Imaging Systems with Reinforcement Learning
Imaging systems consist of cameras to encode visual information about the world and perception models to interpret this encoding. Cameras contain (1) illumination sources, (2) optical elements, and (3) sensors, while perception models use (4) algorithms. Directly searching over all combinations of these four building blocks to design an imaging system is challenging due to the size of the search space. Moreover, cameras and perception models are often designed independently, leading to sub-optimal task performance. In this paper, we formulate these four building blocks of imaging systems as a context-free grammar (CFG), which can be automatically searched over with a learned camera designer to jointly optimize the imaging system with task-specific perception models. By transforming the CFG to a state-action space, we then show how the camera designer can be implemented with reinforcement learning to intelligently search over the combinatorial space of possible imaging system configurations. We demonstrate our approach on two tasks, depth estimation and camera rig design for autonomous vehicles, showing that our method yields rigs that outperform industry-wide standards. We believe that our proposed approach is an important step towards automating imaging system design.
Tzofi Klinghoffer, Kushagra Tiwary, Nikhil Behari, Bhavya Agrawalla, Ramesh Raskar
2023-09-25T03:35:51Z
http://arxiv.org/abs/2309.13851v1
# DISeR: Designing Imaging Systems with Reinforcement Learning ###### Abstract Imaging systems consist of cameras to encode visual information about the world and perception models to interpret this encoding. Cameras contain (1) illumination sources, (2) optical elements, and (3) sensors, while perception models use (4) algorithms. Directly searching over all combinations of these four building blocks to design an imaging system is challenging due to the size of the search space. Moreover, cameras and perception models are often designed independently, leading to sub-optimal task performance. In this paper, we formulate these four building blocks of imaging systems as a context-free grammar (CFG), which can be automatically searched over with a learned camera designer to jointly optimize the imaging system with task-specific perception models. By transforming the CFG to a state-action space, we then show how the camera designer can be implemented with reinforcement learning to intelligently search over the combinatorial space of possible imaging system configurations. We demonstrate our approach on two tasks, depth estimation and camera rig design for autonomous vehicles, showing that our method yields rigs that outperform industry-wide standards. We believe that our proposed approach is an important step towards automating imaging system design. Our project page is [https://tzofi.github.io/diser](https://tzofi.github.io/diser). ## 1 Introduction Cameras are ubiquitous across industries. In autonomous vehicles, camera rigs provide information on the ego-vehicle's surroundings so it can navigate; in biology, microscopy allows new viruses to be studied and vaccines to be developed; and in AR/VR systems, advanced headsets provide immersive reconstructions of the user's surroundings. In each of these applications, camera configurations must be carefully designed to capture relevant information for downstream tasks, often done with perception models (PMs). PMs are typically implemented as neural networks and use the output of cameras to predict information such as where other vehicles are on the road, what type of molecule is present in a biological sample, or where the user is located within a virtual environment. Yet, despite their interdependence, cameras and PMs are often designed independently. Designing camera systems is non-trivial due to the vast number of engineering decisions to be made. For example, consider designing a camera rig on an autonomous vehicle. Suppose the ego-vehicle is limited to up to \(5\) lidar sensors, \(5\) radars, and \(5\) RGB sensors, with \(1{,}000\) possible spatio-temporal resolutions. If there are \(1{,}000\) discrete candidate camera positions on the ego-vehicle, the search space expands to \(10^{8}\) different configurations. In practice, the search space can become many orders larger with more possibilities for each imaging system building block. Furthermore, because the search space is non-differentiable, there exists a need to develop efficient methods to effectively traverse the search space for an optimal imaging configuration. In our paper, we propose using reinforcement learning (RL) to automate search over imaging systems. We first define a language for imaging system design using context-free grammar (CFG), which allows imaging systems to be represented as strings. The CFG serves as a search space for which search algorithms can then be used to automate imaging system design. We refer to such an algorithm as a camera designer (CD) and implement it with RL. RL allows us to search over imaging systems without relying on differen Figure 1: **Overview:** The camera designer selects imaging hardware candidates, which are used to capture observations in simulation. The perception model is then updated and computes the reward for the camera designer using the captured observations. In our paper, we implement the camera designer with reinforcement learning and the perception model with a neural network. tiable simulators and can scale to the combinatorially large search space of the CFG. Inspired by how animal eyes and brains are tightly integrated [28], our approach jointly trains the CD and PM, using the accuracy of the PM to inform how the CD is updated in training (Fig. 1). Because searching over the entire CFG is infeasible with available simulators, we take the first step of _validating_ that RL can be used to search over subsets of the CFG, including number of cameras, pose, field of view (FoV), and light intensity. First, we apply our method to depth estimation, demonstrating the viability of jointly learning imaging and perception. Next, we tackle the practical problem of designing a camera rig for AVs and show that our approach can create rig that lead to higher perception accuracy than industry-standard rig designs. While AV camera rigs are one of many potential applications of our method, to the best of our knowledge, we are among the first to propose a way to optimize AV camera rigs. Our paper makes the following contributions: * **Imaging CFG**: We introduce a context-free grammar (CFG) for imaging system design, which enumerates possible combinations of illumination, optics, sensors, and algorithms. The CFG can be used as a search space and theoretical framework for imaging system design. * **Co-Design**: We demonstrate how task-specific camera configurations can be co-designed with the perception model by transforming the CFG into a state-action space and using reinforcement learning (Fig. 2). Our approach can converge despite the reward function being jointly trained with the policy and value functions. * **Experimental Validation**: We demonstrate our method for co-design by applying it to (1) the task of depth estimation using stereo cues, and (2) optimizing camera rigs for autonomous vehicle perception, showing in both cases that camera configuration and perception model can be learned together. ## 2 Related Work ### Joint Optimization of Optics & Algorithms Our work is most closely related to end-to-end optimization of cameras, which is an area of research focused on jointly optimizing components of cameras together with an algorithm, typically a neural network. Instead of relying on heuristics, the goal of end-to-end optimization is to produce images that optimize the pertinent information required for the task. Existing work primarily focuses on optimizing the parameters of the optical element, sensor, and image signal processor of a single camera. Applications of end-to-end optimization include extended depth of field and superresolution imaging [41], high dynamic range (HDR) imaging [34, 42], demosaicking [9], depth estimation [2, 11, 22, 23], classification [10] and object detection [12, 36, 37]. Tseng _et al._[43] employ gradient descent on a non-differentiable simulator by training a proxy neural network, whereas we directly operate on the non-differentiable simulator with RL. For a more comprehensive review of end-to-end optimization, we refer readers to [26]. In contrast to end-to-end optimization methods, we focus on optimizing over the much larger space of possible imaging system designs, rather than the parameters of an individual camera. Our search space contains varying illumination sources, optics, sensors, and algorithms, each with many parameters. Rather than using stochastic gradient descent for optimization, we use reinforcement learning, allowing our approach to be used with non-differentiable simulators. Figure 2: **Approach: Our approach allows a camera configuration and perception model (PM) to be co-designed for task-specific imaging applications. At every step of the optimization, the camera designer (CD), implemented with reinforcement learning, proposes candidate camera configurations (1-2), which are used to capture observations and labels in a simulated environment (3-4). The observations and labels are added to the perception buffer (5) and used to compute the loss and reward, while the \(N\) most recent observations in the perception buffer are used to train the PM. The reward is propagated to the CD agent which proposes additional changes to the candidate camera configuration. After the episode terminates, the CD agent is trained using proximal policy optimization (PPO) [39] until convergence.** ### Reinforcement Learning Deep reinforcement learning (RL) has become widely used in recent years as a way to do sequential decision making for a wide array of problems, such as protein folding [24], learning faster matrix multiplication [17], and automated machine learning [3]. Many RL techniques focus on the _exploration-exploitation_ trade-off, where an agent must learn to balance exploring new states with exploiting previously visited states that lead to high reward. RL is also used for many combinatorial optimization problems [33]. In our work, we take inspiration from automated chip placement [35], which, like our approach, is formulated to allow an RL agent to place a new component at every step and select the placement of that component. Like many other problems RL has been applied to, imaging contains a high dimensional search space. In our work, we use proximal policy optimization (PPO) [39], which has been used for combinatorial search in past work [45]. Context-free grammars (CFGs) have been used to design machine learning (ML) pipelines, which are combinations of data-flows, ML operators, and optimizers [15][32][25]. Typically, ML pipeline design is done via a search over strings in the CFG using tree search algorithms, such as Monte Carlo tree search or upper confidence trees [27][44]. CFGs have also been adopted for robot design [46], molecule generation [20], and material design [19]. We use CFG to functionally represent imaging systems as combinations of illumination, sensors, optics and algorithms such that the output string describes a camera configuration and perception model that can be used to solve a desired task. ## 3 Automated Imaging System Design ### Language for Imaging We define the configuration space of imaging systems using context-free grammar (CFG) as it allows for a flexible configuration space that can be searched. A typical context-free grammar, \(G\), is represented as a tuple, \(G=(V,\Sigma,P,R)\), where \(V\) corresponds to non-terminal symbols in the grammar, \(\Sigma\) corresponds to terminal symbols, \(P\) corresponds to the production rules, and \(R\) is the start symbol. The goal of our proposed CFG is to allow the construction of strings to represent arbitrarily complex imaging systems, which usually consist of illumination sources, optical elements, sensors to convert light into digital signals, and algorithms that decode the scene. For example, consider the task of depth estimation that can be done in numerous ways. One solution is depth from stereo, which involves placing two cameras, \(c_{1},c_{2}\), in the scene at points, \(p_{1},p_{2}\), with some baseline. Each camera has an optical element, \(o_{1}=(f\),\(d)\), with a focal length, \(f\), and aperture, \(d\), and a sensor, \(s_{1}=((h,w),t)\), with spatial and temporal resolutions, \((h,w)\) and \(t\), respectively. Thus the cameras can be expressed as \(c_{1}=(o_{1},s_{1})\) and \(c_{2}=(o_{2},s_{2})\). An algorithm can decode the outputs of the two cameras to produce depth, and can be implemented with correspondence-matching [6], (\(a_{st}\)), or deep stereo [31], (\(a_{ds}\)). The full system can be described as a string, \(s_{1}=``c_{1}c_{2}a_{st}"\) or \(s_{2}=``c_{1}c_{2}a_{ds}"\). Another way to estimate depth is with active illumination or time-of-flight (ToF) imaging. We can represent lidar as an algorithm, \(a_{\text{control}}\), that illuminates the scene at the same point with a laser, \(l_{1}\), and ToF sensor, \(s_{ToF}\). We can describe this system as \(s_{\text{slibar}}=a_{\text{control}}l_{1}s_{ToF}a_{ToF}\). These examples illustrate how CFG can represent imaging systems with different illumination, optics, sensors, and algorithms as strings. The goal of the proposed CFG is not to describe how the individual components of an imaging system are made, e.g. their electronics, but rather to describe the function of each component. Next, we define the grammar's alphabet and production rules. **Grammar.** Our proposed CFG can be stated as \(G=(V,\Sigma,P,R)\). We define the variables as \(V=\{\mathrm{X,O,A_{1},A_{2}}\}\), each defined in the following sections, and the terminals, \(\Sigma\), which we refer to as alphabets, as \(\Sigma=\{\mathcal{I},\mathcal{O},\mathcal{S},\mathcal{A}_{1},\mathcal{A}_{2}\}\), where \(\{\mathcal{I}\}\) is illumination, \(\{\mathcal{O}\}\) is optics, \(\{\mathcal{S}\}\) is sensors, and \(\{\mathcal{A}_{1}\}\) and \(\{\mathcal{A}_{2}\}\) are algorithms. Each alphabet contains possible components and parameters, defined in lower case, e.g. \(a_{nn}\). Each component within an alphabet is parameterized by its functionality, e.g. focal length, rather than an off-the-shelf component. We describe each alphabet below and in Fig. 3. **Illumination.** The illumination alphabet, \(\mathcal{I}\), functionally represents different types of possible illuminations. In imaging, illumination can be represented with many param Figure 3: Context-free grammar (CFG) for imaging: Production rules (1-5) and alphabets (6-10) for our proposed CFG for designing imaging systems. \(R\) is the starting symbol from which a design starts. All imaging systems must have at least one sensor, \(\mathcal{S}\), and one algorithm, \(\mathcal{A}\). The grammar allows arbitrary physically plausible combinations of illumination (\(\mathcal{I}\)) optics (\(\mathcal{O}\)), sensors (\(\mathcal{S}\)), and algorithms (\(\mathcal{A}\)), each defined in their respective alphabet above. \(A_{1}\) refers to algorithms that process the output of hardware, while \(A_{2}\) refers to algorithms that control hardware. eters, such as duration \((d)\), intensity \((i)\), color, wavelength \((\lambda)\), polarization \(\eta\), pose (position & orientation), \((p)\) and modulation in space and time [5]. In the scope of this work, we consider pose and intensity. These can later be extended to other forms of illumination. **Optics**. We define the optics alphabet, \(\mathcal{O}\), to capture the most important (but not exhaustive) optical properties in an imaging system: focal length \((f)\) and aperture \((D)\). The optics alphabet can be extended to include more complex techniques such as phase masks or diffractive optical elements (DOE). The non-terminal O indicates that optical elements can be stacked to create a multi-lens system. **Sensors.** The sensor alphabet, \(\{\mathcal{S}\}\), functionally describes different types of sensors, such as RGB and SPAD. We parameterize a sensor by its pose \(s_{p}\), spatial (or angular) resolution \(s_{hw}\), temporal resolution \(s_{t}\), bit quantization \(s_{q}\) and wavelength \(s_{\lambda}\). For example, a SPAD sensor has higher temporal resolution (picosecond scale) and generally lower spatial resolution (on the order of 1,000 to 100,000 pixels), while a typical RGB sensor (CMOS) has a higher spatial resolution (hundreds of megapixels), but a lower temporal resolution (30 fps). Similarly, quantization (for example) can be varied between 1, 8 or 12 bits. The pose is the position \((x,y,z)\) and the orientation (pitch, yaw, and roll) of the sensor in 3D space, \(s_{p}\in\mathbb{R}^{6}\). **Algorithms**. Algorithms are needed to decode raw images and control other alphabets. We denote the alphabet for algorithms with two sets: \(\{\mathcal{A}_{1},\mathcal{A}_{2}\}\). \(\mathcal{A}_{2}\) is the set of algorithms that affect subsequent illumination, optics, and sensors (e.g. autofocus, controlling where to shine illumination), whereas \(\mathcal{A}_{1}\) are algorithms that decode the incoming data from the sensors for a given task. These algorithms include standard imaging operators, such as the Fourier transform, backprojection, Radon transform, Gerchberg-Saxton algorithm, photometric stereo, and more. Additionally, \(\mathcal{A}_{1}\) includes neural networks, which can perform detection, classification, etc. Due to the production rule, \(\mathrm{A}\rightarrow\mathcal{A}_{1}\mathrm{A}|\mathcal{A}_{1}\), \(\mathcal{A}_{1}\) can be repeated and stacked together. For example, an algorithm can be designed that takes the Fourier transform of the input data and feeds it through a multilayer perceptron (MLP). **Production Rules.** We define a set of production rules, shown in Fig. 3, that can produce strings representing possible imaging system configurations. In our formulation, every imaging system includes at least one sensor and algorithm. The X accounts for imaging systems with different illumination, optics and sensors. In all cases, the string must end with at least one algorithm that outputs the desired task. Additionally, each \(\mathcal{A}_{2}\) also requires an illumination, optics component, or sensor that it controls. The production rules account for multiple sensors and illuminations that illuminate and sense different parts of the scene. ### Imaging Design with Reinforcement Learning The proposed context-free grammar (CFG) defines ways of combining illumination, optics, sensors, and algorithms to form an imaging system. The goal of our work is to automate imaging system design by searching over the CFG. Because the output of the cameras in the imaging system must be well suited for a specific, downstream task, we co-design them with the task-specific perception model (PM). We next propose using a learned camera designer (CD) to automatically search over the CFG. We implement the CD with reinforcement learning (RL) because (1) the combination of continuous variables in our CFG causes an explosion in the search space, which, as a result, makes search with methods such as Monte Carlo tree search (MCTS) [7] or alpha-beta search [38] intractable, and (2) many advanced imaging simulators are not differentiable [18, 16, 21], and thus gradient descent cannot be directly applied. Our problem is well suited for sequential decision making because the task performance achieved with each choice of camera configurations directly affects subsequent design choices. **Overview:** Our approach is illustrated in Fig. 2. The input is a task-specific loss and reward function. When optimization starts, the imaging system contains no hardware. At each step, the CD selects whether to add a component into the system and the component's parameters (Fig. 2a-b). A simulator can then be used to collect observations from the candidate camera configuration (Fig. 2c). These observations are used by the perception model to compute the reward and loss (Fig. 24-7). The reward is used to train the CD and the loss is used to train the perception model. This loop repeats until a camera configuration and perception model have been created that maximize task accuracy. **RL Formulation:** We transform the CFG into a state-action space which the RL agent, henceforth referred to as the CD, can search over. We use proximal policy optimization (PPO) to train the CD and model the RL problem with the following states, actions, and rewards: * states, \(S\): the possible states of the world, which, in our case, are the possible enumerations of illumination, optics, and sensors, and possible observations that can be captured from each enumeration. * actions, \(A\): the actions an agent can take at any step, which, our case, consist of choosing illumination, optics, sensors, algorithms, and all parameters. * reward, \(R\): the reward for taking an action in a state, which, in our case, is computed by passing observations from the candidate camera configuration into the PM to compute accuracy for a target task. **Simulation & Environment**: Unlike standard RL problems where the agent acts based on observations from a fixed sensor, the observations provided to the CD can change, meaning the CD has to learn how to act with varying input (e.g. varying numbers of images, sensor parameters, etc). The simulator should thus be able to render data from all potential imaging systems that can be derived from the CFG. Because simulators that encompass the entire CFG are not available, we search over subsets of the CFG to validate our method. While we use a simulator, a dataset can also be used with offline RL approaches [29]. **Perception Model:** In our experiments, we set the algorithm, \(\mathcal{A}\), to be a trainable neural network (NN). The NN's role is to produce a task prediction given arbitrary observations from candidate camera configurations. The NN must be able to map a varying input (number of observations, modality, etc.) to a fixed output. For example, the CD may increase the number of sensors in the system beyond one, leading to multiple observations. We propose using transformers to mitigate this problem since they map a dynamic number of observations to a fixed-size feature embedding by converting inputs into sequences of patches [40]. To reduce noise in the gradients when jointly training the PM with the CD, we propose a perception buffer (Fig. 2.5), which stores the previous \(N\) observations from candidate camera configurations, allowing the PM to be trained over all data in the buffer at each step. ## 4 Experiments and Results **Overview:** We apply our method to two problems, both of which exercise a subset of our proposed CFG to validate DISeR. First, we show how DISeR can jointly learn a camera configuration and perception model to solve depth estimation. Second, we apply DISeR to a practical engineering problem of designing camera rigs for AVs. The same formulation is used in both problems: at each step of optimization, the CD chooses whether to add a camera to the imaging system by predicting an action, \(p\), in \([0,1]\), referred to as camera placement probability, along with camera parameters. When \(p\) is greater than a threshold of \(0.5\), a camera is added with the predicted parameters. The camera parameters for each problem are shared in the sections below. In both problems, we compare our approach against random search, which we note is often very difficult to beat [4][48]. ### Stereo Depth Estimation #### 4.1.1 Experimental Setup EnvironmentThe goal of the first experiment is to estimate the depth of a sphere using stereo cues. The CD is allowed to place a maximum of \(C\) cameras in the scene (though it can also place fewer cameras). In theory, the CD could place a single camera and learn monocular cues (e.g. shading/lighting, texture, linear perspective). However, we simulate an environment where monocular cues are unavailable, making monocular depth estimation ill-posed. Our environment consists of a randomly placed white sphere with a random radius, as shown in Fig. 4. We use PyRedner [30] to render images. The sphere position and radius are randomly sampled per episode from \((r,x,z)=\{r\in[3,9],x\in[-10,10],z\in[1,60]\}\). The depth is the \(z\) distance from the sphere to the average position of the placed cameras. The scene is illuminated such that shading cues and the position of the light source are absent as cues. Figure 4: **Depth from Stereo Setup**: The goal of this experiment is to estimate the depth of a sphere using stereo cues. The camera designer (CD) places up to \(C\) cameras within the green box. Camera poses and images are input to the perception model (PM) which outputs a predicted depth. We render environments that are devoid of monocular cues to force (1) the CD to learn to obtain multi-view cues and (2) the PM to learn to exploit these cues. Figure 5: **Joint Camera and Perception Design for Stereo Depth.** We train the CD and PM from scratch to estimate depth of a sphere. (a) Our reward function consistently improves, even though it constantly changes due to the PM concurrently training with the CD. (b) The CD learns to maximize the baseline between different cameras over the course of \(1000\) experiments when placing \(3\) cameras. (c) The loss decreases with more placed cameras and larger distances between the cameras, which shows that the PM learns to exploit multi-view cues. The only feedback that the PM and CD receive is a loss between the predicted and ground truth depth. The goal of rendering such an environment is to determine whether the CD can adapt to the context and realize that only a multi-view system can estimate depth. In parallel, the PM learns to exploit multi-view stereo cues. We show the supervised results of this experiment for validation in the supplement. **Action Space:** The action space for depth estimation is \((p,x,z,\theta)=\{p\in[0,1],x\in[-15,15],z\in[69,80],\theta\in[-60^{\circ},60^{ \circ}]\}\), where \(p\) is camera placement probability, \((x,z)\) is location (see Fig. 4) and \(\theta\) is yaw. FoV is \(45^{\circ}\). **Experiment Details**: We use a modified version of the vision transformer (ViT) architecture [13][1] that accepts an arbitrary number of images of fixed resolution and their corresponding camera parameters as input, and outputs a scalar depth. The spatial resolution is fixed to \((128,128)\). The maximum number of cameras the CD can place is set to \(C=5\). The CD's PPO backbone and the perception model share the same network architecture and are initialized randomly. The reward is computed before updating the perception model and is re-scaled to \([-1,1]\). Additional information about the training is provided in the supplement. #### 4.1.2 Results and Discussion We evaluate the joint training (Fig. 5a), the learned policy (Fig. 5b), and the perception model (Fig. 5c) in isolation. Fig. 5a illustrates how our system maximizes reward when co-designing the PM with the camera design. The reward function is dictated by the output of the PM, but the PM is concurrently training with the camera design, which results in inconsistent rewards during training for the same states. In spite of this fact, our model is able to consistently increase the reward, even at the beginning of training when the PM is untrained and with random initialization. Our results show that the CD and PM are able to learn intuitions that hold true in conventional multi-view stereo. Strategy #1 - Maximize Coverage:When given the option to place up to \(5\) cameras, the CD places 1 camera \(7.6\%\) of the time and 2, 3, 4, and 5 cameras \(27.7\%\), \(36.6\%\), \(22.7\%\), \(5.4\%\) times, respectively. Fig. 5b shows the heatmaps of where the CD decides to place each camera, specifically when the CD chose to place exactly three cameras. The heatmaps denote the number of times the CD placed the camera at a particular location over the course of \(7000\) experiments, where each experiment denotes the placement of a new random size sphere at a random location. From the heatmaps, we see that the CD strategically placed the cameras at locations that maximize the baselines between different cameras. Camera 1 was predominantly placed in the left side of the allowed region, camera 2 at the center bottom, and camera 3 at the right. From these results, we see that the CD optimizes to place more cameras spaced far apart. However, placing more cameras doesn't necessarily mean that the CD is obtaining multiple views of the object (e.g. some cameras may be pointed in the opposite direction of the object). Therefore, we account for this case by defining the metric of _coverage_, which defines the number of cameras that have at least one pixel viewing the object. The CD policy learns a configuration which maximizes coverage of the allowed region. We find that performance improves as coverage increases from \(0\) to \(3\), with the L1 loss being 14.0, 9.2, 7.2, and 5.7 as the coverage increases. Coverage is discussed in detail in the supplementary. **Strategy #2 - Multi-View Cues and Maximal Baseline:** Fig. 5c shows that the PM learns to exploit stereo cues when presented with multiple images. The experiment shown here compares the PM performance on a one-camera, two-camera, and three-camera system when estimating the depth of a sphere (averaged over \(1000\) different spheres of varying size and depth). All three systems have a camera that can be moved along the \(x\) axis, the two- and three-camera system have a fixed camera at \(x=-15\), and the three-camera system has an additional fixed camera at \(x=0\). The blue curve illustrates the L1 loss between the ground truth and one-camera system predictions. The red and green curves illustrate the performance of the two-camera and three-camera system respectively. The three-camera system performs slightly better than the two-camera system, and both perform significantly better than the one-camera system. The multi-view systems also see a decrease in loss (and variance) as the baseline between the cameras increases (i.e. as the movable camera moves along the \(+x\) axis). These curves indicate that the PM has learned similar wisdom to that of conventional stereo - multiple views with a large baseline enable better depth estimation [5]. While gradient descent could also be used to learn to maximize baseline given a differentiable simulator, we use RL, which can be used with non-differentiable simulators to search over both number of cameras and their baseline. **Searching Illumination:** We also repeated the above experiment with an expanded action space that includes angle and intensity of a single spot light at a fixed position. To estimate the depth of the sphere, the CD must learn to sweep the light over the scene with a sufficiently high intensity until the sphere is illuminated. At each step, angle and intensity can be changed within the bounds of \([-60^{\circ},60^{\circ}]\) and \([0,1]\), respectively, where 0 leaves the scene dark and 1 illuminates it. We found that the CD learns to increase the intensity so the sphere can be illuminated and change the angle such that the number of illuminated pixels on the sphere consistently increases over the episode. ### Camera Rigs for Autonomous Vehicles Next, we describe how our method can be used to optimize an AV camera rig for the perception task of bird's eye view (BEV) segmentation by jointly training the CD and PM. We validate our approach with three sets of experiments, described in the Experiment Details section below. We find that the rigs created with our approach lead to higher BEV segmentation accuracy in our environment compared to the industry-standard nuScenes [8] rig. Our camera rig search space and results are visualized in Fig. 6. #### 4.2.1 Experimental Setup **Environment:** We use the CARLA Simulator [14] to render observations from candidate camera rigs selected by the CD during training. For every camera on the candidate rig, the environment returns images, extrinsics, intrinsics, and 3D bounding box labels of vehicles in the scene. The 3D bounding boxes are used to compute the reward (for training the CD) and loss (for fine-tuning the PM). We use the same CARLA environment to create 25,000 samples rendered from randomly generated camera rigs to pre-train the PM for the task of BEV segmentation. **Action Space:** The action space for AV camera rig design is (\(p\),\(x\),\(y\),\(z\),\(\theta\),\(\beta\),\(\lambda\)) = \(\{p\in[0\),\(1],x\in\eta_{x},y\in\eta_{y},z\in[z_{max},z_{max}+0.5m],\theta\in[-180^{ \circ},180^{\circ}),\beta\in[-20^{\circ},20^{\circ}],\lambda\in[50^{\circ},120^ {\circ}]\}\), where \(p\) is the camera placement probability, (\(x\),\(y\),\(z\)) is location, \(\theta\) is yaw, \(\beta\) is pitch, and \(\lambda\) is FoV. \(\eta_{x}\) and \(\eta_{y}\) are the extents of the ego-vehicle in x and y, respectively, and \(z_{max}\) is the height of the ego-vehicle, meaning cameras can be placed anywhere within 0.5 meters (m) above the ego-vehicle. This action space conforms with rooftop rigs used in industry and the size and height of the roof match the Renault Zoe from nuScenes. **Experiment Details:** We use a recent BEV segmentation model, Cross View Transformers (CVT) [47], as the PM. It is first pre-trained on a dataset containing randomly placed cameras to allow it to more easily generalize to all candidate camera rigs that the CD may select. We then use the pre-trained CVT model to initialize the PM and CD's PPO \begin{table} \begin{tabular}{l l l} \hline \hline & IoU (Expt. a) & IoU (Expt. b) \\ \hline Random Rig & 0.254 & 0.084 \\ nuScenes Rig & 0.267 & 0.355 \\ Our Rig & **0.341** & **0.427** \\ \hline \hline \end{tabular} \end{table} Table 1: We compare the BEV segmentation IoU for models trained and tested with a random rig, nuScenes rig, and our approach’s rig. CARLA train and test scenes are the same for each. Our rig achieves higher performance than industry standards. Figure 6: **Autonomous Vehicle (AV) Camera Rig Task & Results: We demonstrate that our approach can be used to create AV camera rigs that are optimized for BEV segmentation. (Left) Our search space is shown – in expt. a, we optimize the height, pitch, and FoV of a single camera rig, while in expt. b and c, we optimize # cameras, x, y, z, pitch, yaw, and FoV. Results for each experiment are shown and we compare the optimized camera rig to the camera rig used in nuScenes [8]. In expt. c, the camera designer learns to place fewer cameras in only the direction where cars are placed. We also show the BEV segmentation predictions of our jointly trained perception model.** Figure 7: **Results for AV Camera Rig Co-Design: Shown are the reward curves for the CD optimizing camera rigs for BEV segmentation. Reward is intersection over union (IoU). To demonstrate the effectiveness of co-designing the camera configuration with the perception model (PM), we show results when the PM is pre-trained and frozen (blue) vs. pre-trained and fine-tuned (green). Compared to random search (red), where actions are uniformly sampled from a random distribution at each step, our approach significantly outperforms, and discovers camera rigs that increase BEV segmentation IoU.** backbone. Finally, we train the CD to optimize camera rigs. The PM uses the observations from each candidate rig and 3D bounding box labels to compute a reward (IoU) and loss (binary segmentation loss). The reward is used to update the candidate camera configuration, while the loss is used to update the PM. We conduct three sets of experiments, one with a single camera rig (expt. a), one with a multi-camera rig (expt. b), and one with a custom scenario and penalty for placing many cameras (expt. c). Each experiment is conducted with a frozen and a jointly trained PM. Fig. 7 shows that joint training leads to higher rewards (IoU). * **Expt. a:** The CD exercises a limited action space, including only \((p,\)\(z\),\(\beta\),\(\lambda)\) for a single camera on the front of the ego-vehicle. We use the same formulation as described above, but, at each step, if the CD places a camera, the previous camera is _replaced_, rather than the new one being added on the rig. After six steps, the episode terminates. * **Expt. b:** The CD exercises the full action space, including \((p,\)\(x\),\(y\),\(z\),\(\theta\),\(\beta\),\(\lambda)\). Cameras are placed within a bounding box on top of the ego-vehicle, as shown in Fig. 6. For comparison with the nuScenes rig, which has six cameras, we set the episode length to six, so at most six cameras can be placed. * **Expt. c:** This experiment includes two modifications to Expt. b. First, a penalty is enforced each time a camera is added to the rig to disincentivize the CD from placing unnecessary cameras. Second, the distribution of vehicles during training is changed to only be in front of the ego-vehicle to demonstrate that the CD can customize its rig design to specific scenarios. We collect data on a Tesla Model 3 (TM3) since Renault Zoe (RZ) is not available in CARLA (placing cameras within the bounds of the RZ roof). Since TM3 is slightly smaller than RZ, this does not significantly affect what the cameras see. Our approach is flexible and the action space can be changed or other constraints added per requirements. **Evaluation Protocol:** After training, we use the following protocol to evaluate the quality of the CD-optimized camera rigs. First, we test the CD over 100 episodes, saving the candidate camera rig and sum of rewards at the end of each episode. We then select the top 20 rigs based on their sum of rewards. We fix these rigs and evaluate them over more episodes (20), again recording their sum of rewards. We sort the top twenty rigs by their sum of rewards and select the rig with the top reward, which we call the selected rig. We test the efficacy of the selected rig by comparing its BEV performance to that of the nuScenes [8] rig. To compare BEV performance, we collect 25,000 training images and 5,000 test images in CARLA using both our selected rig and the nuScenes rig. We do this by deploying both rig on a Tesla Model 3. Next, we train one BEV segmentation model for each rig, using the collected training data. Finally, we test both BEV segmentation models on the corresponding test dataset captured from that rig. By collecting train and test data in the exact same CARLA scenes, we ensure a fair comparison. The test IoU then serves as a final measure of the selected rig's utility for BEV segmentation. #### 4.2.2 Results and Discussion As shown in Fig. 7, the CD significantly outperforms random search, and we observe that the rewards consistently increase over time across experiments. While using the pre-trained, frozen PM allows the CD to create camera configurations that increase BEV segmentation accuracy, jointly training the PM and CD together yields the best results. We note that the pre-trained CVT model (before RL) has 9% and 11% IoU for expts a and b, respectively, due to the challenging nature of fitting across many rigs. This IoU is improved during joint training. Using the above evaluation protocol for our CD, we find rigs created with the CD, for both experiments a and b, outperform the nuScenes rig on the task of BEV segmentation in our CARLA environment, as shown in Table 1. The top rigs for each experiment, example images from our rig vs. nuScenes, and a PM prediction are shown in Fig. 6. In expt. b, the CD can place up to the number of cameras in nuScenes (six). We find that the created rigs conform with AV conventions in several ways, such as distributing views around the ego-vehicle and using varying FoVs. While AV rigs, such as nuScenes, are well-engineered, our method suggests it may be possible to further improve them for specific tasks and environments. In expt. b, the CD learns to place the maximum number of cameras (six) on the rig since there is no penalty for placing additional cameras. However, in many cases, AV companies may want to reduce rig cost and inference time by using fewer cameras. Different camera rigs may also be better suited to different AV scenarios. We test whether the CD can take both of these considerations into account in expt. c by only placing cars in front of the ego-vehicle and enforcing a penalty to the reward each time an additional camera is added to the rig. As a result, we find that the CD places fewer cameras and places them facing forward, as shown in Fig. 7. This result demonstrates that our approach can be used to build resource limited imaging systems that are well suited for specific test scenarios. **Strategy #1 - Camera Placement:** Across experiments, we observe that the CD consistently learned two behaviors that lead to increased performance: (1) maximize camera height to 0.5 m above the ego-vehicle, and (2) reduce camera pitch to \(-20^{\circ}\). Maximizing camera height reduces the number of occlusions, thus leading to more ground truth pixels, and potentially incentivizing this behavior. However, we note the average number of 3D bounding box labels across both test sets is the same, suggesting occlusions do not incentivize higher camera placement and BEV segmentation performance is naturally improved with higher camera positions. The negative pitch could mean the CD has learned to prioritize detecting nearby cars, perhaps because the perception model has higher confidence of those predictions. We also observe that all vehicles in the scene are still visible with a \(-20^{\circ}\) pitch, and only the sky is cropped, thus the CD reduces the number of uninformative pixels, while maximizing the number of pixels on the road. Finally, we find that two front-facing cameras are placed at the rear of the vehicle in expt. b. We ablate this placement by re-training the PM with both cameras moved to the front. IoU is 5% better when the cameras are in the back, perhaps since, by placing the front-facing cameras at the back, both the front and sides of the ego-vehicle are visible in captured images. **Strategy #2 - FoV vs Object Resolution:** In expt. a, we found the FoV was always maximized by the CD, which makes sense because it allows the CD to obtain higher reward when more vehicles in the scene are visible. In expt. b, the FoV of the target rig varies between \(85^{\circ}\) and \(120^{\circ}\), suggesting when all of the scene is visible (as is the case in expt. b due to the CD learning to distribute the camera yaws in all directions), FoV is less important or that the CD may have learned a tradeoff between FoV and object resolution. **Limitations:** We demonstrate the CD on a limited number of scenarios within CARLA and focus only on the task of BEV segmentation of vehicles. In the future, our approach can be applied to more scenarios, tasks, and object classes. In addition, our experiments are done in simulation only. That said, our method has a direct path to real-world use. It can be used by AV companies to design rigs using their own simulator and requirements; those rigs can then be deployed on test cars to collect data. As AV simulators improve, we expect any gap in rig utility in sim vs real to fall. ## 5 Conclusion Our paper proposes a novel method to co-design camera configurations with perception models (PMs) for task-specific applications. We define a context-free grammar (CFG) that serves as a search space and theoretical framework for imaging system design. We then propose a camera designer (CD) that uses reinforcement learning to co-learn a camera configuration and PM for the proposed task by transforming the CFG into a state-action space. The PM is jointly trained with the CD and predicts the task output, which is used to compute the PM loss and reward for the CD to propose better candidate camera configurations. We demonstrate our method for co-design by applying it to (1) depth estimation using stereo cues, and (2) optimizing camera rigs for autonomous vehicle perception. We show in both cases that CD and PM can be learned together. Our co-design framework shows that camera configurations and perception models are closely linked and task-specific optimal designs that outperform human designs can be searched for computationally. **Acknowledgements:** We would like to thank Siddharth Somasundaram for his diligent proofreading of the paper. KT was supported by the SMART Contract IARPA Grant #2021-20111000004. We would also like to thank Systems & Technology Research (STR).
2310.20320
Theory of Mind in Large Language Models: Examining Performance of 11 State-of-the-Art models vs. Children Aged 7-10 on Advanced Tests
To what degree should we ascribe cognitive capacities to Large Language Models (LLMs), such as the ability to reason about intentions and beliefs known as Theory of Mind (ToM)? Here we add to this emerging debate by (i) testing 11 base- and instruction-tuned LLMs on capabilities relevant to ToM beyond the dominant false-belief paradigm, including non-literal language usage and recursive intentionality; (ii) using newly rewritten versions of standardized tests to gauge LLMs' robustness; (iii) prompting and scoring for open besides closed questions; and (iv) benchmarking LLM performance against that of children aged 7-10 on the same tasks. We find that instruction-tuned LLMs from the GPT family outperform other models, and often also children. Base-LLMs are mostly unable to solve ToM tasks, even with specialized prompting. We suggest that the interlinked evolution and development of language and ToM may help explain what instruction-tuning adds: rewarding cooperative communication that takes into account interlocutor and context. We conclude by arguing for a nuanced perspective on ToM in LLMs.
Max J. van Duijn, Bram M. A. van Dijk, Tom Kouwenhoven, Werner de Valk, Marco R. Spruit, Peter van der Putten
2023-10-31T09:55:07Z
http://arxiv.org/abs/2310.20320v1
Theory of Mind in Large Language Models: Examining Performance of 11 State-of-the-Art models vs. Children Aged 7-10 on Advanced Tests ###### Abstract To what degree should we ascribe cognitive capacities to Large Language Models (LLMs), such as the ability to reason about intentions and beliefs known as Theory of Mind (ToM)? Here we add to this emerging debate by (i) testing 11 base- and instruction-tuned LLMs on capabilities relevant to ToM beyond the dominant false-belief paradigm, including non-literal language usage and recursive intentionality; (ii) using newly rewritten versions of standardized tests to gauge LLMs' robustness; (iii) prompting and scoring for open besides closed questions; and (iv) benchmarking LLM performance against that of children aged 7-10 on the same tasks. We find that instruction-tuned LLMs from the GPT family outperform other models, and often also children. Base-LLMs are mostly unable to solve ToM tasks, even with specialized prompting. We suggest that the interlinked evolution and development of language and ToM may help explain what instruction-tuning adds: rewarding cooperative communication that takes into account interlocutor and context. We conclude by arguing for a nuanced perspective on ToM in LLMs. ## 1 Introduction Machines that can think like us have always triggered our imagination. Contemplation of such machines can be traced as far back as antiquity Liveley and Thomas (2020), and peaked with the advent of all kinds of 'automata' in the early days of the Industrial Revolution Voskuhl (2013) before settling in computer science from the 1950s Turing (1950). Currently people around the world can interact with powerful chatbots driven by Large Language Models (LLMs), such as OpenAI's ChatGPT OpenAI (2023), and wonder to what degree such systems are capable of thought. LLMs are large-scale deep neural networks, trained on massive amounts of text from the web. They are vastly complex systems: even if all details about their architecture, training data, and optional fine-tuning procedures are known (which is currently not the case for the most competitive models), it is very difficult to oversee their capabilities and predict how they will perform on a variety of tasks. Researchers from linguistics Manning et al. (2020), psychology Binz and Schulz (2023); Kosinski (2023); Webb et al. (2023), psychiatry Kjell et al. (2023), epistemology Sileo and Lernould (2023), logic Creswell et al. (2022), and other fields, have therefore started to study LLMs as new, 'alien' entities, with their own sort of intelligence, that needs to be probed with experiments, an endeavour recently described as'machine psychology' Hagendorff (2023). This not only yields knowledge about what LLMs are capable of, but also provides a unique opportunity to shed new light on questions surrounding our own intelligence Dillion et al. (2023); Binz and Schulz (2023). Here we focus on attempts to determine to what degree LLMs demonstrate a capacity for Theory of Mind (ToM), defined as the ability to work with beliefs, intentions, desires, and other mental states, to anticipate and explain behaviour in social settings Opperly (2010). We first address the question **how LLMs perform** on standardized, language-based tasks used to assess ToM capabilities in humans. We extend existing work in this area, surveyed in Section 2, in four ways: by (i) testing 11 models (see Table 1) for a broader suite of capabilities relevant to ToM beyond just the dominant false-belief paradigm, including non-literal language understanding and recursive intentionality (A _wants_ B to _believe_ that C _intends_...); (ii) using newly written versions of standardized tests with varying degrees of deviation from the originals; (iii) including open questions besides closed ones; and (iv) benchmarking LLM performance against that of children aged 7-8 (n=37) and 9-10 (n=36) on the same tasks. Section 3 contains details of our test procedures for both children and LLMs. After reporting the results in Section 4, we turn to the question **how variation in performance of the LLMs we tested can be explained** in Section 5. We conclude by placing our findings in the broader context of strong links between language and ToM in human development and evolution, and tentatively interpret what it means for an LLM to pass (or fail) ToM tests. We are aware of issues regarding LLM training and deployment, for example regarding the biases they inherit Lucy and Bamman (2021); Bender et al. (2021), problems for educators Sparrow (2022), and ethical concerns in obtaining human feedback Perrigo (2023). Ongoing reflection on the use of LLMs is necessary, but outside the scope of this paper. ## 2 Background ### Large Language Models The field of Natural Language Processing (NLP) has been revolutionized by the advent of Transformer models Vaswani et al. (2017); Devlin et al. (2019), deep neural networks that can induce language structures through self-supervised learning. During training, such models iteratively predict masked words from context in large sets of natural language data. They improve at this task by building representations of the many morphological, lexical, and syntactic rules governing human language production and understanding Manning et al. (2020); Rogers et al. (2021); Grand et al. (2022). Models exclusively trained through such self-supervision constitute what we refer to as 'base-LLMs' in this paper. Base-LLMs can generate natural language when prompted with completion queries ('A mouse is an...'). They can also be leveraged successfully for an array of other challenges, such as question-answering and translation, which often requires task-specific fine-tuning or prompting with specific examples, known as few-shot-learning Brown et al. (2020). This makes them different from a new generation of LLMs that we refer to as 'instruct-LLMs' in this paper, and to which the currently most competitive models belong. In instruction-tuning, various forms of human feedback are collected, such as ranking most suitable responses, which then forms the reward-signal for further aligning these models to human preferences through reinforcement learning Ouyang et al. (2022). The resulting LLMs can be prompted with natural language in the form of instructions to perform a wide variety of tasks directly, amounting to zero-shot learning Wei et al. (2022). A key realization is thus that LLMs are given either no explicitly labelled data at all, or, in the case of instruct-LLMs, data with human labels pertaining to relatively general aspects of communicative interaction. As such they are part of a completely different paradigm than earlier language models that were trained on, for example, data sets of human-annotated language structures (e.g. Nivre et al., 2016). This means that when LLMs are capable of such tasks as solving co-reference relationships or identifying word classes Manning et al. (2020), this arises as an _emergent_ property of the model's architecture and training on different objectives. Given that such emergent linguistic capabilities have been observed Reif et al. (2019); Grand et al. (2022), it is a legitimate empirical question which other capacities LLMs may have acquired as 'by-catch'. ### Theory of Mind in Humans and LLMs ToM, also known as'mindreading', is classically defined as the capacity to attribute mental states to others (and oneself), in order to explain and anticipate behaviour. The concept goes back to research in ethology in which Premack and Woodruff (1978) famously studied chimpanzees' abilities to anticipate behaviour of caretakers. When focus shifted to ToM in humans, tests were developed that present a scenario in which a character behaves according to its _false beliefs_ about a situation, and not according to the reality of the situation itself--which a successful participant, having the benefit of spectator-sight, can work out (see Section 3.1). Initial consensus that children could pass versions of this test from the age of 4 was followed by scepticism about additional abilities it presumed, including language skills and executive functioning, which led to the development of simplified false-belief tests based on eye-gaze that even 15 month-olds were found to 'pass' Onishi and Bailargeon (2005). While this line of research also met important criticism (for a review see Barone et al., 2019), it highlights two key distinctions in debate from the past decades: implicit-behavioural versus explicit-representational and innate versus learned components of ToM. Some researchers see results from eye-gaze paradigms as evidence for a native or very early developing capacity for belief-attribution in humans (Carruthers, 2013) and hold that performance on more complex tests is initially'masked' by a lack of expressive skills (cf. also Fodor, 1992). Others have attempted to explain eye-gaze results in terms of lower-level cognitive mechanisms (Heyes, 2014) and argued that the capacity for belief-attribution itself develops gradually in interaction with more general social, linguistic, and narrative competencies (Heyes and Frith, 2014; Milligan et al., 2007; Hutto, 2008). Two-systems approaches (Aperly, 2010) essentially reconcile both sides by positing that our mindreading capacity encompasses both a basic, fast, and early developing component and a more advanced and flexible component that develops later. In computational cognitive research, a variety of approaches to modelling ToM have been proposed (e.g. Baker and Saxe, 2011; Arslan et al., 2017). More recently neural agents (Rabinowitz et al., 2018) have been implemented, along with an increasing number of deep-learning paradigms aimed at testing first- and second-order ToM via question-answering. Initially this was done with recurrent memory networks (Grant et al., 2017; Nematzadeh et al., 2018) using data sets of classic false-belief tests from psychology, but after issues surfaced with simple heuristics for solving such tasks, scenarios were made more varied and challenging (Le et al., 2019). From the inception of BERT as one of the first LLMs (Devlin et al., 2019), we have seen roughly two approaches for testing ToM in LLMs: many different ToM scenarios integrated in large benchmark suites (e.g. Sap et al., 2022; Srivastava et al., 2023; Sileo and Lernould, 2023; Ma et al., 2023; Shapira et al., 2023), and studies that modified standardized ToM tests as used in developmental and clinical research for prompting LLMs (e.g. Kosinski, 2023; Ullman, 2023; Bubeck et al., 2023; Brunet-Gouet et al., 2023; Chowdhery et al., 2022; Moghaddam and Honey, 2023; Marchetti et al., 2023). This paper adds to the latter tradition in four respects, as listed in the introduction. ## 3 Methodology Here we describe our tasks and procedures for testing LLMs and children; all code, materials, and data are on OSF: [https://shorturl.at/FQR34](https://shorturl.at/FQR34). ### ToM Tests **Sally-Anne test, first-order (SA1)** -- The Sally-Anne test (Wimmer and Perner, 1983; Baron-Cohen et al., 1985) is a classic first-order false belief test. It relies on a narrative in which Sally and Anne stand behind a table with a box and a basket on it. When Anne is still present, Sally puts a ball in her box. When Sally leaves, Anne retrieves the ball from the box and puts it in her own basket. The story ends when Sally returns and the participant is asked the experimental question 'Where will Sally look for the ball?' The correct answer is that she will look in her box. We followed up by asking a motivation question, 'Why?', to prompt an explanation to the effect of'she (falsely) believes the object is where she left it'. **Sally-Anne test, second-order (SA2)** -- While SA1 targets the participant's judgement of what a character _believes_ about the location of an unexpectedly displaced object, in SA2 the participant needs to judge what a character _believes_ that _another character believes_ about the location of an ice-cream truck (Perner and Wimmer, 1985). Sally and Anne are in a park this time, where an ice-cream man is positioned next to the fountain. Anne runs home to get her wallet just while the ice-cream man decides to move his truck to the swings. He tells Sally about this, but unknown to her, he meets Anne on the way and tells her too. Sally then runs after Anne, and finds her mother at home, who says that Anne picked up the wallet and went to buy ice cream. The experimental question now is 'Where does Sally think Anne went to buy ice cream?', with as correct answer 'to the fountain', also followed up with 'Why?', to prompt an explanation to the effect of 'Sally doesn't know that the ice-cream man told Anne that he was moving to the swings'. **Strange Stories test (SS)** -- The Strange Stories test (Happe, 1994; Kaland et al., 2005) depicts seven social situations with non-literal language use that can easily be misinterpreted, but causes no problems to typically developed adults. To understand the situations, subjects must infer the characters' intentions, applying ToM. For example, in one of the items a girl wants a rabbit for Christmas. When she opens her present, wrapped in a big enough box, it turns out that she received a pile of books. She says that she is really happy with her gift, after which subjects are asked the experimental question 'Is what the girl says true?', with correct answer 'No'. They can motivate their answer after the question 'Why does she say this?', with as correct answer 'to avoid her parents' feelings being hurt'. Items increase in difficulty and cover a lie, pretend-play scenario, practical joke, white lie (example above), misunderstanding, sarcasm, and double bluff. **Imposing Memory test (IM)** -- The Imposing Memory test was originally developed by Kinderman et al. (1998), but the test has been revised several times; we rely on an unpublished version created by Anneke Haddad and Robin Dunbar (van Duijn, 2016), originally for adolescents, which we adapted thoroughly to make it suitable for children aged 7-10. Our version features two different stories, followed by true/false questions, 10 of which are 'intentionality' and 12 are'memory' questions. For instance, in one story Sam has just moved to a new town. He asks one of his new classmates, Helen, where he can buy post stamps for a birthday card for his granny. When Helen initially sends him to the wrong location, Sam wonders whether she was playing a prank on him or just got confused about the whereabouts of the shop herself. He goes and asks another classmate, Pete, for help. As in the original IM, the intentionality questions involve reasoning about different levels of recursively embedded mental states (e.g., at third-level: 'Helen _thought_ Sam _did not believe_ that she _know_ the location of the store that sells post stamps'), whereas the memory questions require just remembering facts presented in the story (e.g., to match third-level intentionality questions, three elements from the story are combined: 'Sam was looking for a store where they sell post stamps. He told Pete that he had asked Helen about this'). ### Scoring Test Answers Test scores for both children and LLMs were determined in the following way. For each of the SA1 and SA2 items, as well as for the seven SS items, a correct answer to the experimental question yielded 1 point. These answers were discrete and thus easy to assess ('box', 'fountain', 'no', etc.). For the motivation question a consensus score was obtained from two expert raters, on a range from 0-2, with 0 meaning a missing, irrelevant, or wrong motivation, 1 meaning a partly appropriate motivation, and 2 meaning a completely appropriate motivation that fully explained why the character in each scenario did or said something, or had a mental or emotional mind state. Thus, the maximum score for the SA1, SA2, and SS was 3 points per item, which were averaged to obtain a score between 0 and 1. For each correct answer to a true/false question in the IM, 1 point was given. All scores and ratings can be found on OSF. ### Deviations We tested the LLMs on the original SA and SS scenarios, but also on manually created _deviations_ that increasingly stray from their original formulations, to prevent LLMs from leveraging heuristics and memorizing relevant patterns from the training data. Thus, deviations probe the degree to which performance on ToM tests in LLMs generalizes. Deviation 0 was always the original test scenario (likely present in the training data); deviation 1 was a superficial variation on the original with only e.g., objects and names changed (similar to Kosinski (2023)), whereas deviation 2 was a completely new scenario where only the ToM-phenomenon at issue was kept constant (e.g.'second-order false belief' or 'irony'). Since our adaptation of the IM test has hitherto not been used or published, we did not include deviations for this test. ### Test Procedures for LLMs We leveraged 11 state-of-the-art LLMs: 4 base-LLMs and 7 instruct-LLMs (see Table 1). Inference parameters were set such that their output was as deterministic as possible (i.e. a temperature \(\approx\) zero or zero where possible) improving reproducibility. Each inference was done independently to avoid in-context learning or memory leakage between questions. This means that for each question, the prompt repeated the following general structure: [_instruction_] + [_test scenario_] + [_question_]. Instruct-LLMs were prompted in a question-answering format that stayed as close as possible to the questionnaires given to children, without any further custom prompting or provision of examples. Instructions were also similar to those given to children (e.g. 'You will be asked a question. Please respond to it as accurately as possible without using many words.'). The 'Why'-questions in SA1 and SA2 were created by inserting the experimental question and answer the LLM gave into the prompt: [_instruction_] + [_test scenario_] + [_experimental question_] + [_LLM answer_] +[_'Why?'_]. This was not necessary for SS, given that experimental and motivation questions could be answered independently. For base-LLMs, known to continue prompts rather than follow instructions, staying this close to the children's questionnaires was not feasible. For the SA and SS we therefore fed base-LLMs the scenario as described before, but formulated the questions as text-completion exercises (e.g. 'Sally will look for the ball in the '). Additionally, when creating the motivation questions for SA1 and SA2, we inserted the _correct_ answer to the experimental question, instead of the LLM's answer. This was because base-LLMs so often derailed in their output that the method described for instruct-LLMs did not yield sensible prompts. Base-LLMs thus had an advantage here over children and instruct-LLMs, who were potentially providing a motivation following up on an incorrect answer they gave to the experimental question. For the closed questions in the IM we attempted to streamline the output of base-LLMs by including two example continuations in the desired answer format. These examples were based on trivial information we added to the scenarios, unrelated to the actual experimental questions. For example: 'Helen: I wear a blue jumper today. This is [incorrect]', where it was added in the story that Helen wears a green jumper. This pushed nearly all base-LLM responses towards starting with '[correct]' or '[incorrect]', which we then assessed as answers to the true/false questions. We considered a similar prompt structure for SA and SS, amounting to adopting few-shot learning for base-LLMs throughout (Brown et al., 2020), but given that reformulating questions as text-completion exercises was by itself effective to get the desired output format, we refrained from inserting further differences from how instruct-LLMs are prompted. It is important to note that our prompts were in general not optimized for maximal test performance, but rather designed to stay as uniform and close to the way children were tested as possible, enabling a fair comparison among LLMs and with child performance. ### Test Procedures for Children Children were recruited from one Dutch and one international school in the South-West of the Netherlands: 37 children in the younger group (7-8y) and 36 children in the older group (9-10y). Children were administered digital versions of the SA and SS for the younger group, and of the IM for the older group, which they completed individually on tablets or PCs equipped with a touch screen. Test scenarios and questions were presented in a self-paced text format and all SA and SS questions were followed by an open text field in which they had to type their answer. As the IM features long scenarios, voice-overs of the text were included to alleviate reading fatigue. Here children had to answer by pressing yes/no after each question. To reduce memory bottlenecks, accompanying drawings were inserted (see OSF) and navigating back and forth throughout the tests was enabled. Informed consent for each child was obtained from caretakers, and the study was approved by the Leiden University Science Ethics Committee (ref. no. 2021-18). Test answers were evaluated and scored parallel to the approach for LLMs (Section 3.2). ## 4 Results ### Sally-Anne Overall performance on SA1 versus SA2 is given in Figure 1, left column. Most base-LLMs perform above child level on first-order ToM (BLOOM, Davinci, LLaMA-30B) but fall at or or below child level on second-order ToM. A similar pattern is visible for instruct-LLMs: most models perform well above child level on first-order (GPT-4, GPT-3.5, PaLM2-chat, PaLM2), but not on second-order ToM. Exceptions are GPT-4 and GPT-3.5: while degrading on second-order, they remain above child level. For both base- and instruct-LLMs, smaller models tend to perform worse (Falcon-7B, Falcon-7B-I, FLAN-T5) with GPT-3's structurally low scores as striking exception. This is inconsistent with results reported by (Kosinski, 2023) for GPT-3, which is probably due to the fact that Kosinski applied a text-completion approach whereas we \begin{table} \begin{tabular}{c c c} \hline \hline **Base-LLMs** & **Source** & **Size** \\ \hline Falcon & Penedo et al. (2023) & 7B \\ LLaMA & Touvron et al. (2023) & 30B \\ GPT-davinci & Brown et al. (2020) & 175B \\ BLOOM & Scao et al. (2022) & 176B \\ \hline **Instruct-LLMs** & ” & ” \\ \hline Falcon-instruct & Penedo et al. (2023) & 7B \\ Flat-T5 & Chung et al. (2022) & 11B \\ GPT-3 & & \\ (text-davinci-003) & Ouyang et al. (2022) & 175B \\ GPT-3.5-turbo & Ouyang et al. (2022) & 175B \\ PaLM2 & Anil et al. (2023) & 175-340B \\ PaLM2-chat & Anil et al. (2023) & 175-340B \\ GPT-4 & OpenAI (2023) & \textgreater{}340B \\ \hline \hline \end{tabular} \end{table} Table 1: LLMs used in this study. Model sizes are undisclosed for GPT-4 and for PaLM2 and PaLM2-chat, thus we base ourselves on secondary sources for estimations; Knight (2023) and Elias (2023), respectively. prompted GPT-3 with open questions. When we consider the performance on SA1 and SA2 over deviations (middle and right columns in Figure 1), we see once more that almost all LLMs struggle with second-order ToM, since performance decreases already on deviation 0 (i.e. the original test scenario), except for GPT-3.5 and GPT-4. Yet, it is the _combination_ of second-order ToM and deviation 2 that pushes also GPT-3.5 and GPT-4 substantially below child levels, except for Falcon-7B, although the chat-optimized version of this model (Falcon-7B-I) fails on all second-order questions. ### Strange Stories General performance on SS is given in Figure 2, left column. Whereas child performance declines as items become more complex (from 1 to 7; see Section 3.1), this is overall less the case for LLM performance. As a result, all models surpass child level at some point, except for the smallest model, Falcon-7B. All base-LLMs score below child level on most items but perform above child level the most difficult ones, except Falcon-7B. For instruct-LLMs, we see that GPT-4 approaches perfect scores throughout. GPT-3 and GPT-3.5 perform at or close to child level on item 1, after which their performance somewhat declines, while staying well above child level. Other instruct-LLMs show a mixed picture: PaLM2-chat and FLAN-T5 surpass child level earlier than PaLM2. Interestingly, smaller FLAN-T5 outperforms large PaLM and PaLM2-chat on more difficult items. Falcon-7B-I, as smallest instruct-LLM, performs overall worst. If performance is plotted over deviations (right column in Figure 2) we see little impact on most base-LLMs. For instruct-LLMs, it is striking that deviation levels have almost no effect on the larger models (GPT-4, PaLM2, PaLM2-chat, GPT-3, GPT-3.5), but do more dramatically lower performance of smaller models (FLAN-T5, Falcon-7B-I). In sum, base-LLMs perform below child level, except for the most complex items. Several large instruct-LLMs match or surpass child level throughout, others only for more complex items. Unlike for SA, deviation levels seem to have little negative impact. ### Imposing Memory The classical finding for the IM test is that error rates go up significantly for questions involving higher levels of recursive intentionality, but not for memory questions on matched levels of complexity, suggesting a limit to the capacity for recursive ToM specifically (Stiller and Dunbar, 2007).1 We verified this for our child data (n=36) with two mixed linear models for memory and intentional questions with random intercepts. We included five predictors that were contrast-coded such that each predictor indicated the difference in average performance with the previous level. For intentional questions, only the difference between level two and one was Figure 1: Performance on Sally-Anne tests for base-LLMs (top row) and instruct-LLMs (bottom row). Left column depicts performance on first- and second-order ToM (i.e. SA1 vs. SA2), averaged over the original and rewritten test versions. Middle and left columns depict performance for SA1 and SA2 over levels of deviation from the original test (0, 1, and 2; see Section 3.3). Dashed lines indicate child performance (n=37, age 7-8 years). significant (\(\beta=-0.222,p<.05\)), marking a cut-off point after which performance remained consistently low. For memory questions, performance remained high across all levels (\(>.85\)), except for level four, where scores were significantly lower than at level three (\(\beta=-0.292,p<.00\)), but went up again at level five (\(\beta=0.208,p<.00\)). Thus, in line with earlier work, we find a cut-off point after which scores on intentionality questions remained consistently low, compared to scores on matched memory questions. We have no clear explanation for the dip in performance on memory questions at level four, but observe that it is driven by low scores on only one specific question out of a total of four for this level, which children may have found confusing. In Figure 3 we see that all base-LLMs perform below child level, in general and on both intentionality and memory questions, and there is little variation in performance, except that larger base-LLMs (BLOOM, GPT-davinci) improve on higher levels of recursion. Regarding instruct-LLMs, we see largely the same picture, as they almost all perform below child level, in general and on both types of questions. The exception is GPT-4, which performs consistently well on all levels and stays above child level after second-order intentionality. For the difference between memory and intentional questions, instruct-LLMs perform better on easier memory questions, and drop towards the end, while on intentional questions, they already start lower and stay relatively constant. Lastly, it is remarkable that FLAN-T5, as one of the smallest instruct-LLMs, overall increases performance as recursion levels go up, and ends at child level. For GPT-3.5, which performs worst of all instruct-LLMs on this task, we see the exact opposite. ### Notes on Child Performance It can be observed that performance for SA was overall low compared to what could be expected from children aged 7-8 years: \(\bar{x}=0.45\) for SA1 and \(\bar{x}=0.225\) for SA2. We have two complementary explanations for this. Firstly, as discussed in Section 3.5, children had to read the tests on a screen, after which they had to type answers in open text fields. This is a challenging task by itself that relies on additional skills including language proficiency, conscientiousness, digital literacy, and more. Secondly, whereas 'passing' originally only means that a child can work out where Sally will look (for the ball, or for Anne on her way to buy ice cream), we also asked for a motivation, which makes the test more demanding. For the SS, completed by the same group of children, we see the expected pattern that scores show a downward tendency as test items become increasingly difficult. The older group, aged 9-10, completed the IM. As discussed in Section 4.3, scores resonate with earlier work. Given that we see child performance not as the central phenomenon under observation in this paper, but rather as a reference for LLM performance, further discussion is outside our scope. ## 5 Discussion Summing up the results for the Sally-Anne tests, while it is less surprising that base-LLMs and smaller instruct-LLMs struggle with increasing test complexity and deviations, it is striking that second-order ToM immediately perturbs some Figure 2: Performance on Strange Stories for base-LLMs (top row) and instruct-LLMs (bottom row). Left column shows overall performance, averaged over levels of deviation from the original test. Right column shows performance over deviation levels, averaged over items. Dashed lines indicate child performance (n=37, 7-8y). large instruct-LLMs (e.g. PaLM2-chat), and that adding deviations from the original test formulations pushed performance of even the most competitive models down (e.g. GPT-4, GPT-3.5). This initially suggests that performance on ToM tasks does not generalize well beyond a few standard contexts in LLMs, in line with earlier work (Sap et al., 2022; Shapira et al., 2023; Ullman, 2023). For the Strange Stories we saw that base-LLMs perform generally below child level. Most instruct-LLMs perform close to or above child level, particularly as items become more complex and child performance drops much more dramatically than LLM performance. Levels of deviation from the original test formulation seem to have made almost no impact for the SS, suggesting that the capacity to deal with non-literal language targeted by the Strange Stories test _does_ generalize to novel contexts. We conclude that instruct-LLMs are quite capable at interpreting non-literal language, a skill that in humans involves ToM. Since the training data of LLMs includes numerous books and fora, which are typically rich in irony, misunderstanding, jokes, sarcasm, and similar figures of speech, we tentatively suggest that LLMs are in general well-equipped to handle the sort of scenarios covered in the Strange Stories. This should in theory include base-LLMs, but it could be that their knowledge does not surface due to the test format, even after specialized prompting. Going one step further, we hypothesize that Sally-Ann is generally harder for LLMs given that this test relies less on a very specific sort of advanced language ability, but more on a type of behaviourally-situated reasoning that LLMs have limited access to during training (see also Mahowald et al., 2023). The Imposing Memory test was the most challenging for both base- and instruct-LLMs. Since our version of this test was never published before, it constitutes another robustness test, which only GPT-4 as largest instruct-LLM seems to pass well. The gap between base- and instruct-LLMs is best summarized in Figure 4. Here we see that no base-LLM achieves child level: all LLMs approaching or exceeding child performance are larger instruct-LLMs. Our adapted prompts and insertion of correct answers for motivation questions did not make a difference. We suggest that another issue for base-LLMs, besides the prompt format, was prompt length. This was highest for IM, which can explain why they struggled most with this test. Prompt length, in relation to the models' varying context window sizes and ability to engage in what Hagendorff et al. (2023) call chain-of-thought reasoning, merits further research (see also Liu et al., 2023). We tested whether there was a difference between model performance on closed versus open questions across all three tasks, but found no signal: the models that struggled with closed questions were also those that performed low on open questions (for more details see Figure A on OSF). Evidence is emerging that most LLM capacities are learned during self-supervised pre-training (Gudibande et al., 2023; Ye et al., 2023), which suggests that base-LLMs are essentially 'complete' models. Yet instruction-tuning, even in small amounts (Zhou et al., 2023), adds adherence to the desired interaction format and teaches LLMs, as it were, to apply their knowledge appropriately. We see a parallel between instruction-tuning and Figure 3: Performance on Imposing Memory test for base-LLMs (top row) and instruct-LLMs (bottom row). Left column depicts overall performance over five levels of recursion, averaged over deviations. Middle and left columns depict performance for Memory and Intentional questions. Dashed lines indicate child performance (n=36, 9-10y). the role for _rewarding cooperative communication_ in human evolution and development. It has been argued extensively that human communication is fundamentally cooperative in that it relies on a basic ability and willingness to engage in mental coordination (e.g Verhagen, 2015; Grice, 1975). It is a key characteristic of the socio-cultural niche in which we evolved that, when growing up, we are constantly being rewarded for showing such willingness and cooperating with others to achieve successful communicative interactions Tomasello (2008). Reversely, if we do not, we are being punished, explicitly or implicitly via increasing social exclusion David-Barrett and Dunbar (2016). This brings us back to our context: instruction-tuning essentially rewards similar cooperative principles, but punishes the opposite, which may amount to an enhanced capacity for _coordinating with an interaction partner's perspective_, in humans and LLMs alike. This is reflected in performance on ToM tasks, which are banking on this capacity too. Finally, we do not claim that LLMs that performed well also have ToM in the way that humans have it. Validity of cognitive tests such as those used in ToM research is a general issue (e.g. van Duijn, 2016). Yet for humans ToM tests are validated 'quick probes': decades of research have shown that proficiency on such tests _correlates_ with an array of real-world social and cognitive abilities Beaudoin et al. (2020). For LLMs we are in a very early stage of figuring out what is entailed by profcon ToM tests: on the one hand it is impressive that some models show a degree of robust performance, without explicit training on ToM. On the other hand it remains an open question whether this amounts to any actual capacities in the social-cognitive domain, in which they are clearly very differently grounded (if at all) compared to humans. For future research we believe in the format of testing models that differ in other respects than just size, on a varied array of tasks, with multiple tests per test item, to gain further insight into the aspects that explain variability in performance. For this, more openness about architecture and training procedures of current and future LLMs is imperative. In addition, we believe to have contributed to the debate by benchmarking LLM results on child data, but more of this is needed. We had limited samples and age distributions, and tests were not presented in optimal ways (see Section 3.5). We emphasize that our results need to be seen within the time frame of late Spring 2023. The fast pace with which LLMs are currently released and, in some cases, updated, makes them a moving target. Moreover, there are indications that specific capacities of models from the GPT-family have declined over time, perhaps as a result of such updates; for example their ability to handle math problems and produce code Chen et al. (2023). Future studies need to address how such developments impact the capacities assessed in this paper. ## 6 Conclusion We have shown that a majority of recent LLMs operate below performance of children aged 7-10 on three standardized tests relevant to ToM. Yet those that are largest in terms of parameters, and most heavily instruction-tuned, surpass children, with GPT-4 well above all other models, including more recent competitors like PaLM2-chat and PaLM2 (see Figure 4). We have interpreted these findings by drawing a parallel between instruction-tuning and rewarding cooperative interaction in human evolution. We concede that researching the degree to which LLMs are capable of anything like thought in the human sense has only just begun, which leaves the field with exciting challenges ahead. ## Acknowledgements This research was financed by the Dutch Research Council NWO (VI.Veni.191C.051). We are grateful to the children and their caregivers and teachers for participating in our research, and we thank Li Kloostra, Lola Vandame, and three anonymous reviewers for their help and constructive feedback. Figure 4: Grand mean performance (stars) of all mean test scores (dots) for children and LLMs.
2309.13689
On the Difference of Atom-Bond Sum-Connectivity and Atom-Bond-Connectivity Indices
The atom-bond-connectivity (ABC) index is one of the well-investigated degree-based topological indices. The atom-bond sum-connectivity (ABS) index is a modified version of the ABC index, which was introduced recently. The primary goal of the present paper is to investigate the difference between the aforementioned two indices, namely $ABS-ABC$. It is shown that the difference $ABS-ABC$ is positive for all graphs of minimum degree at least $2$ as well as for all line graphs of those graphs of order at least $5$ that are different from the path and cycle graphs. By means of computer search, the difference $ABS-ABC$ is also calculated for all trees of order at most $15$.
Akbar Ali, Ivan Gutman, Izudin Redzepovic, Jaya Percival Mazorodze, Abeer M. Albalahi, Amjad E. Hamza
2023-09-24T16:41:54Z
http://arxiv.org/abs/2309.13689v1
# On the Difference of Atom-Bond Sum-Connectivity and Atom-Bond-Connectivity Indices ###### Abstract The atom-bond-connectivity (ABC) index is one of the well-investigated degree-based topological indices. The atom-bond sum-connectivity (ABS) index is a modified version of the ABC index, which was introduced recently. The primary goal of the present paper is to investigate the difference between the aforementioned two indices, namely \(ABS-ABC\). It is shown that the difference \(ABS-ABC\) is positive for all graphs of minimum degree at least 2 as well as for all line graphs of those graphs of order at least 5 that are different from the path and cycle graphs. By means of computer search, the difference \(ABS-ABC\) is also calculated for all trees of order at most 15. Introduction In this paper we consider finite simple graphs (i.e., graphs without directed, weighted, and multiple edges, and without self-loops). Let \(G\) be such a graph. In order to avoid trivialities, it will be assumed that \(G\) is connected. Its vertex set is \({\bf V}(G)\) and its edge set is \({\bf E}(G)\). The order and size of \(G\) are \(|{\bf V}(G)|=n\) and \(|{\bf E}(G)|=m\), respectively. By an \(n\)-vertex graph, we mean a graph of order \(n\). The degree \(d_{u}=d_{u}(G)\) of the vertex \(u\in{\bf V}(G)\) is the number of vertices adjacent to \(u\). The edge connecting the vertices \(u\) and \(v\) will be denoted by \(uv\). A vertex with degree one is known as a pendent vertex. For graph-theoretical terminology and notation used without being defined, we refer the readers to the books [8, 9, 27] In the early years of mathematical chemistry, Milan Randic invented a topological index [25] that eventually became one of the most successfully applied graph-based molecular structure descriptors [21, 22, 26]. It is nowadays called "_connectivity index_" or "_Randic index_" and is defined as \[R=R(G)=\sum_{uv\in{\bf E}(G)}\frac{1}{\sqrt{d_{u}\,d_{v}}}\,.\] Much later, Zhou and Trinajstic [28] proposed to consider the variant of the connectivity index, in which multiplication is replaced by summation, named "_sum-connectivity index_", defined as \[SC=SC(G)=\sum_{uv\in{\bf E}(G)}\frac{1}{\sqrt{d_{u}+d_{v}}}\,.\] The same authors examined the relations between \(R\) and \(SC\)[29]. In 1998, Estrada et al. [12] conceived another modification of the connectivity index, called "_atom-bond-connectivity index_", defined as \[ABC=ABC(G)=\sum_{uv\in{\bf E}(G)}\sqrt{\frac{d_{u}+d_{v}-2}{d_{u}\,d_{v}}}\,.\] This molecular descriptor differs from the original connectivity index by the expression \(d_{u}+d_{v}-2\), which is just the degree of the edge \(uv\) (= number of edges incident to \(uv\)). Soon it was established that the \(ABC\) index has valuable applicative properties [16]. Its mathematical features were also much investigated, see the recent papers [11, 14, 20], the review [3], and the references cited therein. Especially intriguing is the fact that the apparently simple problem of finding the connected \(n\)-vertex graph(s) with minimum \(ABC\) index remained unsolved for about a decade [18]. Quite recently, the sum-connectivity analogue of the \(ABC\) index was put forward, defined as \[ABS=ABS(G)=\sum_{uv\in{\bf E}(G)}\sqrt{\frac{d_{u}+d_{v}-2}{d_{u}+d_{v}}}\] and named "_atom-bond sum-connectivity index_" [4]. Until now, only a limited number of properties of the \(ABS\) index were determined. In [4], the authors determined graphs having the minimum/maximum values of the \(ABS\) index among all (i) general graphs (ii) (molecular) trees, with a fixed order; parallel results for the case of unicyclic graphs were obtained in the paper [5], where chemical applications of the \(ABS\) index were also reported. (The general \(ABS\) index corresponding to the general \(ABC\) index [6, 10, 13] was also proposed in [5]; besides, see [1, 2].) Alraqad et al. [7] addressed the problem of finding graphs attaining the minimum \(ABS\) index over the class of all trees having given order or/and a fixed number of pendent vertices. Additional detail about the known mathematical properties of the \(ABS\) index can be found in the recent papers [15, 19, 23, 24]. As well known, if a graph \(G\) has components \(G_{1}\) and \(G_{2}\), then \(ABC(G)=ABC(G_{1})+ABC(G_{2})\) and \(ABS(G)=ABS(G_{1})+ABS(G_{2})\). As a consequence of this, denoting by \(P_{2}\) the graph of order 2 and size 1, the following holds. (a) If \(G\) is any graph, and \(G^{+}\) is a graph whose components are \(G\), an arbitrary number of isolated vertices, and an arbitrary number of \(P_{2}\)-graphs, then \(ABC(G)=ABC(G^{+})\) and \(ABS(G)=ABS(G^{+})\). (b) if \(G^{++}\) is a graph whose components are \(G\), an arbitrary number of iso lated vertices, an arbitrary number of \(P_{2}\)-graphs, and an arbitrary number of cycles of arbitrary size, then \[ABC(G)-ABS(G)=ABC(G^{++})-ABS(G^{++}).\] In order to avoid these trivialities, in what follows we consider only connected graphs. An obvious question is how the two closely related molecular descriptors \(ABC\) and \(ABS\) are related. In this paper, we provide some answers to this question. More precisely, we prove that the difference \(ABS-ABC\) is positive for all graphs of minimum degree at least \(2\) as well as for all line graphs of those graphs of order at least \(5\) that are different from the path and cycle graphs. We also calculate the difference \(ABS-ABC\) for all trees of order at most \(15\) by utilizing computer software. ## 2 Main Results We start this section with a simple but notable result that if the minimum degree of a graph \(G\) is at least \(2\) then the ABS index of \(G\) cannot be lesser than the ABC index of \(G\). **Proposition 2.1**.: _Let \(G\) be a connected non-trivial graph of order \(n\), without pendent vertices. Then_ \[ABC(G)\leq ABS(G).\] _Equality holds if and only if \(G\cong C_{n}\), where \(C_{n}\) is the \(n\)-vertex cycle._ Proof.: For every edge \(uv\in E(G)\), note that \(d_{u}\,d_{v}\geq d_{u}+d_{v}\) with equality if and only if \(d_{u}=d_{v}=2\) because \(\min\{d_{u},d_{v}\}\geq 2\). If the order of a graph \(G\) is one or two, then the equality \(ABC(G)=ABS(G)=0\) holds in a trivial manner. **Proposition 2.2**.: _Let \(G\) be a connected graph possessing a vertex \(x\) of degree \(2\). Construct the graph \(G^{\star}\) by inserting a new vertex \(y\) on an edge incident to \(x\). Evidently, the degree of \(y\) is also \(2\). Then_ \[ABC(G)-ABS(G)=ABC(G^{\star})-ABS(G^{\star})\,. \tag{1}\] Proof.: Bearing in mind the way in which the graph \(G^{\star}\) was constructed, we see that \[ABC(G^{\star})=ABC(G)+\sqrt{\frac{d_{x}+d_{y}-2}{d_{x}\,d_{y}}}=ABC(G)+\frac{1} {\sqrt{2}}\] and \[ABS(G^{\star})=ABS(G)+\sqrt{\frac{d_{x}+d_{y}-2}{d_{x}+d_{y}}}=ABS(G)+\frac{ 1}{\sqrt{2}}\,.\] Proposition 2.2 implies that if there is a graph \(G\) of order \(n\), possessing a vertex of degree \(2\), for which \(ABC(G)-ABS(G)=\Theta\), then for any \(p\geq 1\) there exist graphs of order \(n+p\) with the same \(\Theta\)-value. The situation with graphs possessing pendent vertices is much less simple. In what follows we present our results pertaining to trees. By means of computer search we established the following. **Observation 2.3**.: _(a) All trees of order \(n\), \(3\leq n\leq 10\), have the property \(ABC>ABS\). (b) The smallest tree for which \(ABC<ABS\) is depicted in Fig. 1. For \(n=11\), this tree is unique satisfying \(ABC<ABS\). (c) For \(n=12,13,14\), and \(15\), there exist, respectively, \(6,31,134\), and \(564\) distinct \(n\)-vertex trees for which \(ABC<ABS\). (d) The tree depicted in Fig. 1 possess vertices of degree \(2\). Therefore, from Proposition 2.2 it follows that there exist \(n\)-vertex trees with property \(ABS>ABC\) for any \(n\geq 11\)._ **Observation 2.4**.: _No tree of order \(n\), \(3\leq n\leq 15\), has the property \(ABC=ABS\). However, there is a family of four \(15\)-vertex trees, shown in Fig. 2, whose \(ABC\)- and \(ABS\)-values are remarkably close. For each of these trees: \(ABC\approx 10.184232\) and \(ABS\approx 10.184135\)._ Next, we show that the inequality \(ABS>ABC\) is satisfied by a reasonably large class of graphs, namely by the line graphs. If \(G\) is the line graph of a connected \(n\)-vertex graph \(K\) such that \(2\leq n\leq 4\), then from the discussion made in the previous part of this section one can directly obtain the classes of graphs satisfying (i) \(ABS(G)>ABC(G)\), (ii) \(ABS(G)<ABC(G)\), (iii) \(ABS(G)=ABC(G)\). Consequently, we assume that \(n\geq 5\). **Theorem 2.5**.: _If \(G\) is the line graph of a connected \(n\)-vertex graph \(K\) such that \(n\geq 5\) and that \(K\not\in\{P_{n},C_{n}\}\), then \(ABS(G)>ABC(G)\)._ In order to prove Theorem 2.5, we need some preparations. Figure 1: The smallest tree for which \(ABC<ABS\). Figure 2: A family of trees with nearly equal \(ABC\)- and \(ABS\)-values. A decomposition of a graph \(G\) is a class \(\mathcal{S}_{G}\) of edge-disjoint subgraphs of \(G\) such that \(\cup_{S\in\mathcal{S}_{G}}\mathbf{E}(S)=\mathbf{E}(G)\). By a clique in a graph \(G\), we mean a maximal complete subgraph of \(G\). A branching vertex in a graph is a vertex of degree at least \(3\). By a pendent edge of a graph, we mean an edge whose one of the end-vertices is pendent and the other one is non-pendent. For \(r\geq 2\), a path \(u_{1}\cdots u_{r}\) in a graph is said to be pendent if \(\min\{d_{u_{1}},d_{u_{r}}\}=1\), \(\max\{d_{u_{1}},d_{u_{r}}\}\geq 3\), and \(d_{u_{i}}=2\) for \(2\leq i\leq r-1\). If \(P:u_{1}\cdots u_{r}\) is a pendent path in a graph with \(d_{u_{r}}\geq 3\), we say that \(P\) is attached with the vertex \(u_{r}\). Two pendent paths of a graph are said to be adjacent if they have a common (branching) vertex. A triangle of a graph \(G\) is said to be odd if there is a vertex of \(G\) adjacent to an odd number of its vertices. For the proof of Theorem 2.5 we need the following well-known result: Lemma 2.6: [17] _A graph \(G\) is the line graph of a graph if and only if the star graph of order \(4\) is not an induced subgraph of \(G\), and if two odd triangles have a common edge then the subgraph induced by their vertices is the complete graph of order \(4\)._ We can now start with the proof of Theorem 2.5. Proof of Theorem 2.5.: Since \(K\not\cong P_{n}\), the graph \(G\) has at least one cycle. If \(G\) is one of the two graphs \(H_{1},H_{2}\), depicted in Fig. 3, then one can directly verify that \(ABS>ABC\) holds. In what follows, we assume that \(G\not\in\{H_{1},H_{2}\}\). Consider the difference \[ABS(G)-ABC(G)=\sum_{uv\in\mathbf{E}(G)}\left(\sqrt{\frac{d_{u}+d_{v}-2}{d_{u} +d_{v}}}-\sqrt{\frac{d_{u}+d_{v}-2}{d_{u}\,d_{v}}}\right)\] Figure 3: The graphs \(H_{1}\) and \(H_{2}\) mentioned in the proof of Theorem 2.5. and define a function \(f\) of two variables \(x\) and \(y\) as \[f(x,y)=\sqrt{\frac{x+y-2}{x+y}}-\sqrt{\frac{x+y-2}{xy}}\] where \(y\geq x\geq 1\) and \(y\geq 2\). Note that the function \(f\) is strictly increasing (in both \(x\) and \(y\)). Also, if \(x\) and \(y\) are integers satisfying the inequalities \(y\geq x\geq 1\) and \(y\geq 2\), then the inequality \(f(x,y)<0\) holds if and only if \(x=1\). Thus, \[-0.129757\approx\frac{1}{\sqrt{3}}-\frac{1}{\sqrt{2}}=f(1,2)\leq f(1,y)<0\] for every \(y\geq 2\). Also, \[f(x,y)\geq f(2,3)=\sqrt{\frac{3}{5}}-\frac{1}{\sqrt{2}}\approx 0.0674899>f(2,2)=0\] for \(y\geq x\geq 2\) and \(y\geq 3\). Furthermore, we have \(f(1,2)+f(2,y)>0\) for every \(y\geq 5\). Thus, if either \(G\) has no pendent paths or every pendent path of \(G\) has length at least 2, which is attached with a vertex of degree at least 5, then \(ABS(G)-ABC(G)>0\). In the remaining proof, we assume that \(G\not\in\{H_{1},H_{2}\}\) and that \(G\) either has at least one pendent path of length 1 or it has at least one pendent path of length at least 2, which is attached with a vertex of degree 3 or 4. Let \(H^{\prime}\) be the graph depicted in Fig. 4, i.e., \(H^{\prime}\) is obtained from two disjoint graphs \(H_{1}\) and \(H\) by identifying their vertices \(z\) and \(z^{\prime}\). **Fact 1**.: _If \(G\cong H^{\prime}\), then the sum of the contributions of the edges of \(H_{1}\) in \(G\) to the difference \(ABS(G)-ABC(G)\) is positive._ It is a well-known fact that the line graph \(G\) can be decomposed into cliques, such that every edge of \(G\) lies on exactly one clique and every non-pendent vertex of \(G\) lies on exactly two cliques. Also, by Lemma 2.6, \(G\) contains no pair of adjacent pendent paths/edges and hence the number of pendent edges/paths of \(G\) is at most \(\lfloor\mathbf{E}(G)\rfloor/2\rfloor\). Bearing this in mind, we decompose \(G\) into connected subgraphs \(G_{1},\ldots,G_{k}\) in such a way that every \(G_{i}\) contains at most one pendent path of \(G\), such that: **(a)**: if \(G_{i}\) contains a pendent path of \(G\) of length 1 such that the branching vertex (in \(G\)) of the considered path has a neighbor of degree 2 in \(G\), then \(G_{i}\) is induced by the vertices of the mentioned path and the vertices adjacent to the branching vertex (in \(G\)) of the mentioned path (for an example, see \(G_{4}\) in Fig. 5(b)); **(b)**: if \(G_{i}\) has a pendent path of length at least 2 in \(G\) or if \(G_{i}\) contains a pendent path of \(G\) of length 1 such that the branching vertex (in \(G\)) of the considered path has no neighbor of degree 2 in \(G\), then \(G_{i}\) consists of the mentioned path together with exactly one additional edge incident with the branching vertex (in \(G\)) of the mentioned path (for an example, see Fig. 5). In order to complete the proof, it is enough to show that the sum of contributions of all edges of \(G_{i}\) (in \(G\)) to the difference \(ABS(G)-ABC(G)\) is positive. If a subgraph \(G_{i}\) of \(G\) contains no pendent vertex of \(G\) then certainly, the sum of contributions of all edges of \(G_{i}\) (in \(G\)) to the difference \(ABS(G)-ABC(G)\) is positive. Figure 4: The graphs \(H_{1}\), \(H\), and \(H^{\prime}\) mentioned in the proof of Theorem 2.5. **Case 1:** a subgraph \(G_{i}\) contains a pendent path of \(G\) of length \(1\), such that the branching vertex (in \(G\)) of the considered path has a neighbor of degree \(2\) in \(G\). Let \(P:v_{1}v_{2}\) be the pendent path of \(G\) contained in \(G_{i}\), where \(d_{v_{1}}(G)=1\) and \(d_{v_{2}}(G)\geq 3\). Note that every neighbor of \(v_{2}\) different from \(v_{1}\) in \(G\) has degree at least \(d_{v_{2}}(G)-1\) in \(G\) (by Lemma 2.6). Thus, \(d_{v_{2}}(G)=3\) in the case under consideration. Recall that \(G\not\in\{H_{1},H_{2}\}\) (see Fig. 3). Consequently, \(G_{i}\cong H_{1}\) and hence by Fact 1, the sum of contributions of all edges of \(G_{i}\) to the difference \(ABS(G)-ABC(G)\) is positive. **Case 2:** a subgraph \(G_{i}\) has a pendent path of \(G\) of length \(1\), such that the branching vertex (in \(G\)) of the considered path has no neighbor of degree \(2\) in \(G\). Note that \(G_{i}\) is a path of length \(2\) in this case. Let \(G_{i}:v_{1}v_{2}v^{\prime}\), where \(v_{1}v_{2}\) is a pendent path of \(G\), \(d_{v_{1}}(G)=1\), and \(d_{v_{2}}(G)\geq 3\). If \(d_{v_{2}}(G)\geq 4\), then the sum of contributions of all edges of \(G_{i}\) to the difference \(ABS(G)-ABC(G)\) is positive because \(d_{v^{\prime}}(G)\geq d_{v_{2}}(G)-1\) (by Lemma 2.6) and \(f(1,y)+(y-1,y)>0\) for every \(y\geq 4\). Next, assume that \(d_{v_{2}}(G)=3\). Since \(d_{v^{\prime}}(G)\geq 3\) in the considered case, the sum of contributions of all Figure 5: (a) A tree \(T\), its line graph \(L(T)\), and a decomposition of \(L(T)\) into three connected subgraphs \(G_{1},G_{2},G_{3}\). (b) A tree \(T\), its line graph \(L(T)\), and a decomposition of \(L(T)\) into four connected subgraphs \(G_{1},G_{2},G_{3},G_{4}\). edges of \(G_{i}\) to the difference \(ABS(G)-ABC(G)\) is again positive because \(f(1,3)+f(3,y)>0\) for all \(y\geq 3\). **Case 3:** a subgraph \(G_{i}\) has a pendent path of length at least \(2\) in \(G\). Note that \(G_{i}\) is itself a path. Let \(G_{i}:v_{1}v_{2}\cdots v_{r}v^{\prime}\), where \(v_{1}v_{2}\cdots v_{r}\) (\(r\geq 3\)) is a pendent path of \(G\), \(d_{v_{1}}(G)=1\), and \(d_{v_{r}}(G)\in\{3,4\}\), because \(G\) has no pendent path of length at least \(2\), which is attached with a vertex of degree at least \(5\) (see the paragraph appears right before the definition of \(H^{\prime}\) (before Fact 1)). **Subcase 3.1:**\(d_{v_{r}}(G)=3\). The vertex \(v^{\prime}\) has degree at least \(2\) (in \(G\)) and \(f(1,2)+f(2,3)+f(3,y)\geq f(1,2)+f(2,3)>0\) for \(y\geq 2\). Thus, the sum of contributions of all edges of \(G_{i}\) (in \(G\)) to the difference \(ABS(G)-ABC(G)\) is positive. **Subcase 3.2:**\(d_{v_{r}}(G)=4\). In this case, the vertex \(v^{\prime}\) has degree at least \(3\) (in \(G\)) and \(f(1,2)+f(2,4)+f(4,y)\geq f(1,2)+f(2,4)+f(3,4)>0\) for \(y\geq 3\). Thus, the sum of contributions of all edges of \(G_{i}\) (in \(G\)) to the difference \(ABS(G)-ABC(G)\) is again positive. This completes the proof of Theorem 2.5. Theorem 2.7: Let \(G\) be a connected graph of size \(m\). If the number of pendent vertices of \(G\) is at most \(\lfloor m/2\rfloor\) and the number of vertices of degree \(2\) in \(G\) is zero, then \[ABS(G)>ABC(G).\] Demonstration Proof: Consider the function \(f\) defined in the proof of Theorem 2.5. Here, \[-0.10939\approx\frac{1}{\sqrt{2}}-\sqrt{\frac{2}{3}}=f(1,3)\leq f(1,y)<0\] for every \(y\geq 3\). Also, \[f(x,y)\geq f(3,3)=\sqrt{\frac{2}{3}}-\frac{2}{3}\approx 0.14983\] for \(y\geq x\geq 3\). Let \(p\) denote the number of pendent vertices of \(G\). Then, \(m-p\geq p\). Now, by keeping in mind these observations, we have \[ABS(G)-ABC(G) =\sum_{uv\in\mathbf{E}(G);\,d_{u}=1}\left(\sqrt{\frac{d_{u}+d_{v}-2} {d_{u}+d_{v}}}-\sqrt{\frac{d_{u}+d_{v}-2}{d_{u}\,d_{v}}}\,\right)\] \[+\sum_{\begin{subarray}{c}uv\in\mathbf{E}(G);\\ \min\{d_{u},d_{v}\}\geq 3\end{subarray}}\left(\sqrt{\frac{d_{u}+d_{v}-2}{d_{u}+d_{v}}}- \sqrt{\frac{d_{u}+d_{v}-2}{d_{u}\,d_{v}}}\,\right)\] \[\geq\sum_{\begin{subarray}{c}uv\in\mathbf{E}(G);\\ d_{u}=1\end{subarray}}\left(\frac{1}{\sqrt{2}}-\sqrt{\frac{2}{3}}\right)+ \sum_{\begin{subarray}{c}uv\in\mathbf{E}(G);\\ \min\{d_{u},d_{v}\}\geq 3\end{subarray}}\left(\sqrt{\frac{2}{3}}-\frac{2}{3}\right)\] \[=p\left(\frac{1}{\sqrt{2}}-\sqrt{\frac{2}{3}}\right)+(m-p)\left( \sqrt{\frac{2}{3}}-\frac{2}{3}\right)\] \[\geq p\left(\frac{1}{\sqrt{2}}-\sqrt{\frac{2}{3}}\right)+p\left( \sqrt{\frac{2}{3}}-\frac{2}{3}\right)\] \[>0.\] **Theorem 2.8**.: _Let \(G\) be a connected graph of size \(m\) such that if \(v\in V(G)\) is a vertex of degree \(2\) then \(v\) has no neighbor of any of the degrees \(2\), \(3\), \(4\). If the number of pendent vertices of \(G\) is at most \(\lfloor m/2\rfloor\), then_ \[ABS(G)>ABC(G).\] Proof.: Consider the function \(f\) defined in the proof of Theorem 2.5. Recall that \[-0.129757\approx\frac{1}{\sqrt{3}}-\frac{1}{\sqrt{2}}=f(1,2)\leq f(1,y)<0\] for every \(y\geq 2\). Also, \(f(x,y)\geq f(2,5)=\sqrt{\frac{5}{7}}-\frac{1}{\sqrt{2}}\approx 0.138047\) for \(y\geq x\geq 2\) with \(y\geq 5\) and \(f(x,y)\geq f(3,3)>f(2,5)\) for \(y\geq x\geq 3\). Let \(P\) denote the set of pendent edges of \(G\). Then, \(|\mathbf{E}(G)\setminus P|\geq|P|\). Now, by keeping in mind the above observations, we have \[ABS(G)-ABC(G) = \sum_{uv\in\mathbf{E}(G)\setminus P}\left(\sqrt{\frac{d_{u}+d_{v}-2} {d_{u}+d_{v}}}-\sqrt{\frac{d_{u}+d_{v}-2}{d_{u}\,d_{v}}}\,\right)\] \[+ \sum_{uv\in P}\left(\sqrt{\frac{d_{u}+d_{v}-2}{d_{u}+d_{v}}}- \sqrt{\frac{d_{u}+d_{v}-2}{d_{u}\,d_{v}}}\,\right)\] \[\geq \sum_{uv\in\mathbf{E}(G)\setminus P}\left(\sqrt{\frac{5}{7}}- \frac{1}{\sqrt{2}}\right)+\sum_{uv\in P}\left(\frac{1}{\sqrt{3}}-\frac{1}{ \sqrt{2}}\right)\] \[= |\mathbf{E}(G)\setminus P|\left(\sqrt{\frac{5}{7}}-\frac{1}{ \sqrt{2}}\right)+|P|\left(\frac{1}{\sqrt{3}}-\frac{1}{\sqrt{2}}\right)\] \[\geq |P|\left(\sqrt{\frac{5}{7}}-\frac{1}{\sqrt{2}}\right)+|P| \left(\frac{1}{\sqrt{3}}-\frac{1}{\sqrt{2}}\right)\] \[> 0.\] ## 3 Conclusion and Some Open Problems In this paper, we considered the difference between atom-bond-connectivity (\(ABC\)) and atom-bond sum-connectivity (\(ABS\)) indices. In the case of graphs without pendent vertices, finding the sign of this difference is trivially easy (see Proposition 2.1). On the other hand, in the case of graphs possessing pendent vertices, especially for trees, this difference becomes perplexed and the complete solution of the problem awaits additional studies. Denote the difference \(ABC-ABS\) by \(\Theta\). By means of computer search we found that for trees with \(n\leq 15\) vertices (except in the trivial cases \(n=1,2\)), \(\Theta=0\) never happens. It would be of some interest to extend this finding to higher values of \(n\), or to discover a tree (or a graph with minimum degree \(1\)) for which \(\Theta=0\). Let \(T_{n}\) be the number of trees of order \(n\), and \(t_{n}\) the number of trees of order \(n\) for which \(\Theta<0\). We know that \(t_{n}/T_{n}>0\) for \(n\geq 11\). It is an open problem what the value of \(\lim_{n\to\infty}t_{n}/T_{n}\) is, especially whether it is equal to zero or to unity. _Acknowledgment:_ This research has been funded by the Scientific Research Deanship, University of Hail, Saudi Arabia, through project number RG-23 019.
2305.19717
Is Rewiring Actually Helpful in Graph Neural Networks?
Graph neural networks compute node representations by performing multiple message-passing steps that consist in local aggregations of node features. Having deep models that can leverage longer-range interactions between nodes is hindered by the issues of over-smoothing and over-squashing. In particular, the latter is attributed to the graph topology which guides the message-passing, causing a node representation to become insensitive to information contained at distant nodes. Many graph rewiring methods have been proposed to remedy or mitigate this problem. However, properly evaluating the benefits of these methods is made difficult by the coupling of over-squashing with other issues strictly related to model training, such as vanishing gradients. Therefore, we propose an evaluation setting based on message-passing models that do not require training to compute node and graph representations. We perform a systematic experimental comparison on real-world node and graph classification tasks, showing that rewiring the underlying graph rarely does confer a practical benefit for message-passing.
Domenico Tortorella, Alessio Micheli
2023-05-31T10:12:23Z
http://arxiv.org/abs/2305.19717v1
# Is Rewiring Actually Helpful in ###### Abstract Graph neural networks compute node representations by performing multiple message-passing steps that consist in local aggregations of node features. Having deep models that can leverage longer-range interactions between nodes is hindered by the issues of over-smoothing and over-squashing. In particular, the latter is attributed to the graph topology which guides the message-passing, causing a node representation to become insensitive to information contained at distant nodes. Many graph rewiring methods have been proposed to remedy or mitigate this problem. However, properly evaluating the benefits of these methods is made difficult by the coupling of over-squashing with other issues strictly related to model training, such as vanishing gradients. Therefore, we propose an evaluation setting based on message-passing models that do not require training to compute node and graph representations. We perform a systematic experimental comparison on real-world node and graph classification tasks, showing that rewiring the underlying graph rarely does confer a practical benefit for message-passing. ## 1 Introduction Neural models for graphs [6; 59], commonly called _graph neural networks_ (GNNs), have been successfully applied in many real-world tasks, such as identifying categories of users in social networks or classifying molecules. GNNs typically operate in the _message-passing_ paradigm, that is by exchanging information between nearby nodes according to the graph structure. Messages are computed from the neighbor node features, then aggregated by a permutation-invariant function to provide node representations. With multiple message-passing steps, GNNs are able to learn a hierarchy of representations that capture interactions between increasingly distant nodes. This is accomplished either via multiple iterations of the same parameterized message-passing function [44; 15], or by a deep network of message-passing layers with different learnable parameters [34; 13; 5; 25]. The need for sufficiently deep graph networks arises for tasks that require the discovery of long-range dependencies between nodes, otherwise the model incurs in _under-reaching_[3]. As deep learning on graph progressed, several challenges preventing the computation of effective node representations have emerged. Among those, _over-squashing_ is inherently connected to the inductive bias at the base of GNNs: the problem of encoding an exponentially growing receptive field [34] in a fixed-size node embedding dimension [3]. As simply increasing the width of node representations does not remove the underlying issues caused by the graph topology [12], this has motivated a growing number of methods that alter (i.e. _rewire_) the original graph as a pre-processing to improve message-passing. In this paper, we attempt to meet the need for an _empirical approach_ to assess the benefits of graph rewiring methods. Indeed, altering the input data without taking into account the specific learning task can possibly lead to the loss of critical information. Since the quality of node representations computed on rewired graphs is evaluated according to the accuracy in downstream learning tasks, the use of end-to-end trained models does not allow to decouple the effects caused by graph topology on message-passing from the problems inherently connected to training in deep neural networks. Indeed, while it has been proven that gradient vanishing prevails on over-squashing when the number of message-passing steps is much larger than the range of node interactions needed to solve the task, it still unclear how the two issues interact with each other or what happens in intermediate regimes [12]. Furthermore, GNN models that completely or partially avoid learning representations via training have exhibited performances close to or above common end-to-end trained ones [17; 18; 35; 24], in particular when compared to previous results for rewiring methods applied to trained GNNs [54]. Therefore, as opposed to previous literature, we propose to use message-passing models that compute node representations _without training_, either by being parameter-free [58] or by following the reservoir computing paradigm [15], where parameters are just randomly initialized under certain constraints. Crucially, the issues that graph rewiring methods aim to address are connected with the inductive bias of GNNs [8], that is to the message-passing _per se_, whether is done in the forward or backward pass. This will allow us to assess the actual benefits of graph rewiring on several node and graph classification tasks. The rest of this paper is structured as follows. In Sec. 2 we present a brief survey of the rewiring methods that will be evaluated in our experiments. In Sec. 3 we introduce SGC and GESN, the two training-free message-passing models adopted in our experimental framework. The datasets and results of our experiments will be discussed in Sec. 4, drawing final conclusions in Sec. 5. ## 2 Graph rewiring methods Let \(\mathcal{G}(\mathcal{V},\mathcal{E})\) be a graph with nodes \(v\in\mathcal{V}\) and edges \((u,v)\in\mathcal{E}\), each node having associated input features \(\mathbf{x}_{v}\in\mathbb{R}^{X}\). We denote by \(\mathcal{N}_{v}\) the set of neighbors of node \(v\) with cardinality (i.e. degree) \(d_{v}\), and respectively by \(\mathbf{A}\), \(\mathbf{D}\), \(\mathbf{L}\) the graph adjacency, degree and Laplacian matrices. We also define the symmetric normalized adjacency \(\mathbf{A}_{\text{sym}}=\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{- \frac{1}{2}}\), the random-walk normalized adjacency \(\mathbf{A}_{\text{rw}}=\mathbf{A}\mathbf{D}^{-1}\), and the mean-aggregation normalized adjacency \(\mathbf{A}_{\text{mean}}=\mathbf{D}^{-1}\mathbf{A}\), along with the respective normalized Laplacians \(\mathbf{L}_{\text{sym}}\), \(\mathbf{L}_{\text{rw}}\), \(\mathbf{L}_{\text{mean}}\), and the self-loop augmented \(\hat{\mathbf{A}}=\mathbf{A}+\mathbf{I}\). Finally, we denote by \(\mathbf{A}^{+}\) the pseudo-inverse of matrix \(\mathbf{A}\). Throughout the paper we assume the graphs to be undirected. A graph neural network (GNN) computes node representations \(\mathbf{h}_{v}\in\mathbb{R}^{H}\) via a deep neural network of \(L\) message-passing layers. Each layer \(k=1,...,L\) computes a new node representation \(\mathbf{h}_{v}^{(k)}\) by performing a permutation-invariant aggregation of messages computed from the previous layer representations of neighbor nodes \(\mathbf{h}_{u}^{(k-1)}\). Without much loss of generality, we assume the message-passing layers to have the form \[\mathbf{h}_{v}^{(k)}=\phi_{k}\left(\sum_{u\in\mathcal{V}}M_{vu}\,\psi_{k} \left(\mathbf{h}_{u}^{(k-1)}\right)\right),\quad\mathbf{h}_{v}^{(0)}=\mathbf{ x}_{v}, \tag{1}\] where local neighbors of \(v\) are implicitly defined as nodes \(u\) such that \(M_{vu}\neq 0\). By \(\mathbf{M}\) we denote the message-passing matrix, which is a graph shift operator. Such operator that can be e.g. either the adjacency \(\mathbf{A}\), the Laplacian \(\mathbf{L}\), or one of their normalizations. In this case, the aggregations are performed on graph neighborhoods \(\mathcal{N}_{v}\). Message-passing layers can thus represent the relationships induced by graph connectivity in an efficient manner by leveraging the graph structure sparsity. To capture long-range dependencies between nodes, GNNs must perform at least as many message-passing steps (i.e., have as many layers) as the distance between node pairs to not incur in under-reaching [3]. However, building deep GNNs presents an inherent challenge. As depth increases, the receptive field of nodes [34] grows exponentially, thus requiring more information to be encoded in the same fixed-size vectors. This problem is called over-squashing [3]. Topping et al. [52] have investigated this phenomenon via the analysis of node representations sensitivity to input features. Assuming there exist an \(L\)-path from node \(u\) to node \(v\), the sensitivity of \(\mathbf{h}_{v}^{(L)}\) to input features \(\mathbf{x}_{u}\) is upper bounded by \[\left\|\frac{\partial\mathbf{h}_{v}^{(L)}}{\partial\mathbf{x}_{u}}\right\|\ \leq\ \underset{\underset{\text{Lipschitz constants}}{ \underbrace{k=1}}}{\prod}\|\phi_{k}\|\|\psi_{k}\|\underset{\text{graph topology}}{\underbrace{\left(\mathbf{M}^{L}\right)_{vu}}}. \tag{2}\] Over-squashing arises when the derivative in (2) becomes too small, indicating that the representation of node \(v\) is mostly insensitive to the information initially present at node \(u\). While increasing the layer Lipschitz constants or the dimension \(H\) can mitigate the issue [54, 12], this may come at the expense of model generalization [32]. Therefore, different methods have been proposed to alter the graph topology in a more favorable way to message-passing. In this paper we focus on graph rewiring methods which change the initial graph--or equivalently, the message-passing matrix \(\mathbf{M}\)--as a pre-processing step, as opposed to e.g. the implicit rewiring done by attention mechanisms [55]. Diffusion processesGraph diffusion was originally proposed as a way of aggregating nodes beyond the immediate \(1\)-hop neighborhood [26], thus allowing a single message-passing layer to directly consider information from more distant nodes. The generalized graph diffusion matrix is computed by the power series \(\sum_{m=0}^{\infty}\theta_{m}\mathbf{A}^{m}\), where the choice of coefficients \(\theta_{m}\) defines the particular diffusion process and \(\mathbf{A}\) can be replaced by any other transition matrix. Two examples of graph diffusion are the heat kernel [27] with \(\theta_{m}^{\text{Heat}}=e^{-t}\frac{1}{m!},t>0\), and personalized PageRank [40] with \(\theta_{m}^{\text{ PageRank}}=\alpha(1-\alpha)^{m},0<\alpha<1\), which correspond respectively to the message-passing matrices \[\mathbf{M}_{\text{Heat}}=e^{-t\mathbf{A}}\quad\text{and}\quad\mathbf{M}_{ \text{PageRank}}=\alpha(1-(1-\alpha)\mathbf{A})^{+}. \tag{3}\] Diffusion-based rewiring was proposed exclusively for node-level tasks. Local bottlenecksIn their analysis of over-squashing, Topping et al. [52] have linked its causes to _bottlenecks_ in the graph topology that happen where the graph structure locally resembles a tree. Intuitively, for a tree the receptive field grows exponentially in the branching factor, while at the other opposite a complete graph has a constant receptive field. To provide a metric to quantify this local behavior, they have proposed the balanced Forman curvature, defined as \[\text{Ric}_{uv}=\underbrace{\frac{2}{d_{u}}+\frac{2}{d_{v}}}_{\text{tree-likeness}}-2 +2\underbrace{\frac{\sharp^{\triangle}_{uv}}{\max\{d_{u},d_{v}\}}+\frac{ \sharp^{\triangle}_{uv}}{\min\{d_{u},d_{v}\}}}_{\text{local similarity to a complete graph}}+\underbrace{\frac{\sharp^{\square}_{uv}+\sharp^{\square}_{vv}}{ \star^{\max}_{uv}}}_{\text{grid-likeness}}, \tag{4}\] where \(\sharp^{\triangle}_{uv}\) is the number of triangles on the edge \((u,v)\), \(\sharp^{\square}_{uv}\) is the number of neighbors of \(u\) forming a \(4\)-cycle based on the edge \((u,v)\) without diagonals inside, and \(\gamma^{\max}_{uv}\) is a normalization factor. For a graph having only positively curved edges (i.e. \(\text{Ric}_{uv}>0\) for all \((u,v)\)) it has been proved that the receptive field grows at most polynomially [52]. Therefore, rewiring algorithms that aim at increasing the graph curvature have been proposed. SDRF [52] iteratively samples an edge \((u,v)\) proportionally to how negatively curved it is, then adds the new edge \((u^{\prime},v^{\prime})\) able to provide the largest increase of \(\text{Ric}_{uv}\). (The algorithm optionally removes the most positively curved edges to avoid growing the graph excessively.) Global bottlenecksThe edge curvature defined in equation (4) is not the only way to measure the presence of bottlenecks in the graph topology. A more _global_ metric is the Cheeger constant \(\mathfrak{h}_{\mathcal{G}}\), which quantifies the minimum fraction of edges that need to be removed in order to make the graph disconnected. A small \(\mathfrak{h}_{\mathcal{G}}\) thus indicates that few edges act as a bridge between two otherwise disconnected communities. However, computing the Cheeger constant is an NP-hard problem, so the lower bound given by the spectral gap \(\lambda_{1}\) (i.e. the smallest positive Laplacian eigenvalue) is used as a proxy measure in practice: \(\mathfrak{h}_{\mathcal{G}}\geq\frac{1}{2}\lambda_{1}\)[36]. GRLEF [7] proposes to improve a graph spectral gap by working exclusively _locally_ via the triangle counts \(\sharp^{\triangle}_{uv}\), which are cheaper to compute as they require just neighborhood information. The algorithm iteratively samples an edge \((u,v)\) proportionally to the inverse of triangle count, that is from an area of the graph that is locally far away from being fully-connected. Then it chooses the pair of edges \((u,u^{\prime}),(v,v^{\prime})\) to flip into \((u,v^{\prime}),(v,u^{\prime})\) which provides the smallest net change in triangle count. This behavior can be interpreted as mitigating a very low local curvature (as suggested by the small term \(\sharp^{\triangle}_{uv}\) in \(\text{Ric}_{uv}\)) at the expense of a reduction in curvature of more positively curved neighboring edges. Banerjee et al. [7] supported the approach of their rewiring algorithm by empirically finding a correspondence between triangle count decrease and spectral gap increase. Expander propagationThere is a class of graphs that avoid global bottlenecks by construction: expander graphs are simultaneously sparse and highly connected [23]. Additionally, expander families of graphs are characterized by an uniform lower bound on the Cheeger constant [2], and for uniform maximal node degree their diameter is also logarithmic in the number of nodes [37; 1]. Deac et al. [11] have thus proposed to interleave the message propagation on the original graph with message-passing on an expander graph to provide for information propagation over bottlenecks. The expander graphs adopted for EGP [11] are derived from the Cayley graphs of finite groups \(\mathrm{SL}(2,\mathbb{Z}_{n})\), which are \(4\)-regular and thus guarantee sparsity. Interestingly, these graphs have all negatively curved edges with \(\mathsf{Ric}_{uv}=-\frac{1}{2}\). In our experiments we will thus use the message-passing matrix \(\mathbf{M}_{\mathrm{EGP}}=\mathbf{A}_{\mathrm{Cay}}\,\mathbf{A}\), where \(\mathbf{A}_{\mathrm{Cay}}\) is the adjacency matrix of said Cayley graphs. Effective resistanceEffective resistance [14] provides an additional way to measure bottlenecks in graph topology. The resistance \(\mathsf{Res}_{uv}\) between two nodes is proportional to the commute time \(\mathsf{Com}_{uv}\), which is the number of expected steps for a random walk to go back and forth between nodes \(u,v\). 1 An high resistance between two nodes is an indication of the difficulty for messages to pass from node \(u\) to node \(v\). Black et al. [9] proved a sensitivity bound similar to (2) relating high effective resistance \(\mathsf{Res}_{vu}\) between pairs of nodes to a reduced sensitivity of the representations \(\mathbf{h}_{v}^{(L)}\) to input features \(\mathbf{x}_{u}\). Furthermore, effective resistance is inversely related to the square of the Cheeger constant by the inequality \(\max_{(u,v)\in\mathcal{E}}\mathsf{Res}_{uv}\leq\frac{1}{b_{0}^{2}}\)[4]. Arnaiz-Rodriguez et al. [4] have proposed a layer for learning effective resistance to re-weight the original graph adjacency (hence 'DiffWire') in the perspective of sampling a spectrally similar but sparser graph which preserves the graph structural information [47]. The additional intuitive effect is to enlarge the relative capacity of high resistance edges, which correspond to bridges over more densely connected communities. In our experiments we implement the DiffWire approach by computing the effective resistance in exact form by \(\mathsf{Res}_{uv}=(\mathbf{1}_{u}-\mathbf{1}_{v})^{\top}\mathbf{L}^{+}(\mathbf{ 1}_{u}-\mathbf{1}_{v})\) with \(\mathbf{1}_{u}\) the indicator vector of node \(u\). The resulting message-passing matrix therefore is \(\mathbf{M}_{\mathrm{DiffWire}}=\mathsf{Res}\odot\mathbf{A}\), where '\(\odot\)' denotes the elementwise product. Footnote 1: Precisely, \(\mathsf{Res}_{uv}=\frac{1}{\sum_{v\in\mathcal{V}}d_{v}}\mathsf{Com}_{uv}\). ## 3 Training-free graph neural networks Since graph rewiring methods work as a pre-processing step on the input graph, the choice of GNN model is crucial to assess their benefits in downstream task accuracy. So far, only end-to-end trained models have been used, such as GCN [25] in [52]. This approach does not allow to consider the effects of over-squashing independently from the other issues that can affect training in message-passing models, such as gradient vanishing. By learning node and graph representations jointly with the task prediction readout, the experimental results become inextricably linked to how training is conducted. Therefore, we propose to apply GNNs that compute node and graph representation _without training_ in our experimental setting for assessing the actual contributions of graph rewiring. Indeed, rewiring methods aim to address issues connected with the model bias itself, that is local aggregation of messages computed from node structural neighbors, independently from whether message-passing is done in the forward or backward pass. Isolating the inductive bias of a model from Figure 1: The two different model architectures of SGC [58] and GESN [15]. training is not completely unprecedented, as it was previously employed for the analysis of recurrent neural networks [51, 50, 16]. For our experiments we adopt two training-free models with different architectural biases, SGC [58] and GESN [15]. In particular, the latter has achieved performances in line with or better than widely adopted end-to-end trained GNNs in node classification tasks [35], also significantly improving upon previous results that include rewiring as graph pre-processing [54]. This may suggest that the training process itself can pose serious challenges. As an example of how end-to-end training can egregiously fail, in Tab. 1 we report the accuracy of some common GNN models (GCN [25], GraphSAGE [22], GAT [55]) on two node classification tasks. Platonov et al. [42] have observed that both Squirrel and Chameleon present a large number of duplicated nodes--i.e., nodes sharing same local structure and features, resulting in a training-test leakage. Nevertheless, the accuracy of end-to-end trained models is significantly worse than the two training-free GNNs, and is actually much closer to a graph-agnostic baseline (MLP). This is an additional motivation for our choice of excluding end-to-end trained GNNs in our rewiring evaluation framework and rely on the models presented below. SgcA straightforward way to compute node representations without the need for training is to replace the functions \(\phi_{k},\psi_{k}\) in (1) with the identity, thus removing altogether parameters in layers. Such approach was previously proposed by [58] as a simplification of graph convolution by removing non-linearities, hence the name SGC. This model therefore is reduced to pure message-passing (Fig. 0(a)), with node representations computed after \(L\) message-passing steps as \[\mathbf{h}^{(L)}=\mathbf{M}^{L}\,\mathbf{x}. \tag{5}\] Notice that this model was proposed exclusively for node-level tasks [58]. GesnA different approach for training-free models is to follow the reservoir computing (RC) paradigm [39, 31, 56], where input representations (or embeddings) are computed by a dynamical system with randomly initialized parameters. Combining the recursive embedding approach of [44] with RC, Graph Echo State Networks [15] compute node representations by iterating up to \(L\) times the same message-passing function \[\mathbf{h}^{(k)}_{v}=\tanh\left(\mathbf{W}_{\text{in}}\,\mathbf{x}_{v}+\sum_{ u\in\mathcal{V}}M_{uv}\hat{\mathbf{W}}\,\mathbf{h}^{(k-1)}_{u}+\mathbf{b} \right),\quad\mathbf{h}^{(0)}_{v}=\mathbf{0}, \tag{6}\] where \(\mathbf{W}_{\text{in}}\in\mathbb{R}^{H\times X}\), \(\mathbf{b}\in\mathbb{R}^{H}\) and \(\hat{\mathbf{W}}\in\mathbb{R}^{H\times H}\) are respectively the input-to-reservoir, bias, and recurrent weights for a reservoir with \(H\) units. This can be interpreted as a form of parameter sharing between message-passing layers (Fig. 0(b)). Notice also that equation (6) slightly departs from (1) due to the presence of input skip connections. All reservoir weights are randomly initialized, with \(\mathbf{W}_{\text{in}}\) rescaled to accommodate the input features range, and \(\hat{\mathbf{W}}\) rescaled to control the Lipschitz constant of (6). For \(\|\hat{\mathbf{W}}\|\ \|\mathbf{M}\|<1\) the message-passing function is contractive [53], that is the iterations of (6) converge to a fixed point \(\mathbf{h}^{(\infty)}\) as \(L\to\infty\). While this regime has been shown to be optimal for graph-level tasks, node-level tasks instead benefit from a non-contractive initialization \(\|\hat{\mathbf{W}}\|\ \|\mathbf{M}\|>1\), as the upper bound on input sensitivity (2) intuitively suggests. In the non-contractive regime, a choice of \(L\) larger then the graph diameter is sufficient to ensure effective node representations [35]. To produce graph representations for graph-level tasks, we apply a parameter-free global pooling operation, such as sum or mean pooling, to the final node representations: \[\mathbf{h}^{\text{\tiny{SUM}}}_{\mathcal{G}}=\sum_{v\in\mathcal{V}}\mathbf{ h}^{(L)}_{v},\quad\mathbf{h}^{\text{\tiny{MEAN}}}_{\mathcal{G}}=\tfrac{1}{| \mathcal{V}|}\sum_{v\in\mathcal{V}}\mathbf{h}^{(L)}_{v}. \tag{7}\] ReadoutTo solve a downstream node (or graph) classification task, there still remains the need to train a predictor. For this purpose, we use a linear readout layer \(\mathbf{y}_{v}=\mathbf{W}_{\text{out}}\,\mathbf{h}^{(L)}_{v}+\mathbf{b}_{\text {out}}\), where the \begin{table} \begin{tabular}{l c c} \hline \hline & **Squirrel** & **Chameleon** \\ \hline MLP [62] & \(29.68\pm 1.81\) & \(46.36\pm 2.52\) \\ \hline GCN [62] & \(36.89\pm 1.34\) & \(59.82\pm 2.58\) \\ SAGE [62] & \(41.61\pm 0.74\) & \(58.73\pm 1.68\) \\ GAT [62] & \(30.62\pm 2.11\) & \(54.69\pm 1.95\) \\ \hline GCN [42] & \(39.06\pm 1.52\) & \(50.18\pm 3.29\) \\ SAGE [42] & \(35.83\pm 1.32\) & \(50.18\pm 1.78\) \\ GAT [42] & \(32.21\pm 1.63\) & \(45.02\pm 1.75\) \\ \hline SGC & \(72.88\pm 1.20\) & \(76.16\pm 1.87\) \\ GESN [35] & \(73.56\pm 1.62\) & \(77.05\pm 1.24\) \\ \hline \hline \end{tabular} \end{table} Table 1: Examples of underfitting in common end-to-end trained models on node classification (experimental setting of [41]). weights \(\mathbf{W}_{\mathrm{out}}\in\mathbb{R}^{C\times H},\mathbf{b}_{\mathrm{out}}\in \mathbb{R}^{C}\) are trained by ridge regression on one-hot encodings of target classes \(y_{v}\in 1,...,C\). This can be achieved efficiently in closed-form even on large data [61], thus removing also from the readout any issue connected to back-propagation learning. ## 4 Experiments and discussion We evaluate the graph rewiring methods of Sec. 2 jointly with the training-free GNNs presented in the previous section on several real-world classification tasks, many of whom were also adopted in previous rewiring literature [52; 4]. The aim of our experimental approach is to provide a tool for examining the effects of rewiring from a different perspective than previously pursued in literature, thanks to decoupling the inductive bias of GNNs from the training process. DatasetsFor node classification tasks, we adopt six graphs of up to \(20{,}000\) nodes. Cora [33], CiteSeer [20], PubMed [45] are paper citation networks, where input node features are bag-of-words representations of paper content, and the target is the research topic. Film [49] is a network induced by co-occurrences of actors in the same Wikipedia page, grouped into five categories [41]. TwitchDE [43; 30] is a social network of German garcer accounts from Twitch classified into suitable for work or adult profiles. Tolokers [29; 42] is a collaboration network of users extracted from the crowdsourcing platform Toloka, where the task is to determine whether an user is active or not (since the two classes are unbalanced, the evaluation metric in this case is area under the ROC curve instead of accuracy). The first three are homophilous node classification tasks, while the other three tasks present low homophily. For graph classification we adopt six tasks from the TUDataset collection [38]. NCI-1, NCI-109 [57; 46] are molecules to be classified as cancerogenous or not, where node input features are one-hot encodings of atom type, and edges correspond to chemical bounds. Reddit-B, Reddit-5K, Reddit-12K [60] are interaction networks between users in Reddit discussion threads, and the classification task is to identify the type of sub-reddit the discussions belong to. Collab [28; 60] is a collection of ego-networks belonging to three different scientific collaboration fields. Both Reddit tasks and Collab have no node input features. In all tasks we have consciously avoided adding structural input features to the graph nodes, such as node degrees or positional encodings [48]. Relevant dataset statistics are reported in Tab. 2. Experimental settingFor all classification tasks we have generated with class stratification \(5\)-fold selection/test splits with inner validation holdout, thus resulting in 60:20:20 training/validation/test set ratios. Both GNN and rewiring algorithm parameters are jointly selected on each validation fold. For SGC, we select the number of message-passing iterations \(L\in[1,15]\) and the type of \begin{table} \begin{tabular}{l r r r r r r} \hline \hline & **Cora** & **CiteSeer** & **PubMed** & **Film** & **TwitchDE** & **Tolokers** \\ \hline nodes & \(2{,}708\) & \(3{,}327\) & \(19{,}717\) & \(7{,}600\) & \(9{,}498\) & \(11{,}758\) \\ edges & \(10{,}556\) & \(9{,}104\) & \(88{,}648\) & \(53{,}504\) & \(153{,}138\) & \(519{,}000\) \\ average degree & \(3.90\) & \(2.74\) & \(4.50\) & \(7.03\) & \(16.14\) & \(88.28\) \\ diameter & \(19\) & \(28\) & \(18\) & \(12\) & \(7\) & \(11\) \\ node features & \(1{,}433\) & \(3{,}703\) & \(500\) & \(932\) & \(2{,}514\) & \(10\) \\ classes & \(7\) & \(6\) & \(3\) & \(5\) & \(2\) & \(2\) \\ edge homophily & \(0.81\) & \(0.74\) & \(0.80\) & \(0.22\) & \(0.63\) & \(0.59\) \\ \hline \hline \multicolumn{7}{c}{Graph Classification} \\ \hline & **NCI-1** & **NCI-109** & **Reddit-B** & **Reddit-5K** & **Reddit-12K** & **Collab** \\ \hline graphs & \(4{,}110\) & \(4{,}127\) & \(2{,}000\) & \(4{,}999\) & \(11{,}929\) & \(5{,}000\) \\ average nodes & \(30\) & \(30\) & \(430\) & \(509\) & \(391\) & \(75\) \\ average edges & \(32\) & \(32\) & \(498\) & \(595\) & \(457\) & \(2{,}458\) \\ average degree & \(2.16\) & \(2.16\) & \(2.34\) & \(2.25\) & \(2.28\) & \(37{,}37\) \\ average diameter & \(13{,}33\) & \(13{,}13{,}\) & \(9.72\) & \(11{,}96\) & \(10{,}91\) & \(1.86\) \\ node features & \(37\) & \(38\) & \(0\) & \(0\) & \(0\) & \(0\) \\ classes & \(2\) & \(2\) & \(2\) & \(5\) & \(11\) & \(3\) \\ \hline \hline \end{tabular} \end{table} Table 2: Dataset statistics. message-passing matrix (adjacency, Laplacian, or one of their normalizations, with or without the addition of self-loops). For GESN, we select the reservoir size (i.e. node representation dimension) \(H\in[2^{4},2^{12}]\), the input scaling factor in \([0,1]\), and the Lipschitz constant. For the latter, we actually follow the reservoir computing practice of selecting the spectral radius \(\rho(\hat{\mathbf{W}})\) instead of the spectral norm \(\|\hat{\mathbf{W}}\|\), as the radius is a lower bound on the norm [21] and it is cheaper to compute [19]. We select \(\rho(\hat{\mathbf{W}})\in[0.1/\rho(\mathbf{M}),30/\rho(\mathbf{M})]\), while the number of message-passing iterations is fixed at \(L=30\), which is comfortably larger than graph diameters in our datasets [35]. For graph-level tasks we also select the pooling function from the two defined in (7). As for graph rewiring algorithms, we select \(t\in[0.1,5]\) for heat diffusion, and \(\alpha\in[0.01,0.99]\) for PageRank diffusion. We run SDRF and GRLEF for a number of iterations corresponding to up to \(20\%\) of the graph edges, without performing edge removal in the former. Finally, the regularization for the closed-form ridge regression to train the readout classifier is selected in \([10^{-5},10^{3}]\). ResultsWe report the results of our experiments in Tab. 3-5. The baseline accuracy corresponds to the model applied on the original graph without any rewiring. We highlight better or worse accuracy with respect to the baseline when statistically significant (\(p<0.05\)), denoting no improvements otherwise. The experiments were executed on an NVIDIA A100 with 40GB of GPU RAM. For reference, a single complete model selection for GESN excluding rewiring took up to \(3.5\) hours. 'OOR' in Tab. 3-4 indicates that SDRF exceeded the limit of 10 days of computation for Tolokers. On node classification tasks the only rewiring methods able to achieve some significant improvements over the baseline both for SGC and GESN are the diffusion-based heat and PageRank rewiring. This improvement is present both on high and low homophily graphs, that is respectively PubMed and Film. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & **Cora** & **CiteSeer** & **PubMed** & **Film** & **TwitchDE** & **Tolokers** \\ \hline Baseline & \(87.70\pm 1.34\) & \(75.84\pm 0.93\) & \(89.53\pm 0.49\) & \(35.23\pm 0.70\) & \(68.62\pm 1.04\) & \(84.40\pm 1.02\) \\ \cline{2-7} Heat & \(87.86\pm 1.50\) & \(75.34\pm 0.88\) & \(89.22\pm 0.33\) & \(36.87\pm 1.05\) & \(68.26\pm 0.30\) & \(84.20\pm 1.17\) \\ PageRank & \(87.50\pm 1.30\) & \(75.20\pm 1.32\) & \(89.19\pm 0.42\) & \(35.91\pm 1.06\) & \(67.88\pm 0.49\) & \(82.63\pm 1.18\) \\ SDRF & \(86.60\pm 1.56\) & \(74.84\pm 1.66\) & \(89.20\pm 0.40\) & \(34.92\pm 0.55\) & \(68.54\pm 0.80\) & OOR \\ GRLEF & \(86.06\pm 1.56\) & \(74.74\pm 1.73\) & \(89.11\pm 0.74\) & \(35.05\pm 0.87\) & \(67.66\pm 0.70\) & \(82.64\pm 1.19\) \\ EGP & \(86.95\pm 2.51\) & \(74.62\pm 1.85\) & \(89.50\pm 0.42\) & \(35.06\pm 0.78\) & \(68.68\pm 0.98\) & \(84.50\pm 1.02\) \\ DiffWire & \(86.51\pm 1.74\) & \(74.03\pm 2.20\) & \(88.81\pm 0.49\) & \(35.01\pm 0.74\) & \(68.15\pm 0.33\) & \(84.77\pm 0.95\) \\ \hline \hline \end{tabular} \end{table} Table 4: Node classification with GESN. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & **NCI-1** & **NCI-109** & **Reddit-B** & **Reddit-5K** & **Reddit-12K** & **Collab** \\ \hline Baseline & \(78.09\pm 1.64\) & \(77.56\pm 0.83\) & \(87.23\pm 1.38\) & \(53.86\pm 1.49\) & \(44.02\pm 0.54\) & \(72.49\pm 0.77\) \\ \cline{2-7} SDRF & \(73.39\pm 0.63\) & \(72.35\pm 1.46\) & \(87.02\pm 1.30\) & \(53.84\pm 1.55\) & \(44.07\pm 0.47\) & \(71.25\pm 1.09\) \\ GRLEF & \(73.74\pm 1.40\) & \(71.76\pm 1.31\) & \(85.89\pm 2.02\) & \(53.17\pm 1.26\) & \(42.94\pm 1.23\) & \(72.23\pm 0.86\) \\ EGP & \(78.31\pm 1.63\) & \(77.49\pm 0.65\) & \(87.28\pm 1.29\) & \(53.78\pm 1.34\) & \(44.08\pm 0.48\) & \(72.17\pm 0.87\) \\ DiffWire & \(78.14\pm 1.61\) & \(77.48\pm 0.65\) & \(84.54\pm 2.44\) & \(53.58\pm 0.72\) & \(41.37\pm 0.68\) & \(66.31\pm 0.76\) \\ \hline \hline \end{tabular} \end{table} Table 5: Graph classification with GESN. This comes as a surprise, since these methods were dismissed in previous literature [52]. We may conjecture that by acting as low-pass filters on the graph spectra [26], diffusion methods can improve the spectral gap \(\lambda_{1}\) (i.e. the smallest positive Laplacian eigenvalue) in certain graphs, possibly resulting in a rewired graph with a larger Cheeger constant since \(\mathfrak{h}_{G}\geq\frac{\lambda_{1}}{2}\)[36]. The other rewiring methods do not provide significant improvements in accuracy, both on node and graph classification tasks. Actually, they can cause a significant degradation of accuracy. To investigate the effects of rewiring algorithms that explicitly act on local bottlenecks of graph topology, we analyze the distribution of edge curvature before and after rewiring (Fig. 1(a)). Notice that the overall curvature distribution is not improved; in particular, the one of SDRF appears to become even more skewed towards negatively curved edges. This is confirmed by observing the differences between initial and final edge curvature in the scatter plots of Fig. 1(b), where the predominant number of edges appears in red below the diagonal, denoting than edge curvature has actually become more negative instead of improving. This behavior can be explained by recalling that the algorithm acts _greedily_ on _local_ curvature information, without accounting for the effects on _global_ graph curvature when deciding where to add supporting edges [12]. As previously stated in Sec. 2, the spectral gap is a proxy measure of global graph bottlenecks. In Fig. 3 we analyze the effects of the two local rewiring algorithms SDRF and GRLEF on this global property. While a more positive curvature should also improve the spectral gap since \(\lambda_{1}\geq\min_{(u,v)\in\mathcal{E}}\text{Ric}_{uv}\)[52], the failure of SDRF in generally increasing edge curvature results in an unchanged \(\lambda_{1}\). On the other end, GRLEF is in some cases able to provide some increase in the spectral gap. However, this does not necessarily translates into an improvement of node classification accuracy, as the results on TwitchDE and Tolokers show. EGP seems to have little effect on accuracy both on graph and node classification tasks in general. As for DiffWire, the significant degradation of accuracy on Collab and Reddit-12K could be attributed to a magnification of spurious edges between network communities. While the scope of our empirical approach is to validate the effectiveness of graph rewiring purely on message-passing, to put the results of our experiments into perspective we recall that the addition of training to a message-passing model on the rewired graph has shown no improvements over training-free baselines [54]. As for a comparison with end-to-end trained GNNs, we refer to the results of rewiring algorithms in the respective original papers. LimitationsSince our experimental framework is based on training-free GNNs, we have necessarily left out models that learn the graph structure jointly with node representations, or that perform an implicit rewiring via attention mechanisms. We may also have left out from our evaluation rewiring Figure 3: Effects of SDRF and GRLEF on spectral gaps \(\lambda_{1}\). Figure 2: Effects of SDRF and GRLEF on graph curvature Ric for Cora.
2302.14294
Flocking to Mastodon: Tracking the Great Twitter Migration
The acquisition of Twitter by Elon Musk has spurred controversy and uncertainty among Twitter users. The move raised as many praises as concerns, particularly regarding Musk's views on free speech. As a result, a large number of Twitter users have looked for alternatives to Twitter. Mastodon, a decentralized micro-blogging social network, has attracted the attention of many users and the general media. In this paper, we track and analyze the migration of 136,009 users from Twitter to Mastodon. Our analysis sheds light on the user-driven pressure towards centralization in a decentralized ecosystem and identifies the strong influence of the social network in platform migration. We also characterize the activity of migrated users on both Twitter and Mastodon.
Haris Bin Zia, Jiahui He, Aravindh Raman, Ignacio Castro, Nishanth Sastry, Gareth Tyson
2023-02-28T03:59:19Z
http://arxiv.org/abs/2302.14294v1
# Flocking to Mastodon: Tracking the Great Twitter Migration ###### Abstract The acquisition of Twitter by Elon Musk has spurred controversy and uncertainty among Twitter users. The move raised as many praises as concerns, particularly regarding Musk's views on free speech. As a result, a large number of Twitter users have looked for alternatives to Twitter. Mastodon, a decentralized micro-blogging social network, has attracted the attention of many users and the general media. In this paper, we track and analyze the migration of 136,009 users from Twitter to Mastodon. Our analysis sheds light on the user-driven pressure towards centralization in a decentralized ecosystem and identifies the strong influence of the social network in platform migration. We also characterize the activity of migrated users on both Twitter and Mastodon. ## 1 Introduction In October 2022, Elon Musk, a self-declared "free speech absolutist" acquired Twitter -- the social network that he regarded as the "de facto town square" where public debate takes place. Musk's takeover has been controversial and highly publicized. Some users admire Musk and his takeover, regarding it as crucial for free speech; others have expressed concerns over increased misinformation and toxicity. Regardless of one's stance, it is undeniable that the acquisition has led to a series of noteworthy events. On November 04, 2022, Musk fired half of the 7,500 employees previously working at Twitter. Two weeks later (November 17, 2022), hundreds of employees resigned in response to an ultimatum to commit to "extremely hard-core" work or leave. These events and the associated public backlash, prompted many users to search for alternatives. Figure 0(a) presents a time series of Google trend search interest for "Twitter alternatives". We observe a large spike on October 28, 2022, the day after Musk's takeover. Similarly, Figure 0(b) shows equivalent search interest for other popular alternatives to Twitter, _e.g._ Koo (an Indian micro-blogging and social networking service), and Hive (a micro-blogging service that permits NSFW mature content). One platform that stands out as being particularly prominent is _Mastodon_, a decentralized micro-blogging platform. Although released in 2016, Mastodon has anecdotally gathered significant attention since October 2022. It is part of the wider _fediverse_, in which any person can create and operate their own Mastodon server (aka "instance"). Each Mastodon instance operates as an independent microblogging service, where users can create local accounts and enjoy similar functions to Twitter (_e.g._ posting, following). Importantly, these instances can also federate together, allowing users on one instance to follow users on another. This means that Mastodon operates in a decentralized fashion (with people joining independent instances), while retaining the ability to interact across the entire globe. This new paradigm has attracted significant attention and has made it an obvious candidate for users who are unhappy with the Musk acquisition (and the associated centralization of power in the hands of one individual). This sudden interest in Mastodon offers a unique opportunity to study the migration of users between social networks. This is particularly the case due to the differing value propositions of the two platforms, with clear contrasts in the governance and ownership of Twitter vs. Mastodon. The unusual circumstances of the migration create further dimensions of analysis. With this context in mind, we explore the following three research questions: * **RQ1** How are new users spread across Mastodon instances, and are there any consequences for decentralization? * **RQ2** How much (if at all) does a user's ego-centric Twitter follower network influence their migration to Mastodon? * **RQ3** What are usage patterns of migrated users across both platforms? To address these questions, we track 136,009 unique twitter users who moved to 2,879 unique Mastodon instances. The main findings related to our three RQs are as follows: * There is a user-driven pressure towards centralization on Mastodon (the top 25% most populous instances contain 96% of the users). This pressure is counterbalanced by the greater activity of the users on smaller instances. On average, users of single-user instances post 121% more statuses than users on bigger instances. * The social network of users on Twitter influences their choice of an instance on Mastodon _e.g._ 4.09% of users changed the instance on which they created an account (when they first migrated to Mastodon) and moved to the instance of choice of their Twitter followees who migrated to Mastodon as well. * Users tend to post different content across the two platforms. On average, only 1.53% of a user's Mastodon posts are identical to their Twitter posts. Twitter hosts more diverse topics ranging from Entertainment to Politics, whereas discussions around Fediverse and Migration dominate on Mastodon. ## 2 Mastodon Primer Mastodon is an open-source [15] federated server platform released in 2016. It offers micro-blogging functionality, allowing administrators to create their own independent Mastodon servers, aka **instances**. Each unique Mastodon instance works much like Twitter, allowing users to register new accounts and share statuses with their followers - equivalent to tweeting on Twitter. Users can also **boost** others' statuses - equivalent to retweeting on Twitter. Instances can work in isolation, only allowing locally registered users to follow each other. However, Mastodon instances can also **federate**, whereby users registered on one instance can follow users registered on another instance. This results in the instance **subscribing** to posts performed on the remote instance, such that they can be pushed across and presented to local users. For simplicity, we refer to users registered on the same instance as **local**, and users registered on different instances as **remote**. Note that a user registered on their local instance does _not_ need to register with the remote instance to follow the remote user. Instead, a user just creates a single account with their local instance; when the user wants to follow a user on a remote instance, the user's local instance performs the subscription on the user's behalf. This process is implemented using an underlying subscription protocol, ActivityPub [1]. This makes Mastodon compatible with other decentralised micro-blogging implementations (notably, Pleroma). The **Fediverse**, refers to the growing group of ActivityPub compatible, and therefore interconnected, applications. When a user logs in to their local instance, they are Figure 1: Interest over time for the search terms (a) Twitter alternatives and (b) Mastodon, Koo & Hive Social. presented with three timelines: (_i_) a _home_ timeline, with statuses shared by the accounts whom the user follows; (_ii_) a _local_ timeline, listing the statuses generated within the same instance; and (_iii_) a _federated_ timeline, with _all_ statuses that have been retrieved from remote instances. The latter is not limited to remote statuses that the user follows; rather, it is the union of remote statuses retrieved by all users on the instance. ## 3 Data Collection ### Mastodon Accounts from Twitter We collect a global list of Mastodon instances from instances.social, which contains a comprehensive index of Mastodon instances. We compile a set of 15,886 unique instances. We then collect all tweets containing a link to these Mastodon instances using Twitter's Search API.1 Additionally, we collect all tweets containing the following list of keywords related to the migration from Twitter:'mastodon', 'bye bye twitter', 'good byte twitter'; and hashtags #Mastodon, #MastodonMigration, #ByeByeTwitter, #GoodByeTwitter, #TwitterMigration, #MastodonSocial, #RIPTwitter. In total, we collect 2,090,940 tweets posted by 1,024,577 users between October 26, 2022 (_i.e._ a day before Musk's takeover) and November 21, 2022. Figure 2 shows the temporal distribution of these tweets. Footnote 1: [https://api.twitter.com/2/tweets/search/all](https://api.twitter.com/2/tweets/search/all) We next search for Mastodon usernames in these tweets and the accompanying metadata of any account that posted a tweet (_i.e._ display name, location, description, URLs, pinned tweet text). Mastodon usernames take the form @[email protected] and [https://example.com/@alice](https://example.com/@alice), where alice is a username and example.com is an instance. To map a Mastodon handle to a Twitter account, we do this search in a hierarchical fashion: We first look for Mastodon usernames in user metadata (_e.g._ bio) and create a mapping between Twitter account & Mastodon account if one is found. If the search is unsuccessful at the first step, we then look for Mastodon usernames in the tweet text. To ensure mapping accuracy, we only map a Twitter account to a Mastodon account identified from a tweet text if both the Twitter and Mastodon usernames are identical. Using this methodology, we identify the Mastodon accounts of 136,009 Twitter users, which are created across 2,879 unique Mastodon instances. We find that 72% of Twitter users that migrated created a Mastodon account with the same username that they use on Twitter. 4% of the Twitter users who create a Mastodon account, have a (legacy) verified status (_i.e._ authentic, notable, and active) on Twitter, suggesting that even well-established users have been migrating. While we track and analyze the migration of a large number of Twitter users (136,009), the takeover of Twitter by Musk, is likely to have pushed even more users to migrate who the above methodology cannot identify. Indeed, on November 12, 2022, Mastodon announced that over 1 million users had registered since October 27, Figure 3: Weekly activity on Mastodon instances. Figure 2: Temporal distribution of tweets containing (i) links to Mastodon instances and (ii) migration related keywords/ hashtags. 2022 [16], significantly more than our methodology identifies. To understand the wider activities on the Mastodon instances, we cross-check the new registrations on the 2,879 instances by crawling their weekly activity from Mastodon's Weekly Activity Endpoint.2 Figure 3 shows the weekly number of registrations, logins and statuses. We notice a large increase in all three activity metrics after the Twitter acquisition. Of course, we cannot confirm that all these users migrated directly from Twitter. However, given the timeline of registrations, we believe that it is very likely that a large share of these new users migrated from Twitter. Footnote 2: [https://docs.joinmastodon.org/methods/instance/#activity](https://docs.joinmastodon.org/methods/instance/#activity) ### Twitter and Mastodon Timelines. We next crawl both the Twitter and Mastodon timelines of the migrating users identified in the previous section. We use Twitter's Search API and Mastodon's Account's Statuses Endpoint.3 For each user, we crawl all tweets/s-tauses from October 01, 2022 to November 30, 2022. In total, we gather Twitter timelines for 94.88% of the users. The rest were suspended (0.08%), deleted/deactivated (2.26%), or the tweets were protected (2.78%). We crawl the timelines of 79.22% of Mastodon users: the rest have either not posted a single status (9.20%) or their instances were down at the time of crawl (11.58%). In total, we gather 16,163,600 tweets, and 5,746,052 Mastodon states. Footnote 3: [https://docs.joinmastodon.org/methods/accounts/#statuses](https://docs.joinmastodon.org/methods/accounts/#statuses) ### Followees We also crawl the user's followees for both Twitter and Mastodon accounts. We use the Twitter Follows API4 and the Mastodon Account's Following Endpoint5 respectively. Due to the rate limitations of the Twitter's API we crawl a sub-sample of 10% of the migrated users. For representativity, our sample relies on the followees distribution takes 5% from above the median value and 5% from below. In total, we gather followee data for 13,068 users. This covers 11,453,484 followee relationships. Footnote 4: [https://api.twitter.com/2/users/:id/following](https://api.twitter.com/2/users/:id/following) Footnote 5: [https://docs.joinmastodon.org/methods/accounts/#following](https://docs.joinmastodon.org/methods/accounts/#following) This covers 11,453,484 followee relationships. ### Ethical Considerations The datasets in this paper include both user and post information, and therefore might have privacy implications. To overcome any data mishandling, we have exclusively collected the publicly available data following well-established ethical procedures for social data. We have obtained a waiver from the ethics committee at the author's institution.6 We anonymize the data before use and store it in a secure silo. Upon acceptance of the paper, anonymized data will be made available to the public, which we hope will help further works. Footnote 6: anonymised for double-blind submission created after. Interestingly, not all the Mastodon accounts advertised on Twitter in response to Elon Musk's acquisition are new though. 21% of the Mastodon accounts were created before the Musk's takeover. Despite Mastodon's decentralization efforts, we observe a clear trend towards centralization: a large number of users migrate to a small set of instances. In particular, mastodon.social, a flagship Mastodon instance operated by Mastodon gGmbH, receives the largest fraction of migrated Twitter users. We next explore the pressure towards Mastodon centralization by comparing the percentage of migrated users with the percentage of instances they join. Figure 5 plots the distribution of users across the top % of instances. We find that nearly 96% of users join the top 25% of largest instances (w.r.t number of users). This centralization trend implies that a small number of instance owners, administrators and moderators have an impact on a large fraction of migrated users. Arguably, this means that Mastodon is at risk of becoming another (semi-)centralized service. One possible explanation for this trend is that users intend to build a large social network by joining large and well-known instances. We therefore examine the relationship between the size of instances and social networks by analyzing the number of followers and followees of users joining different-sized instances. We analyze all migrated users who join Mastodon after the Twitter acquisition and have 30 days old account (to ensure a fair comparison). This covers 50.59% of all migrated users. We divide the instances based on quantiles w.r.t number of users. Figure 6 presents the distribution of instances by the number of users, CDFs of the number of followers, followees, and statuses of users on different-sized instances. Contrary to our hypothesis, users in the bigger instances tend to have smaller social networks. 13.16% of instances have just one user, who tends to have more followers, followees, and statuses than users in more populated instances. Paradoxically, the single user instances, have 64.88% more followers, follows 99.04% more users, and posts 121.14% more statuses (on average) than the users of the bigger instances. This implies that the size of an instance has a limited impact on the size of a user's social network. Rather it mainly depends on the user's activeness, engagement and networking. Hence, while large instances have more users, small instances attract more active users. Manual inspection suggests that this is because smaller instances have more dedicated and proactive user bases (whereas larger ones accumulate more experimental users). ## 5 RQ 2: Social Network Influence There are at least two possible reasons for platform migration from Twitter to Mastodon, particularly after the Musk takeover: (_i_) A user might have decided to move for ideological reasons, if they disagree with Musk's actions after he gained control of Twitter; and (_ii_) A user might have decided to move because a large fraction of the accounts they follow moved (and therefore Twitter has become irrelevant as a platform for them). Of course, these two reasons are not contradictory or mutually exclusive. In this section, we attempt to distinguish between these reasons based on the observation that if a user moves because their immediate social network moves, a large proportion of their ego network neighbourhood would also have moved with them. We argue this offers an interesting example of social contagion. ### Twitter vs. Mastodon Social Network We first analyze the size of the social network (_i.e._ number of followers & followees) that the migrated users have on both Twitter and Mastodon. Figure 7 plots the CDF of the number of followers and followees of migrated users on both platforms. The median followers and followees that migrated users have on Twitter are 744 and 787, respectively. Just 152 users (0.11% of total migrated) have no Twitter followers, and 465 (0.35% of total migrated) have Figure 5: Percentage of users on top 25% instances (w.r.t number of users). no Twitter followees. In contrast, on Mastodon, 6.01% of users have no followers, and 3.6% do not follow anyone. The median followers and followees on Mastodon were 38 and 48, respectively. Interestingly, 1.65% of migrated users gained a median of 33 _more_ followers on Mastodon than their Twitter followers. This confirms that these new users are yet to bootstrap a significant social network on Mastodon. However, we emphasize that the median age of migrated accounts on Twitter is 11.5 years, in contrast to just 35 days on Mastodon. Hence, due to these disproportionate ages, the size of the social networks on the two platforms are not directly comparable. ### Social Network Driven Migration We next conjecture that a user's (Twitter) social network may have an impact on their likelihood of migration. For example, if a user's friends migrate to Mastodon, this may encourage them to do the same. To inspect this, we analyze the followees data from both Twitter and Mastodon for 10% of the migrated users (see SS3.3). Figure 8 shows CDFs of the fraction of Twitter followees of each migrated user that (_i_) moved to Mastodon (blue); (_ii_) moved to Mastodon before the user (orange); and (_iii_) moved to the same Mastodon instances as the user (green). We notice that just 5.99% of each user's followees also migrate (on average). In fact, for 3.94% of the migrated users, none of their Twitter followees move to Mastodon. Thus, the majority of the social network of the migrated users seems indeed reluctant to migrate, and sometimes they are the first in taking this Figure 8: CDFs of the fraction of Twitter followees of each migrated user that (i) moved to Mastodon (blue) (ii) moved to Mastodon before the user (orange) and (iii) moved to the same instances on Mastodon as the user (green). Figure 6: (a) Distribution of instances w.r.t to number of users. (b) CDF of number of followers of users on different-sized instances. (c) CDF of number of followees of users on different-sized instances. (d) CDF of number of statuses of users on different-sized instances. Figure 7: CDF of number of followers and followees of migrated users on Twitter and Mastodon. step. To better understand this, we compare the date on which each migrated user joined Mastodon with that of their Twitter followees who migrated as well. We find that, out of their social network (_i.e._ their followees), 4.98% of the migrated users were the first and 4.58% were the last to migrate from Twitter to Mastodon. On average, 45.76% of the followees of a user migrated to Mastodon before the user actually did. We are also curious to understand if users select the same Mastodon instance as their social network. We therefore compare the instance of each migrated user with that of its Twitter followees. On average, 14.72% of each migrated user's followees (that move to Mastodon) join the same instance. With 15K+ Mastodon instances, this is a considerable proportion, suggesting a clear network effect. However, we also notice that this average is highly impacted by one flagship instance: mastodon.social. This is the largest instance available, and is probably the best known. Of all the migrated users whose Twitter followees move to the same instance, 30.68% are on mastodon. social. That said, we also find small instances that attract significant proportions of a given user's Twitter followers. For example, 4.5% of the migrated users whose Twitter followees join them on the same instance are on mastodon.gamedev.place (a Mastodon server focused on game development and related topics) ### Instance Switching A unique feature of Mastodon is that users can easily'switch' instance. This involves migrating their data from one instance to another. We are curious to see if this is also driven by network effects. Overall, 4.09% of the users have switched from the Mastodon instance they initially created an account on (hereinafter first instance) to a new instance (hereinafter second instance). Curiously, 97.22% of these switches happened after Musk's Twitter takeover. This suggests that users may join initial instances, but migrate to a more suitable one once they are more experienced. Figure 9 shows the chord plot of switches from each user's first Mastodon instance to their second. A common pattern across these switches is that users move from general purpose/ flagship instances (_e.g._ mastodon.social, mastodon.online) to more topic specific instances, _e.g._ sigmoid.social (a Mastodon instance for people researching and working in Artificial Intelligence) and historians. social (a Mastodon server for people interested in history). Interestingly, we notice a strong social network influence behind these switches. Figure 10 shows the CDFs of the fraction of Twitter followees of each switched user that (_i_) moved to the first instance (blue); (_ii_) moved to the second instance (orange); and (_iii_) moved to second instance before the user (green). On average, 46.98% of each user's followees (who moved to Mastodon) at some point also join the second instance. In contrast to just 11.4% who join the first instance. Interestingly, 77.42% of each switching user's followees (on average) joined the second instance before the user. This suggests that the users switched from the first instance because a large fraction of their Twitter followees moved to the second one. ## 6 RQ3: Timelines Analysis We are next curious to understand how people use their (two) accounts after migration. Figure 9: Chord plot of switching within Mastodon instances. ### Twitter vs. Mastodon Activity We first analyze the timelines of migrated users from both Twitter and Mastodon. Figure 11 shows the number of tweets on Twitter and the number of statuses on Mastodon posted by migrated users each day from October 01, 2022 to November 30, 2022. We observe a continuous growth in user activity on Mastodon after the acquisition of Twitter. However, the activity of migrated users on Twitter do not decrease in parallel, _i.e._ our migrated users are using both their Twitter and Mastodon accounts simultaneously. We next check if people are generating identical content across both platforms or are, instead, projecting multiple 'personas'. Figure 14 plots the CDFs of the fraction of each migrated user's Mastodon statuses that are identical or similar to its tweets. We consider the Mastodon status similar to a tweet if the cosine-similarity of their sentence embeddings [20] is greater than 0.7. Surprisingly, just 1.53% of each migrated user's Mastodon statuses are identical. On average, just 16.57% of each user's Mastodon status are similar to their tweets. Instead, 84.45% of the migrated users use the two platforms to post completely different content. This suggests a mix of users, some of whom create different personas on the two platforms, and a smaller subset who mirror all their content. A potential explanation for the latter is the use of cross-posting tools. Such tools allow users to automatically mirror their Mastodon status on Twitter, and vice versa. To examine this, we compare the number of tweets posted via different sources before and after Musk's takeover in Figure 12. Naturally, the majority are posted by official Twitter clients such as the Twitter Web App. The two sources that increase most dramatically, however, are two well-known cross-posters, Mastodon-Twitter Crosposter and Moa Bridge -- by 1128.95% and 1732.26%, respectively. Of all migrated users, 5.73% use one of the two cross-posters at least once. This suggested such users see both Twitter and Mastodon as vi Figure 11: Temporal distribution of tweets and statuses posted by migrated users on Twitter and Mastodon respectively. Figure 12: Top 30 sources of tweets. Note the log scale on the y-axis. Figure 10: CDFs of the fraction of Twitter followees of each switched user that (i) moved to first instance (blue) (ii) moved to second instance (orange) and (iii) moved to second instance before the user (green). limited intention of creating multiple 'personas'. Figure 13 also plots the number of users using cross-posters over time. We see that their usage increases rapidly after Musk's takeover. The downward trend towards the end of November is likely a result of the posting issues that cross-posters faced after their posting rate limit was revoked by Twitter [21]. ### Hashtags Given that 84.45% of the migrated users post completely different content on the two platforms, we next inspect the hashtags used. This gives us a flavour of the parallel discussions taking place on Mastodon and Twitter. Figure 15 presents the top 30 most frequent hashtags used over the two platforms by the migrated users. We notice that users discuss more diverse topics on Twitter such as Entertainment (#NowPlaying, #BBC6Music), Celebrities (#BarbaraHolzer), and Politics (#StandWithUkraine, #GeneralElectionNow), whereas Mastodon seems dominated by Fediverse related discussion (#fediverse) and the migration to it (#TwitterMigration). We conjecture that we might see more diverse discussions on Mastodon once the migrated users make themselves familiar with the platform. Figure 16: CDFs of fraction of each migrated user’s toxic posts on Twitter and Mastodon. Figure 14: CDFs of fraction of each migrated user’s Mastodon statuses that are identical or similar to its tweets. Figure 13: Number of users that use cross-posting tools daily. Figure 15: Top 30 hashtags along with their frequencies on Twitter and Mastodon. ### Toxicity Analysis Moderation on Mastodon has received significant attention in recent months [5, 4]. This is because the administrators of Mastodon instances do not universally have the resources to moderate malicious content. To shed light on this, we study the extent to which toxic content is shared by migrated users on both platforms. To do this, we label all tweets and statuses using Google Jigsaw's Perspective API.7 For a given post, Perspective returns a score between 0 and 1 for its toxicity (\(0=\) non-toxic). Specifically, we use the API's TOXICITY attribute that defines toxicity as "a rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion". In the literature, 0.5 is the most common choice to threshold the perspective scores [5, 22, 17], however, higher values such as 0.8 are also used [2]. Here, we use 0.5 as a threshold and consider a post to be toxic if its toxicity score is greater than 0.5 (and vice versa). Footnote 7: [https://www.perspectiveapi.com](https://www.perspectiveapi.com) Figure 16 shows the CDFs of the fraction of each migrated user's toxic posts on Twitter and Mastodon. Overall, just 5.49% of tweets are toxic. Mastodon is substantially less toxic, with just 2.80%. On average, each user posts 4.02% toxic tweets on Twitter vs. just 2.07% toxic statuses on Mastodon. Even though the discourse is non-toxic over both platforms, we notice that 14.26% of migrated users post at least one toxic post on both the platforms. While this may not be problematic for Twitter which has its own moderation team, it might present challenges for Mastodon, where volunteer administrators are responsible for content moderation [11]. ## 7 Related Work _Decentralised Social Networks._ Many previous efforts have been made to build decentralized online social platforms. In the earliest days, there were many peer-to-peer online social networks, such as Safebook [7], PeerSoN [6], LotusNet [3], and LifeSocial.KOM [10]. However, performance and security limitations [18], limited their adoption and success. New decentralized social networks, such as Mastodon, Pleroma, Pixelfed, and PeerTube, have since emerged. In sum, these platforms are referred to as the _Fediverse_. These social network applications use ActivityPub, a W3C protocol, to implement server federation. Some recent work has looked into these new decentralized social networks. For instance, a large-scale measurement study of Mastodon [19], found centralization trends in Mastodon. Paradoxically, we found that while centralization occurs in terms of how many users are attracted to an instance, smaller instances attract more active users. Other works focus on user behavior across instances [13, 12]. Our work also touches upon the need for decentralised moderation. This has been investigated in prior work on Pleroma (another Fediverse microblog. Hassan et al identify novel challenges [11] and propose a strawman solution. Zia et al. [5] also propose model sharing solution to help automate moderation. Our work confirms the presence of toxic content in Mastodon, though the numbers identified do not show a trend towards greater toxicity than Twitter. _Social Network Migration._ There have been a number of measurement studies on social network migration. For example, [8] measured the migration activity of random, tracking migrating users and the reasons behind their migration. The authors find that policy and value-based aspects are determinant in the migration. Gerhart et al. [9] analyze user migration from traditional social networks to anonymous social networks perspective. They identify that social norms drive migration. Otala et al. [14] study the migration of Twitter users to Parler. The results show that, although Parler is not widely used, it has a significant impact on political polarization. Our work also studies the migration of Twitter users. However, to the best of our knowledge, it is the first to systematically measure and analyze the migration of users from centralised Twitter to a decentralised platform. ## 8 Conclusion In this paper, we have explored the migration of users from Twitter to Mastodon, prompted by Elon Musk's acquisition of Twitter. We have focused on three RQs: (_i_) How are new users spread across Mastodon instances, and are there any consequences for decentralization? (_ii_) How much (if at all) does a user's ego-centric Twitter network influence their migration to Mastodon? (_iii_) What are usage patterns of migrated users across both plat forms? To answer **RQ1**, we have found that 2.26% of users completely left Twitter, deleting their account. Despite Mastodon's decentralized architecture, we found that the 25% largest instance on Mastodon contains 96% of the users. Paradoxically, while larger instances attract more users, smaller ones attract more active users, reinforcing Mastodon's decentralization. To answer **RQ2**, we showed that the size of the Mastodon instance had limited effect on the size of the user's social network. We observed the impact of social network in migration, with an average of 14.72% of Twitter followees per user migrating to the exact same Mastodon instance as user. To answer **RQ3**, we found that users tend to post _different_ content across platforms. On average, only 1.53% of Mastodon posts per user were identical to Twitter. In terms of toxicity, most of the user's content on both platforms was non-toxic. Mastodon appears to be less toxic than Twitter though. Overall, just 5.49% of tweets and 2.80% of statuses posted by migrated users on Twitter and Mastodon respectively were toxic. There are a number of lines of future works. We would like to further investigate whether migrating users retain their Mastodon accounts or return to Twitter, and whether new users are joining the migration wave. It will be interesting to see what the future holds for these user-driven centralized Mastodon instances. This study provides the first step in the migration of Twitter to Mastodon. We hope that it will inspire further exploration of follow-up work.
2309.07824
A Floer-theoretic interpretation of the polynomial representation of the double affine Hecke algebra
We construct an isomorphism between the wrapped higher-dimensional Heegaard Floer homology of $\kappa$-tuples of cotangent fibers and $\kappa$-tuples of conormal bundles of homotopically nontrivial simple closed curves in $T^*\Sigma$ with a certain braid skein group, where $\Sigma$ is a closed oriented surface of genus $> 0$ and $\kappa$ is a positive integer. Moreover, we show this produces a (right) module over the surface Hecke algebra associated to $\Sigma$. This module structure is shown to be equivalent to the polynomial representation of DAHA in the case where $\Sigma=T^2$ and the cotangent fibers and conormal bundles of curves are both parallel copies.
Eilon Reisin-Tzur
2023-09-14T16:17:08Z
http://arxiv.org/abs/2309.07824v1
A Floer-theoretic interpretation of the polynomial representation of the double affine Hecke algebra ###### Abstract. We construct an isomorphism between the wrapped higher-dimensional Heegaard Floer homology of \(\kappa\)-tuples of cotangent fibers and \(\kappa\)-tuples of conormal bundles of homotopically nontrivial simple closed curves in \(T^{*}\Sigma\) with a certain braid skein group, where \(\Sigma\) is a closed oriented surface of genus \(>0\) and \(\kappa\) is a positive integer. Moreover, we show this produces a (right) module over the surface Hecke algebra associated to \(\Sigma\). This module structure is shown to be equivalent to the polynomial representation of DAHA in the case where \(\Sigma=T^{2}\) and the cotangent fibers and conormal bundles of curves are both parallel copies. This work partially supported by NSF grant DMS-2003483. ###### Contents * 1 Introduction * 2 Review of HDHF, wrapped HDHF, and conormal boundary conditions * 2.1 Review of HDHF * 2.2 Wrapped Floer theory of conormal bundles * 2.3 Wrapped HDHF * 2.4 Wrapped HDHF example * 3 Path space and wrapped HDHF * 3.1 Dual Lagrangian formulation and perturbed geodesics * 3.2 Path space and wrapped Floer homology * 3.3 Unordered configuration space and wrapped HDHF * 4 HDHF with conormal boundary conditions as a braid skein algebra module * 4.1 The braid skein algebra of a surface * 4.2 Hecke algebra realization in HDHF * 4.3 The parameter \(c\) * 4.4 The proof of Theorem 1.1, \(\kappa=1\) case * 4.5 The proof of Theorem 1.1, general case * 5 Geometric realization of \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{\kappa}N^{*}\alpha_ {i})_{c}\) as a module over the braid skein algebra * 5.1 Algebraic action * 5.2 Geometric action * 5.3 Equivalence of DAHA modules * 6 The enhanced polynomial representation * 6.1 Double affine Hecke algebra and its polynomial representation * 6.2 The enhanced polynomial representation ## 1. Introduction Higher-dimensional Heegaard Floer homology (HDHF) was developed by Colin, Honda, and Tian in [13] to analyze symplectic fillability questions in higher-dimensional contact topology. As its name suggests, it is very closely related to Heegaard Floer homology introduced by Ozsvath and Szabo in [11] to study closed oriented 3-manifolds. HDHF models the Fukaya category of the Hilbert scheme of points on a Liouville domain and has been used to produce an invariant of links in \(S^{3}\). In [12], Honda, Tian, and Yuan constructed isomorphisms between the wrapped HDHF of cotangent fibers of cotangent bundles of closed oriented surfaces \(\Sigma\) with positive genus to Hecke algebras (\(H_{\kappa}(\Sigma)\)) associated with the \(\Sigma\). In particular, they showed that the wrapped HDHF of cotangent fibers of \(T^{*}T^{2}\) is isomorphic to the double affine Hecke algebra (DAHA) \(\vec{H}_{\kappa}\) introduced by Cherednik in [1] for his proof of Macdonald's conjectures. Using results by Morton and Samuelson in [15], Honda, Tian, and Yuan were able to provide a symplectic geometry (Floer-theoretic) interpretation of DAHA and various other Hecke algebras. The goal of this paper is to build on [12] to provide a symplectic geometry interpretation of the polynomial representation of DAHA, closely related to Cherednik's basic representation. Let \(\kappa\) be a positive integer and fix \(\kappa\) distinct points \(q_{1},\ldots,q_{\kappa}\in\Sigma\). Given the isomorphism between the wrapped HDHF \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma)_{c}\) of cotangent fibers of \(T^{*}\Sigma\) with an additional parameter \(c\) and the surface Hecke algebra tensor product \(H_{\kappa}(\Sigma)\otimes\mathbb{Z}[[\hbar]]\), there exists a functor from the HDHF Fukaya category (with parameter \(c\)) to the category of (right) \(\hbar\)-deformed \(H_{\kappa}(\Sigma)\)-modules which sends mutually disjoint Lagrangians \(L_{1},\ldots,L_{\kappa}\) to \(\operatorname{Hom}(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{ \kappa}L_{i})\). This paper aims to shed more light on this functor by giving it a more explicit topological interpretation. Abbondandolo, Portaluri, and Schwarz proved in [1] that Floer homology with conormal boundary conditions is isomorphic to the singular homology of the natural path space associated to the boundary conditions. In the case of cotangent fibers, Abouzaid improved this to an \(A_{\infty}\)-equivalence on the chain level in [1]. We discuss the generalization of these results to HDHF. More precisely, we define the wrapped HDHF cochain complex between \(\kappa\)-tuples of mutually disjoint conormal bundles. Restricting to the manifold \(T^{*}\Sigma\) and conormal bundles \(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{\kappa}N^{*}\alpha_{i}\), where \(\alpha_{1},\ldots,\alpha_{\kappa}\) is a mutually disjoint collection of homotopically nontrivial simple closed curves in \(\Sigma\), we find that the wrapped HDHF is concentrated in degree zero. Our generalization of the path space in [1] is the path space of the unordered configuration space \(\operatorname{UConf}_{\kappa}(\Sigma)\) of \(\kappa\) points on \(\Sigma\) satisfying boundary conditions to ensure the paths go between our conormal bundles. Following [1] and [12], we define an evaluation map \[\mathcal{E}:CW(\sqcup_{i=1}^{\kappa}\phi_{H_{V}}^{1}(T_{q_{i}}^{*}\Sigma), \sqcup_{i=1}^{\kappa}N^{*}\alpha_{i})\longrightarrow C_{0}(\Omega( \operatorname{UConf}_{\kappa}(\Sigma),\boldsymbol{q},\boldsymbol{\alpha})) \otimes\mathbb{Z}[[\hbar]],\] where \(\boldsymbol{q}=\{q_{1},\ldots,q_{\kappa}\}\in\operatorname{UConf}_{\kappa}(\Sigma)\), \(\boldsymbol{\alpha}=\alpha_{1}\times\cdots\times\alpha_{\kappa}\), and \(\Omega(\operatorname{UConf}_{\kappa}(\Sigma),\boldsymbol{q},\boldsymbol{ \alpha})\) is the space of paths in \(\operatorname{UConf}_{\kappa}(\Sigma)\) starting at \(\boldsymbol{q}\) and ending in \(\boldsymbol{\alpha}\). Here we are viewing an element \((x_{1},\ldots,x_{\kappa})\in\boldsymbol{\alpha}\) as an unordered tuple; this is possible since the \(\alpha_{1},\ldots,\alpha_{\kappa}\) are mutually disjoint. Taking homotopy classes of paths in \(C_{0}(\Omega(\operatorname{UConf}_{\kappa}(\Sigma),\boldsymbol{q},\boldsymbol {\alpha}))\) and quotienting by the HOMFLY skein relation produces a map \[\mathcal{F}:CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{\kappa}N^ {*}\alpha_{i})\longrightarrow BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol {\alpha}),\] where \(BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha})\) is a variant of the braid skein algebra introduced by Morton and Samuelson in [15]. Informally, \(BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha})\) consists of homotopy classes of braids which start at \(\{q_{1},\ldots,q_{\kappa}\}\) and end in \(\alpha_{1}\times\cdots\times\alpha_{\kappa}\), modulo the HOMFLY skein relation. Morton and Samuelson also define the braid skein algebra on a punctured surface, leading to our variant \(BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*)\), defined in Section 4.3. The puncture \(*\in\Sigma\) gives rise to a marked point relation which in turn is interpreted as a \(c\)-deformed homotopy relation on our braids. Adding this marked point into our formulation, we get a map \[\mathcal{F}:CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{\kappa}N^{* }\alpha_{i})_{c}\longrightarrow BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{ \alpha},*).\] The first main result of this paper is Theorem 1.1, proved in Section 4.5 : **Theorem 1.1**.: \(\mathcal{F}\) _is an isomorphism._ With this in mind, we show that the action of \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma)_{c}\) on \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{\kappa}N^{*}\alpha_ {i})_{c}\) agrees with the action of the braid skein algebra \(BSk_{\kappa}(\Sigma,\boldsymbol{q},*)\) on \(BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*)\). **Lemma 1.2**.: _Let \(\alpha_{1},\dots,\alpha_{\kappa}\) be parallel copies of the meridian on \(T^{2}\). Then_ \[BSk_{\kappa}(T^{2},\boldsymbol{q},\boldsymbol{\alpha},*)\simeq(\mathbb{Z}[a_{ 1}^{\pm 1},\dots,a_{\kappa}^{\pm 1}]\otimes\mathbb{Z}[S_{\kappa}])\otimes \mathbb{Z}[c^{\pm 1}]\otimes\mathbb{Z}[[\hbar]].\] For the configuration of points \(q_{i}\) and curves \(\alpha_{i}\) as in Lemma 1.2, we introduce an extra homological variable \(d\) which keeps track of sliding the ends of the braids past each other on the \(\alpha_{i}\); see Definition 6.6. The next main result is obtained after setting \(d=s\) and relating \((\mathbb{Z}[a_{1}^{\pm 1},\dots,a_{\kappa}^{\pm 1}]\otimes\mathbb{Z}[S_{\kappa}]) \otimes\mathbb{Z}[c^{\pm 1}]\otimes\mathbb{Z}[[\hbar]]\otimes\mathbb{Z}[d^{\pm 1}]\) to \(\mathbb{Z}[[s]][c^{\pm 1}][X_{1},\dots,X_{\kappa}]\) by averaging over the permutation components: **Theorem 1.3**.: _The action of \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}T^{2})_{c}\) on \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}T^{2},\sqcup_{i=1}^{\kappa}N^{*}\alpha_ {i})_{c,d}\) agrees with the polynomial representation of \(\tilde{H}_{\kappa}\) on \(\mathbb{Z}[[s]][c^{\pm 1}][X_{1},\dots,X_{\kappa}]\) after setting \(d=s\) and averaging the permutation components of \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}T^{2},\sqcup_{i=1}^{\kappa}N^{*}\alpha_ {i})_{c,d}\)._ _Organization_: In Section 2, we give a brief review of HDHF, wrapped HDHF, and conormal boundary conditions. In Section 3, we introduce the results of Abbondandolo and Schwarz before generalizing to the case \(\kappa\geq 1\). We give a summary of [HTY] in Section 4, along with the proof of Theorem 1.1. Section 5 discusses the action of \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma)_{c}\) on \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{\kappa}N^{*}\alpha_ {i})_{c}\) in the general case, before specializing to \(T^{2}\) and the curves \(\alpha_{i}\) in Section 6, where we prove Lemma 1.2 and Theorem 1.3. **Acknowledgements.** I would like to thank Ko Honda for nearly countless discussions and guidance through this project. I would also like to thank Tianyu Yuan for helpful discussions and suggestions and Peter Samuelson for discussions which motivated this study. ## 2. Review of HDHF, wrapped HDHF, and conormal boundary conditions We give a brief summary of HDHF in Section 2.1 before defining the specific wrapped version of interest in Section 2.2. ### Review of HDHF We refer the reader to [1] for more details regarding HDHF. **Definition 2.1**.: Let \((X,\alpha)\) be a \(2n\)-dimensional completed Liouville domain and let \(\omega=d\alpha\) be the exact symplectic form on \(X\). The objects of the \(A_{\infty}\)-category \(\mathcal{F}_{\kappa}(X)\) are \(\kappa\)-tuples of disjoint exact Lagrangians. The morphisms \(\operatorname{Hom}_{\mathcal{F}_{\kappa}(X)}(L_{0},L_{1})=CF(L_{0},L_{1})\) between two such objects \(L_{i}=L_{i1}\sqcup\dots\sqcup L_{\kappa}\), \(i=0,1\), with mutually transverse components is the free abelian group generated by \(\kappa\)-tuples of intersections where each component is used exactly once. That is, the generators are \(\boldsymbol{y}=\{y_{1},\dots,y_{\kappa}\}\) where \(y_{j}\in L_{0j}\cap L_{1\sigma(j)}\) for some permutation \(\sigma\) of \(\{1,\dots,\kappa\}\). The coefficient ring is \(\mathbb{Z}[[\hbar]]\) and the \(A_{\infty}\)-operations \(\mu^{m}\) will be defined by (2.2). Similarly to the cylindrical reformulation of Heegaard Floer homology by Lipschitz [11], we introduce an extra direction to keep track of points in the symmetric product of \(X\). Let \(D\) be the unit disk in \(\mathbb{C}\) and \(D_{m}=D-\{p_{0},\dots,p_{m}\}\) be the disk with \(m\) boundary punctures arranged counterclockwise. Let \(\partial_{i}D_{m}\) be the boundary component from \(p_{i}\) to \(p_{i+1}\), with \(\partial_{m}D_{m}\) going from \(p_{m}\) to \(p_{0}\). We choose representatives of the moduli space of \(D_{m}\) modulo automorphisms and label these representatives \(D_{m}\), for lack of a better name. The \(A_{\infty}\)_-base direction_\(D_{m}\) is shown in Figure 1. Consider the manifold \(\tilde{X}=(D_{m}\times X,\omega_{m}+\omega)\), where \(\omega_{m}\) is an area form on \(D_{m}\) which restricts to \(ds_{i}\wedge dt_{i}\) on each strip-like end \(e_{i}\) near \(p_{i}\). We take \(s_{0}\to-\infty\) as we approach the _negative end_\(p_{0}\) and \(s_{i}\to+\infty\) for the other punctures refered to as the _positive ends_. Given \(m+1\) objects \(L_{0},\ldots,L_{m}\), we let \(\tilde{L}_{i}=\partial_{i}D_{m}\times L_{i}\). We denote by \(\pi_{X}:D_{m}\times X\to X\) the projection onto X and by \(\pi_{D_{m}}:D_{m}\times X\to D_{m}\) the symplectic fibration of the base. There is a smooth assignment \(D_{m}\mapsto J_{D_{m}}\) of almost complex structures \(J_{D_{m}}\) that are close to a split almost complex structure \(j_{m}\times J_{X}\) and which project holomorphically onto \(D_{m}\). For more details, the reader is referred to [1] or [14]. _Remark 2.2_.: We call an assignment of almost complex structures _sufficiently generic_ if all of the moduli spaces under consideration are transversely cut out. Let \(\mathcal{M}(\mathbf{y}_{1},\ldots,\mathbf{y}_{m},\mathbf{y}_{0})\) be the moduli space of maps \[u:(\dot{F},j)\longrightarrow(D_{m}\times X,J_{D_{m}}),\] where \((F,j)\) is a compact Riemann surface with boundary, \(\mathbf{p}_{0}\),..., \(\mathbf{p}_{m}\) are disjoint \(\kappa\)-tuples of boundary punctures of \(F\), and \(\dot{F}=F\setminus\cup_{i}\mathbf{p}_{i}\), such that \(u\) satisfies: \[\begin{cases}du\circ j=J_{D_{m}}\circ du;\\ \text{each component of }\partial\dot{F}\text{ is mapped to a unique }\tilde{L}_{ij};\\ \pi_{X}\circ u\text{ approaches }\mathbf{y}_{i}\text{ as }s_{i}\to+\infty\text{ for }i=1,\ldots,m;\\ \pi_{x}\circ u\text{ tends to }\mathbf{y}_{0}\text{ as }s_{0}\to-\infty;\\ \pi_{D_{m}}\circ u\text{ is a }\kappa\text{-fold branched cover of }D_{m}.\end{cases} \tag{2.1}\] Letting the boundary punctures \(\mathbf{p}_{0},\ldots,\mathbf{p}_{m}\) vary, the \(A_{\infty}\) composition map \[\mu^{m}:CF(L_{m-1},L_{m})\otimes\cdots\otimes CF(L_{0},L_{1})\longrightarrow CF (L_{0},L_{m})\] is then defined as \[\mu^{m}(\mathbf{y}_{1},\ldots,\mathbf{y}_{m})=\sum_{\mathbf{y}_{0},\chi\leq\kappa}\# \mathcal{M}^{ind=0,\chi}(\mathbf{y}_{1},\ldots,\mathbf{y}_{m},\mathbf{y}_{0})\cdot\hbar^{ \kappa-\chi}\cdot\mathbf{y}_{0}, \tag{2.2}\] where \(\chi\) is the Euler characteristic of \(\dot{F}\) and \(\#\) is the signed count of the moduli space. **Theorem 2.3**.: _The Fredholm index of \(\mathcal{M}^{\chi}(\mathbf{y}_{1},\ldots,\mathbf{y}_{m},\mathbf{y}_{0})\) with a varying complex structure on \(D_{m}\) is_ \[ind(u)=(n-2)\chi+\mu+2\kappa-m\kappa n+m-2,\] _where \(\mu\) is the Maslov index of \(u\). We refer the reader to [1] for details on a similar formula._ Figure 1. The \(A_{\infty}\) base _Remark 2.4_.: In the case where \(2c_{1}(TX)=0\) and the Maslov classes of the involved Lagrangians vanish, we have that \(|\hbar|=2-n\). ### Wrapped Floer theory of conormal bundles We give a quick summary of wrapped Floer homology of conormal bundles; more details can be found in [1]. Let \(M\) be a closed manifold and \(T^{*}M\) be its cotangent bundle. Denoting the elements of \(T^{*}M\) as pairs \((q,p)\in M\times T^{*}_{q}M\), let \(\omega=dp\wedge dq\) be the the standard symplectic form on \(T^{*}M\). Furthermore, let \(\eta\) be the Liouville vector field satisfying \(\mathcal{L}_{\eta}\omega=\omega\). Given a time-dependent Hamiltonian \(H:[0,1]\times T^{*}M\to\mathbb{R}\), let \(X_{H}\) be the unique vector field such that \(-dH(Y)=\omega(X_{H},Y)\) for any vector field \(Y\) on \(T^{*}M\). We look for solutions \(x:[0,1]\to T^{*}M\) of the non-local boundary value Hamiltonian equation \[x^{\prime}(t)=X_{H}(t,x(t)),\;x(0)\in L_{0},\;x(1)\in L_{1}, \tag{2.3}\] where \(L_{0}\) and \(L_{1}\) are Lagrangian submanifolds of \(T^{*}M\), not of \(M\). With this in mind, we consider smooth Hamiltonians \(H\) on \([0,1]\times T^{*}M\) such that: 1. every solution \(x\) of the non-local boundary value Hamiltonian problem is nondegenerate 2. there exist \(h_{0}>0\) and \(h_{1}\geq 0\) such that \[DH(t,q,p)[\eta]-H(t,q,p)\geq h_{0}|p|^{2}-h_{1},\] for every \((t,q,p)\in[0,1]\times T^{*}M\) 3. there exists an \(h_{2}\geq 0\) such that \[|\nabla_{q}H(t,q,p)|\leq h_{2}(1+|p|^{2}),\;|\nabla_{p}H(t,q,p)\leq h_{2}(1+|p |)|,\] for every \((t,q,p)\in[0,1]\times T^{*}M\), where \(\nabla_{q}\) and \(\nabla_{p}\) denote the horizontal and vertical components of the gradient, respectively. Condition (H0) holds for a generic choice of \(H\) in the space that we are considering. Conditions (H1) and (H2) ensure that \(H\) grows quadratically on the fibers of \(T^{*}M\) and is radially convex for large \(|p|\). We reframe our non-local boundary value Hamiltonian problem (2.3) as is done in [1]. Let \(Q\) be a submanifold of \(M\times M\) which is either compact or has cylindrical ends, meaning it is the union of a compact submanifold and submanifolds of the form \(K\times[0,\infty)\), where \(K\) is a Legendrian. Let \(N^{*}Q\) denote the _conormal bundle_ of \(Q\), i.e. the set of covectors \((q,p)\) in \(T^{*}(M\times M)\) such that \(p\in T^{*}_{q}(M\times M)\) vanishes identically on \(T_{q}Q\). The conormal bundle \(N^{*}Q\) is a Lagrangian submanifold of \(T^{*}(M\times M)\) and so is a natural candidate for investigation. _Remark 2.5_.: Given two Lagrangians \(L_{0}\) and \(L_{1}\) in \(T^{*}M\), it is not necessarily true that \(L_{0}\times L_{1}\) is the conormal bundle of some submanifold \(Q\subset M\times M\). On the other hand, not every \(N^{*}Q\) can be written as \(L_{0}\times L_{1}\) for Lagrangians \(L_{0},L_{1}\in T^{*}M\). Therefore, the problem we are tackling neither contains nor is contained in the original formulation above. For the purposes of this paper, we will be interested in Lagrangians \(L_{i}\) of \(T^{*}M\) which are conormal bundles of submanifolds \(Q_{i}\) in \(M\). Under these conditions, this new formulation is equivalent to the first one since \(N^{*}(Q_{1}\times Q_{2})=L_{1}\times L_{2}\). Specifically, we will be interested in the case where \(M\) is a closed oriented surface of positive genus and \(Q=\{\text{pt}\}\times\{\text{pt}\}\) or \(Q=\{\text{pt}\}\times\alpha\) for a homotopically nontrivial simple closed curve \(\alpha\). Let \(H\) be a time-dependent Hamiltonian on \(T^{*}M\) satisfying (H0), (H1), and (H2) and consider the set of solutions, \(\mathcal{P}^{Q}(H)\), to the Hamiltonian equation with _conormal boundary conditions_. That is, \(\mathcal{P}^{Q}(H)\) is the set of \(x:[0,1]\to T^{*}M\) satisfying \[x^{\prime}(t)=X_{H}(t,x(t)),\] subject to the boundary conditions \[(x(0),\mathcal{C}x(1))\in N^{*}Q,\] where \(\mathcal{C}\) is the anti-symplectic involution \(T^{*}M\to T^{*}M\), \((q,p)\mapsto(q,-p)\). Note that if \(Q=Q_{1}\times Q_{2}\in M\times M\), then \(\mathcal{P}^{Q}(H)\) is the set of trajectories from \(Q_{1}\) to \(Q_{2}\) along our Hamiltonian vector field \(X_{H}\). Let \(\theta\) be the Liouville one-form on \(T^{*}M\) and consider the Hamiltonian action functional given by \[\mathbb{A}_{H}(x):=\int x^{*}(\theta-Hdt).\] The first variation of \(\mathbb{A}_{H}(x)\) on the space of free paths is \[d\mathbb{A}_{H}(x)[\xi]=\int_{0}^{1}\omega(\xi,x^{\prime}(t)-X_{H}(t,x))dt+ \theta(x(1))[\xi(1)]-\theta(x(0))[\xi(0)],\] where \(\xi\) is a section of \(x^{*}(TT^{*}M)\). Since \(\theta\) vanishes on the conormal bundle of every submanifold of \(M\), the extremal curves of \(\mathbb{A}_{H}(x)\) are precisely the elements of \(\mathcal{P}^{Q}(H)\). This is the reason we set this up with conormal bundles. The conditions (H0), (H1), and (H2) that we imposed on our Hamiltonian \(H\) imply that the set of solutions \(x\in\mathcal{P}^{Q}(H)\) such that \(\mathbb{A}_{H}(x)\leq A\) is finite. Given a smoothly time-dependent \(\omega\)-compatible almost complex structure \(J\) on \(T^{*}M\), we consider the Floer equation \[\partial_{s}u+J(t,u)(\partial_{t}u-X_{H}(t,u))=0, \tag{2.4}\] where \(u:\mathbb{R}\times[0,1]\to T^{*}M\). Let \(x^{-},x^{+}\in\mathcal{P}^{Q}(H)\) and denote by \(\mathcal{M}(x^{-},x^{+})\) the set of all solutions of the Floer equation (2.4) with the non-local boundary condition such that \[\lim_{s\to\pm\infty}u(s,t)=x^{\pm}(t),\;\forall t\in[0,1].\] Then one can show that we have an energy identity \[E(u):=\mathbb{A}_{H}(x^{-})-\mathbb{A}_{H}(x^{+}). \tag{2.5}\] Furthermore, \(\mathcal{M}(x^{-},x^{+})\) is empty whenever \(\mathbb{A}_{H}(x^{-})\leq\mathbb{A}_{H}(x^{+})\) and \(x^{-}\neq x^{+}\) and it consists of only the element \(u(s,t)=x(t)\) when \(x^{-}=x^{+}=x\). By perturbing the almost complex structure \(J\), we can give \(\mathcal{M}(x^{-},x^{+})\) a smooth structure and its dimension is the difference of Maslov indices, \[\text{dim}\mathcal{M}(x^{-},x^{+})=\mu^{Q}(x^{-})-\mu^{Q}(x^{+}).\] It follows that when \(\mu^{Q}(x^{-})-\mu^{Q}(x^{+})=1\), we get an oriented one-dimensional manifold. Moreover, we have a free \(\mathbb{R}\)-action given by translation of the \(s\) variable and so we arrive at a compact zero-dimensional manifold \(\mathcal{M}(x^{-},x^{+})/\mathbb{R}\). Let \(\epsilon([u])\in\{-1,1\}\) be \(+1\) if the \(\mathbb{R}\)-action is orientation-preserving on the component of \(\mathcal{M}(x^{-},x^{+})\) containing \(u\), and \(-1\) otherwise. Define \[n_{F}(x^{-},x^{+}):=\sum_{[u]\in\mathcal{M}(x^{-},x^{+})/\mathbb{R}}\epsilon( [u]),\] and denote by \(F_{k}^{Q}(H)\) the free Abelian group generated by the elements \(x\in\mathcal{P}^{Q}(H)\) with Maslov index \(k\). The boundary morphism \[\partial_{k}:F_{k}^{Q}(H)\longrightarrow F_{k-1}^{Q}(H)\] is defined by \[\partial_{k}x^{-}:=\sum_{x^{+}\in\mathcal{P}^{Q}(H),\mu^{Q}(x^{+})=k-1}n_{F}( x^{-},x^{+})x^{+}.\] Since the set of elements with an upper bound on the action is finite, the above sum is finite. It can be shown that \(\partial_{k-1}\circ\partial_{k}=0\), and so \(\{F^{Q}_{*}(H),\partial_{*}\}\) is a complex of free Abelian groups, called the Floer complex of \((T^{*}M,Q,H,J)\). The Floer homology is then defined as usual from the complex. As usual, different choices of the Hamiltonian \(H\) produce chain homotopy equivalent complexes (provided the Hamiltonians are close enough). ### Wrapped HDHF We offer a different but equivalent formulation of the wrapped Floer homology defined in the previous subsection and extend it to wrapped HDHF. Let \((M,g)\) be a compact Riemmanian manifold of dimension \(n\) with the induced norm \(|\cdot|\) on \(T^{*}M\). Choose a time-dependent Hamiltonian \(H_{V}:[0,1]\times T^{*}M\to\mathbb{R}\): \[H_{V}(t,q,p)=\frac{1}{2}|p|^{2}+V(t,q),\] where \(q\in M,p\in T^{*}_{q}M\) and \(V\) is a perturbation term with small \(W^{1,2}\)-norm. This Hamiltonian satisfies the conditions mentioned in the previous subsection. Taking the standard symplectic form \(\omega=dq\wedge dp\) on \(T^{*}M\), let \(X_{H_{V}}\) be the Hamiltonian vector field and \(\phi^{t}_{H_{V}}\) be the time-\(t\) flow of \(X_{H_{V}}\). Let \(L_{0},L_{1}\) be Lagrangian submanifolds of \(T^{*}M\) with cylindrical ends. The time-1 flow, \(\phi^{1}_{H_{V}}(L_{0})\), is again a Lagrangian submanifold of \(T^{*}M\) with cylindrical ends. We define the _wrapped Floer chain complex_\(CW(L_{0},L_{1})\) of \(L_{0}\) and \(L_{1}\) to be the Floer chain complex \(CF(\phi^{1}_{H_{V}}(L_{0}),L_{1})\) of \(\phi^{1}_{H_{V}}(L_{0})\) and \(L_{1}\). **Proposition 2.6**.: _Let \(Q=Q_{1}\times Q_{2}\in M\times M\) be a product of submanifolds whose conormal bundles have cylindrical ends. Then given a Hamiltonian \(H_{V}\) satisfying conditions as before,_ \[CW(N^{*}Q_{1},N^{*}Q_{2})=F^{Q}(H_{V}).\] Proof.: In both cases, the generators are time-1 Hamiltonian chords from \(N^{*}Q_{1}\) to \(N^{*}Q_{2}\). Moreover, the differentials for both count the same 0-dimensional moduli space of pseudoholomorphic curves between generators. We now generalize the definition of wrapped Floer theory to wrapped HDHF. Consider disjoint \(\kappa\)-tuples of Lagrangians \(L_{i}=\sqcup_{i=j}^{\kappa}L_{ij}\) for \(i=1,2\). We can ensure that the Hamiltonian chords between all of the Lagrangians involved are non-degenerate by choosing \(g\) and \(V\) generically. **Definition 2.7**.: The _wrapped higher dimensional Heegaard Floer chain complex_ is given by \[CW(L_{1},L_{2}):=CF(\sqcup_{i=1}^{\kappa}\phi^{1}_{H_{V}}(L_{1i}),\sqcup_{i= 1}^{\kappa}L_{2i}).\] In the case of \(L_{1}=L_{2}\), we let \(CW(L_{1}):=CW(L_{1},L_{1})=CF(\sqcup_{i=1}^{\kappa}\phi^{1}_{H_{V}}(L_{1i}), \sqcup_{i=1}^{\kappa}L_{1i})\). The \(A_{\infty}\)-operation \[\mu^{m}:CW(L_{1})\otimes\cdots\otimes CW(L_{1})\longrightarrow CW(L_{1})\] does not immediately follow from the \(A_{\infty}\)-operations in the non-wrapped HDHF. Writing the Lagrangians out carefully, we see that the map is actually \[\mu^{m}:CF(\sqcup_{i}\phi^{1}_{H_{V}}(L_{1i}),\sqcup_{i}L_{1i})\otimes\cdots \otimes CF(\sqcup_{i}\phi^{m}_{H_{V}}(L_{1i}),\sqcup_{i}\phi^{m-1}_{H_{V}}( L_{1i}))\to CF(\sqcup_{i}\phi^{m}_{H_{V}}(L_{1i}),\sqcup_{i}L_{1i}),\] where \(i=1,\cdots,\kappa\). The subtlety is that while \(CF(\sqcup_{i=1}^{\kappa}\phi^{l}_{H_{V}}(L_{1i}),\sqcup_{i=1}^{\kappa}\phi^{ l-1}(L_{1i}))\) is naturally isomorphic to \(CF(\sqcup_{i=1}^{\kappa}\phi^{1}_{H_{V}}(L_{1i}),\sqcup_{i=1}^{\kappa}L_{1i})\) for all \(l\in\mathbb{Z}\), it is not the case for the chain complex \(CF(\sqcup_{i=1}^{\kappa}\phi^{m}_{H_{V}}(L_{1i}),\sqcup_{i=1}^{\kappa}L_{1i})\). Luckily, this is resolved by a rescaling argument outlined in [11], following [1]. Taking this one step further, given \(\kappa\)-tuples of Lagrangians \(L_{1}\) and \(L_{2}\) with cylindrical ends, we can give \(CW(L_{1},L_{2})\) the structure of a (right) \(A_{\infty}\)-module over \(CW(L_{1})\). Using a similar rescaling argument for \(d\geq 2\) we can define the \(A_{\infty}\)-maps \[\mu^{d}:CW(L_{1},L_{2})\otimes CW(L_{1})\otimes\cdots\otimes CW(L_{1})\longrightarrow CW (L_{1},L_{2}),\] giving us a (right) \(A_{\infty}\)-module structure. ### Wrapped HDHF example We perform a model calculation with \(\kappa=2\). Identify \(M=T^{2}\) with \(S^{1}\times S^{1}=\mathbb{R}/\mathbb{Z}\times\mathbb{R}/\mathbb{Z}\). Fix points \(q_{1}=(\frac{1}{6},\frac{1}{6})\), \(q_{2}=(\frac{2}{6},\frac{2}{6})\) and curves \(\alpha_{1}=\{\frac{4}{6}\}\times S^{1}\) and \(\alpha_{2}=\{\frac{5}{6}\}\times S^{1}\). Let \(L_{1}=\sqcup_{i=1}^{2}T_{q_{i}}^{*}T^{2}\) and \(L_{2}=\sqcup_{i=1}^{2}N^{*}\alpha_{i}\). Note that \(N^{*}\alpha_{i}=\alpha_{i}\times(\mathbb{R}\times\{0\})\subset T^{*}T^{2}\). The perturbation term \(V(t,q)\) can be chosen arbitrarily small and we will disregard it for the sake of this model computation. Then taking \(H_{V}(t,q,p)=\frac{1}{2}|p|^{2}\), we have that \(X_{H}=-p_{1}\partial_{q_{i}}-p_{2}\partial_{q_{2}}\) and the flow is given by \(\phi_{H}^{t}(q_{1},q_{2},p_{1},p_{2})=(q_{1}-p_{1}t,q_{2}-p_{2}t,p_{1},p_{2})\), where \((q_{i},p_{i})\) are viewed as coordinates on \(T^{*}S^{1}\). Since the cotangent fibers are based at \((\frac{i}{6},\frac{i}{6})\), the time-1 flow is given by \(\phi_{H}^{1}(\frac{i}{6},\frac{i}{6},p_{1},p_{2})=(\frac{i}{6}-p_{1},\frac{i}{ 6}-p_{2},p_{1},p_{2})\). We are interested in intersections of \(\phi_{H}^{1}(T_{q_{i}}^{*}T^{2})\) with \(N^{*}\alpha_{j}\), so we want solutions to \((\frac{i}{6}-p_{1},\frac{i}{6}-p_{2},p_{1},p_{2})\) = \((\frac{1}{2}+\frac{j}{6},a,p,0)\) for some \(a\in\mathbb{R}/\mathbb{Z},\ p\in\mathbb{R}\). Then \(p_{2}\) must be 0. (Note that with a perturbation term, this is no longer necessarily true since there will be some flow in the fiber direction, but \(p_{2}\) would be very small.) Continuing with this example, we see that \(\frac{i}{6}-p_{1}=\frac{1}{2}+\frac{j}{6}\), and so \(p_{1}=-\frac{1}{2}+\frac{i-j}{6}\). Given one such \(p_{1}\), we see that adding or subtracting 1 from \(p_{1}\) gives another intersection. Let \(\pi:T^{*}T^{2}\to T^{2}\), \((q,p)\mapsto q\), be the projection onto the zero section. Then adding (subtracting) 1 along the fiber direction \(p_{1}\) simply wraps clockwise (counterclockwise) once more around the \(S^{1}\times\{\frac{i}{6}\}\) direction on the torus before intersecting \(\alpha_{j}\). The wrapping, along with some Hamiltonian chords, is shown in Figure 2 for \(\kappa=1\). _Claim 2.8_.: The generators of \(CF(\sqcup_{i=1}^{2}\phi_{H_{V}}^{1}(T_{q_{i}}^{*}T^{2}),\sqcup_{i=1}^{2}N^{*} \alpha_{i})\) are elements of the form \((a_{1}^{n_{1}}a_{2}^{n_{2}},\sigma)\) where \(\sigma\in S_{2}\) indicates intersections of \(\phi_{H_{V}}^{1}(T_{q_{i}}^{*}T^{2})\) and \(N^{*}\alpha_{\sigma(i)}\) for \(i=1,2\) and the exponent of \(a_{i}\) indicates the number of times the intersection point, viewed as a Hamiltonian chord, wraps around the \((-1,0)\)-direction when projected onto the zero section. More generally we have the following: For \(n_{i}\in\mathbb{Z}\), let \((a_{1}^{n_{1}}\cdots a_{s}^{n_{\kappa}},\sigma)\) denote intersections between \(\phi_{H_{V}}^{1}(T_{q_{i}}^{*}T^{2})\) and \(N^{*}\alpha_{\sigma(i)}\) for \(i=1,\ldots,\kappa\) whose corresponding Hamiltonian chords wrap \(n_{i}\) times around the torus in the \((-1,0)\)-direction when projected onto the zero section. **Lemma 2.9**.: _The generators of \(CF(\sqcup_{i=1}^{\kappa}\phi_{H_{V}}^{1}(T_{q_{i}}^{*}T^{2}),\sqcup_{i=1}^{ \kappa}N^{*}\alpha_{i})\) for points \(q_{i}\) and curves \(\alpha_{i}\) as in the model case are given by_ \[\mathbb{Z}[(\prod_{i=1}^{\kappa}a_{i}^{n_{i}},\sigma)\mid\sigma\in S_{\kappa}, \;n_{i}\in\mathbb{Z}].\] Proof.: We choose coordinates \((q_{i1},q_{i2},p_{i1},p_{i2})\) on \(T_{q_{i}}^{*}T^{2}\). Ignoring the perturbation term \(V(t,q)\) for now, let \(p_{i1}>0\) be the smallest such number such that \(\phi_{H_{V}}^{1}(q_{i1},q_{i2},p_{i1},p_{i2})\cap N^{*}\alpha_{j}\neq\emptyset\). Following the calculation done in our model example, we see that every such intersection occurs at \(\phi^{1}(q_{i1},q_{i2},p_{i1}+n,0)\), for some \(n\in\mathbb{Z}\). Then we can describe the set of intersections by the form in the lemma. Adding the perturbation term has little effect on the Hamiltonian chords, and it can be chosen small enough and such that \(\frac{\partial V}{\partial q}\) is small. It follows that each intersection will occur at \(\phi^{1}(q_{i1},q_{i2},p_{i1}+n,p_{i2}(n))\) where \(p_{i2}(n)\) will be small for all \(n\). While \(n\) may not be an integer, it will be very close to one. Alternatively, one could order all such \(n\)'s, giving a bijection with \(\mathbb{Z}\). _Remark 2.10_.: One way to visualize the wrapping is to view the image of the cotangent fibers projected onto \(T^{2}\). If the \(\alpha\) curves lie in only one component of \(S^{1}\times S^{1}\), the Hamiltonian chords look like geodesics from \(q_{i}\) to \(\alpha_{j}\) in the other \(S^{1}\) component; see Figures 2 and 3. This will be made more explicit in the next section. ## 3. Path space and wrapped HDHF We restrict to the case \(\kappa=1\) and make use of an isomorphism of the wrapped Floer homology with the singular homology of a certain path space. We will review the Morse complex in this context in general before specializing to the surface \(\Sigma\) and Lagrangian submanifolds of interest. For more details, the reader is encouraged to look at [1, Sections 2, 3, and 4]. ### Dual Lagrangian formulation and perturbed geodesics We recall some facts about Legendre transforms and perturbed geodesics that will help establish a dual Lagrangian formulation from which to do Morse theory. Let \(H\in C^{\infty}([0,1]\times T^{*}M)\) be a Hamiltonian satisfying the classical Tonelli assumptions. The Fenchel transform defines a smooth, time-dependent Lagrangian on \(TM\), \[L(t,q,v):=\max_{p\in T_{q}^{*}M}(\langle p,v\rangle)-H(t,q,p),\;(t,q,v)\in[0,1] \times TM.\] We call this Lagrangian the _Fenchel dual_ of our Hamiltonian \(H\). Similarly, given a Lagrangian satisfying equivalent assumptions, we can dualize to get the Hamiltonian \[H(t,q,p)=\max_{p\in T_{q}M}(\langle p,v\rangle)-L(t,q,v),\;(t,q,p)\in[0,1] \times T^{*}M.\] Moreover, we have a diffeomorphism known as the Legendre transform \[\mathcal{L}:[0,1]\times TM\longrightarrow[0,1]\times T^{*}M,\;(t,q,v)\mapsto(t,q,D_{v}L(t,q,v)),\] such that \(\mathcal{L}(t,q,v)=(t,q,p)\) if and only if \(L(t,q,v)=\langle p,v\rangle-H(t,q,p)\). We now specialize to the Lagrangian function \(L_{V}:[0,1]\times TM\to\mathbb{R}\) given by \[L_{V}(t,q,v)=\frac{1}{2}|v|^{2}-V(t,q).\] Its Fenchel dual Hamiltonian is then given by \(H(t,q,p)=\frac{1}{2}|p|^{2}+V(t,q)\), the same Hamiltonian we considered earlier when defining the Floer complex. With these functions in mind, there is an equivalence between Hamiltonian orbits \(x:[0,1]\to T^{*}M\) solving \(x^{\prime}(t)=X_{H}(t,x(t))\) and curves \(\gamma:[0,1]\to M\) which are extremals of the Lagrangian action functional \(\mathcal{A}_{V}(\gamma)=\int_{0}^{1}L_{V}(t,\gamma,\dot{\gamma})dt\). More precisely, \(x\) is a Hamiltonian orbit if and only if \(\gamma:=\pi_{M}\circ x\) is an absolutely continuous extremal of \(\mathcal{A}_{V}\). We conclude this subsection with a further equivalence, recalling the definition of a perturbed geodesic. **Definition 3.1**.: A \(V\)_-perturbed geodesic_\(\gamma\) is a map \([0,1]\to M\) such that \[\nabla_{\dot{\gamma}}\dot{\gamma}=-\nabla V,\] where \(\nabla V\) denotes the gradient of \(V\) with respect to the metric \(g\). The critical points of \(\mathcal{A}_{V}\) are precisely the \(V\)-perturbed geodesics. Putting the two equivalences together, we have that the Hamiltonian orbits are in bijection with the \(V\)-perturbed geodesics. We can impose similar boundary conditions to the perturbed geodesics as we do the Hamiltonian chords in the case of our wrapped Floer theory, giving us a map \(\mathcal{L}:\mathcal{P}^{Q}(H)\to\mathcal{P}^{Q}(L)\), where \(\mathcal{P}^{Q}(H)\) are the Hamiltonian chords with the boundary condition and \(\mathcal{P}^{Q}(L)\) are the \(V\)-perturbed geodesics satisfying the same boundary condition. Moreover, condition (H0) placed on the Hamiltonian \(H\) will ensure that all critical points will be non-degenerate. **Definition 3.2**.: We call \(\mathcal{L}\) the Legendre transform. If we let \(x\in\mathcal{P}^{Q}(H)\), then \[\mathcal{L}(x)(t):=\pi_{M}\circ x(t).\] Figure 3 below shows the perturbed geodesics corresponding to the generator \((a_{2},(12))\) of our model calculation in Section 2.4. ### Path space and wrapped Floer homology Let \(Q\subset M\times M\) be a closed submanifold as before. Consider the path space \[\Omega_{Q}(M)=\{\gamma\in C^{0}([0,1],M)\mid(\gamma(0),\gamma(1))\in Q\}.\] We restrict to a subset consisting of paths in the class \(W^{1,2}\), and wish to do Morse theory on this new path space, denoted \(\Omega_{Q}^{1,2}(M)\). Given the same Hamiltonian \(H_{V}\), Hamiltonian vector field \(X_{H_{V}}\), and flow \(\phi_{H_{V}}^{t}\) as in the previous section, we define the function \(L:[0,1]\times TM\to\mathbb{R}\) satisfying second derivative conditions: Figure 3. Perturbed geodesics (in blue) representing the generator \((a_{2},(12))\). 1. There exists \(l_{1}>0\) such that \(\nabla_{vv}L(t,q,v)\geq l_{1}I\) 2. There exists \(l_{2}>0\) such that \[|\nabla_{qq}L(t,q,v)|\leq l_{2}(1+|v|^{2}),\;|\nabla_{qv}L(t,q,v)|\leq l_{2}(1+| v|),\;|\nabla_{vv}L(t,q,v)|\leq l_{2},\] ensuring that \(L\) grows quadratically in the tangent direction. These conditions are equivalent to conditions (H1), (H2) imposed on our Hamiltonian, and will guarantee that \(H_{V}\), the Fenchel dual of \(L_{V}\), satisfies those. We can take \[L_{V}(t,q,v)=\frac{1}{2}|v|^{2}-V(t,q),\] where \(t\in[0,1],q\in M\), and \(v\in T_{q}M\). Considering the Morse function \[\mathcal{A}_{V}(\gamma)=\int_{0}^{1}L_{V}(t,\gamma,\dot{\gamma})dt,\] defined for \(\gamma\in\Omega_{Q}^{1,2}(M)\), we define the Morse complex generated by its critical points. This is well defined and the Morse homology \(HM_{*}(\mathcal{A}_{V})\) is isomorphic to the singular homology of \(\Omega_{Q}^{1,2}(M)\). Since the inclusion \(\Omega_{Q}^{1,2}(M)\hookrightarrow\Omega_{Q}(M)\) is a homotopy equivalence, the isomorphism extends to the singular homology of \(\Omega_{Q}(M)\). **Theorem 3.3** (Abbondandolo-Schwarz).: _There is an isomorphism between the wrapped Floer homology \(HF_{Q}^{*}(T^{*}M)\) and the singular homology of the path space \(\Omega_{Q}(M)\)._ **For the rest of the paper, we specialize \(M\) to a closed oriented surface \(\Sigma\) of genus \(>0\).** Let \(Q=\{q\}\times\alpha\in\Sigma\times\Sigma\), where \(q\in\Sigma\) is a point and \(\alpha\) is a homotopically nontrivial simple closed curve in \(\Sigma\). **Lemma 3.4**.: \(H_{*}(\Omega_{Q}(\Sigma))\) _is supported in degree 0._ Proof.: If \(\Sigma\) is a torus, we can assume that \(g\) is the flat metric, where all \(V\)-perturbed geodesics with sufficiently small perturbation \(V\) are minimal and isolated. If the genus of \(\Sigma\) is greater than 1, then we can assume that \(g\) is the hyperbolic metric with constant curvature \(-1\). It is well known that on a hyperbolic surface, there is a unique \(V\)-perturbed geodesic in each homotopy class of paths for \(V\) sufficiently small. Hence the Morse indices of all critical points are 0. Next let \(q_{1},\dots,q_{\kappa}\) be \(\kappa\) distinct points on \(\Sigma\) and \(\alpha_{1},\dots,\alpha_{\kappa}\) be \(\kappa\) mutually disjoint homotopically nontrivial simple closed curves on \(\Sigma\). We use \(T_{\mathbf{q}}^{*}\Sigma\) to denote \(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma\) and \(N^{*}\mathbf{\alpha}\) to denote \(\sqcup_{i=1}^{\kappa}N^{*}\alpha_{i}\). **Corollary 3.5**.: \(HF_{Q}^{*}(T^{*}\Sigma)\) _is supported in degree 0. In particular, the grading \(|\mathbf{y}|=0\) for every generator \(\mathbf{y}\in CW(T_{\mathbf{q}}^{*}\Sigma,N^{*}\mathbf{\alpha})\)._ Proof.: Each \(y_{i}\in\phi_{H_{V}}^{1}(T_{q_{i}}^{*}\Sigma)\cap N^{*}\alpha_{j}\) corresponds to a time-1 Hamiltonian chord from \(T_{q_{i}}^{*}\Sigma\) to \(N^{*}\alpha_{j}\). Its Legendre transform gives a \(V\)-perturbed geodesic \(\gamma\) on \(\Sigma\). The Conley-Zehnder index of \(y_{i}\) is equal to the Morse index of \(\gamma\) with respect to its Lagrangian action. Lemma 3.4 above implies that \(|y_{i}|=0\) for all \(i\), and so \(|\mathbf{y}|=0\). ### Unordered configuration space and wrapped HDHF Let \[\text{UConf}_{\kappa}(\Sigma)=\{\{q_{1},\dots,q_{\kappa}\}\;|\;q_{i}\in\Sigma, q_{i}\neq q_{j}\text{ for }i\neq j\}\] be the configuration space of \(\kappa\) unordered (distinct) points on \(\Sigma\). We wish to generalize Theorem 3.3 to the case where \(\kappa>1\). Let \(\mathbf{q}=\{q_{1},\dots,q_{\kappa}\}\in\text{UConf}_{\kappa}(\Sigma)\) and \(\mathbf{\alpha}=\alpha_{1}\times\dots\times\alpha_{\kappa}\), where we view \(\mathbf{\alpha}\) as a subset of \(\text{UConf}_{\kappa}(\Sigma)\) by viewing \((x_{1},\dots,x_{\kappa})\) as an unordered tuple. The natural analog to consider on the path space side is the path space on \(\text{UConf}_{\kappa}(\Sigma)\), which we denote by \[\Omega(\text{UConf}_{\kappa}(\Sigma),\boldsymbol{q},\boldsymbol{\alpha})=\{ \gamma\in C^{0}([0,1],\text{UConf}_{\kappa}(\Sigma))\mid\gamma(0)=\boldsymbol{q },\;\gamma(1)\in\boldsymbol{\alpha}\}.\] We can identify each generator of \(CF(\sqcup_{i=1}^{\kappa}\phi^{1}_{H_{V}}(T_{q_{i}}^{\ast}\Sigma),\sqcup_{i=1}^ {\kappa}N^{\ast}\alpha_{i})\) with a \(\kappa\)-tuple of \(V\)-perturbed geodesics from the \(q_{i}\) to the \(\alpha_{\sigma(i)}\) using the duality established in Section 3.2. This is an element of \(\Omega(\text{UConf}_{\kappa}(\Sigma),\boldsymbol{q},\boldsymbol{\alpha})\). To make this more precise, we construct an evaluation map \[\mathcal{E}:CF(\sqcup_{i=1}^{\kappa}\phi^{1}_{H_{V}}(T_{q_{i}}^{\ast}\Sigma), \sqcup_{i=1}^{\kappa}N^{\ast}\alpha_{i})\longrightarrow C_{0}(\Omega(\text{ UConf}_{\kappa}(\Sigma),\boldsymbol{q},\boldsymbol{\alpha}))\otimes\mathbb{Z}[[ \hbar]],\] where \(C_{0}(\Omega(\text{UConf}_{\kappa}(\Sigma),\boldsymbol{q},\boldsymbol{\alpha}))\) is the space of 0-chains of the path space \(\Omega(\text{UConf}_{\kappa}(\Sigma),\boldsymbol{q},\boldsymbol{\alpha})\). The map \(\mathcal{E}\) counts pseudo-holomorphic curves between the conormal Lagrangians and the zero section. Parametrizing the boundary of the curves along the zero section produces a path in the unordered configuration space. We keep the parameter \(\hbar\) around to track the Euler characteristic of the map, which will later relate to the HOMFLY skew relation on braids. Let \(T_{1}:=D_{2}\) be our \(A_{\infty}\)-base where \(\partial_{t}T_{1}=\partial_{i}D_{2}\); shown in Figure 4. Let \(\mathcal{T}_{1}\) be the moduli space of \(T_{1}\) modulo automorphisms, and choose representatives \(T_{1}\) of equivalence classes in a smooth manner. Let \(\pi_{T^{\ast}\Sigma}\) be the projection \(T_{1}\times T^{\ast}\Sigma\to T^{\ast}\Sigma\) and choose a sufficiently generic consistent collection of compatible almost complex structures such that they are close to a split almost complex structure projecting holomorphically to \(T_{1}\), as in Section 2.1. We denote by \(\mathcal{H}(\boldsymbol{q}^{\prime},\boldsymbol{y},\;\boldsymbol{x})\) the moduli space of maps \[u:(\dot{F},j)\longrightarrow(T_{1}\times T^{\ast}\Sigma,J_{T_{1}}),\] where \((F,j)\) is a compact Riemann surface with boundary, \(\boldsymbol{p}_{0}\), \(\boldsymbol{p}_{1}\), \(\boldsymbol{p}_{2}\) are disjoint tuples of boundary punctures of \(F\) and \(\dot{F}=F\setminus\cup_{i}p_{i}\), satisfying: \[\begin{cases}du\circ j=J_{T_{1}}\circ du;\\ \pi_{T^{\ast}\Sigma}\circ u(z)\in\phi^{1}_{H_{V}}(\sqcup_{i=1}^{\kappa}T_{q_{i }}^{\ast}\Sigma)\text{ if }\pi_{T_{1}}\circ u(z)\subset\partial_{0}T_{1};\\ \text{each component of }\partial\dot{F}\text{ that projects to }\partial_{0}T_{1}\text{ maps to a distinct }\phi^{1}_{H_{V}}(T_{q_{i}}^{\ast}\Sigma);\\ \pi_{T^{\ast}\Sigma}\circ u(z)\in\sqcup_{i=1}^{\kappa}N^{\ast}\alpha_{i}\text { if }\pi_{T_{1}}\circ u(z)\subset\partial_{1}T_{1};\\ \text{each component of }\partial\dot{F}\text{ that projects to }\partial_{1}T_{1}\text{ maps to a distinct }N^{\ast}\alpha_{i};\\ \pi_{T^{\ast}\Sigma}\circ u(z)\in\Sigma\text{ if }\pi_{T_{1}}\circ u(z)\subset \partial_{2}T_{1};\\ \pi_{T^{\ast}\Sigma}\circ u\text{ tends to }\boldsymbol{q}^{\prime}, \boldsymbol{y},\;\boldsymbol{x}\text{ as }s_{0},s_{1},s_{2}\rightarrow+\infty;\\ \pi_{T_{1}}\circ u\text{ is a }\kappa\text{-fold branched cover of a fixed }T_{1}\in\mathcal{T}_{1}.\end{cases} \tag{3.1}\] Figure 4. The \(A_{\infty}\)-base \(T_{1}\) In simpler terms, we look at the moduli space of holomorphic curves between the Lagrangians involved and the zero section of \(T^{*}\Sigma\) in the framework of HDHF with only positive punctures. _Remark 3.6_.: While the intersections \(\boldsymbol{y}\) and \(\boldsymbol{q}^{\prime}\) are discrete, we have an \(S^{1}\)-worth of choices for each \(x_{i}\in\alpha_{i}\cap\Sigma\). To resolve this issue, we can either use the Morse-Bott (clean intersection) formalism or perturb the zero section near the \(N^{*}\alpha_{i}\). This results in two intersection points \(x_{1},x_{2}\in N^{*}\alpha_{i}\cap(\Sigma\times\{0\})\). If we are interested in index 0 curves in our moduli space then only one of these intersections will have the right grading. This is shown in Figure 5 for our model \(\Sigma=T^{2},\kappa=1\) case by counting pseudo-holomorphic triangles bounded by the conormal Lagrangians and the perturbed zero section. Let \(\mathcal{H}^{\chi}(\boldsymbol{q}^{\prime},\boldsymbol{y},\boldsymbol{x})\) be the subset of \(\mathcal{H}(\boldsymbol{q}^{\prime},\boldsymbol{y},\boldsymbol{x})\) such that \(\chi(\dot{F})=\chi\). Moreover, let \[\mathcal{H}^{\chi}(\boldsymbol{q}^{\prime},\boldsymbol{y},\boldsymbol{ \alpha})=\sqcup_{\boldsymbol{x}\in\boldsymbol{\alpha}}\mathcal{H}^{\chi}( \boldsymbol{q}^{\prime},\boldsymbol{y},\boldsymbol{x}).\] **Lemma 3.7**.: _For fixed generic \(J_{T_{1}}\), \(\mathcal{H}^{\chi}(\boldsymbol{q}^{\prime},\boldsymbol{y},\boldsymbol{x})\) is of dimension 0 and consists of discrete regular curves for all \(\boldsymbol{q}^{\prime},\ \boldsymbol{y},\) and \(\boldsymbol{x}\) such that \(\boldsymbol{x}\) is a tuple of bottom generators for the Morse-Bott intersections \(N^{*}\alpha_{i}\cap(\Sigma\times\{0\})\)._ Proof.: By Corollary 3.5, we have \(|\boldsymbol{y}|=0\). Computing the grading for intersections of cotangent fibers with the zero section gives that \(|\boldsymbol{q}^{\prime}|=0\); see [11]. Since the \(\boldsymbol{x}\) are bottom generators for our Morse-Bott intersection, it follows that \(|\boldsymbol{x}|=0\) and so the virtual dimension of \(\mathcal{H}^{\chi}(\boldsymbol{q}^{\prime},\boldsymbol{y},\boldsymbol{x})\) is 0. The rest follows from standard transversality arguments. **Lemma 3.8**.: _Given \(\boldsymbol{q}^{\prime},\boldsymbol{y},\boldsymbol{\alpha}\), the moduli space \(\mathcal{H}^{\chi}(\boldsymbol{q}^{\prime},\boldsymbol{y},\boldsymbol{\alpha})\) consists of finitely many curves for each Euler characteristic \(\chi\)._ Proof.: Each \(\boldsymbol{y}\) determines a unique \(\boldsymbol{q}^{\prime}\) and \(\boldsymbol{x}\in\boldsymbol{\alpha}\). The energy bound along with Gromov compactness gives the result. Fix a parametrization of the arc \(\partial_{2}T_{1}\) from \(p_{0}\) to \(p_{2}\) by \(\tau:[0,1]\to\partial_{2}T_{1}\). There exists a sufficiently generic consistent collection of almost complex structures such that for all \(u\in\mathcal{H}(\boldsymbol{q}^{\prime},\boldsymbol{y},\boldsymbol{\alpha})\), \((\pi_{\Sigma}\circ u)\circ(\pi_{T_{1}\circ u})^{-1}\circ\tau(t)\) consists of \(\kappa\) distinct points on \(\Sigma\) for each \(t\in[0,1]\) and hence gives a path in \(\text{UConf}_{\kappa}(\Sigma)\): \[\gamma(u):[0,1]\longrightarrow\text{UConf}_{\kappa}(\Sigma),\] \[t\mapsto(\pi_{\Sigma}\circ u)\circ(\pi_{T_{1}\circ u})^{-1}\circ\tau(t).\] Since \(\gamma(0)=\boldsymbol{q}^{\prime}\) and \(\gamma(1)\in\boldsymbol{\alpha}\), it follows that \(\gamma(u)\in\Omega(\text{UConf}_{\kappa}(\Sigma),\boldsymbol{q}^{\prime}, \boldsymbol{\alpha})\). Figure 5. Perturbed zero section in one of the \(T^{*}S^{1}\) directions. Only \(x_{2}\in N^{*}\alpha\cap\phi_{H^{\prime}}(T^{2})\) has the right Maslov index. Define the evaluation map \[\mathcal{E}:CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{ \kappa}N^{*}\alpha_{i})\longrightarrow C_{0}(\Omega(\text{UConf}_{\kappa}(\Sigma),\mathbf{q}^{\prime},\mathbf{\alpha}))\otimes\mathbb{Z}[[\hbar]]\] \[\mathbf{y}\mapsto\sum_{u\in\mathcal{H}(\mathbf{q}^{\prime},\mathbf{y},\mathbf{ \alpha})}(-1)^{\natural(u)}\cdot\hbar^{\kappa-\chi(u)}\cdot\gamma(u),\] where \((-1)^{\natural(u)}\) is the sign assigned to \(u\). Since the perturbation term \(V\) has small \(W^{1,2}\)-norm, the Hamiltonian vector field \(X_{H_{V}}\) has small norm near the zero section \(\Sigma\) and hence \(\mathbf{q}^{\prime}\) is close to \(\mathbf{q}\). We choose non-intersecting short paths \(\gamma_{i}\) on \(\Sigma\) from \(q_{i}\) to \(q_{i}^{\prime}\) for \(i=1,\ldots,\kappa\). Pre-concatenating with \(\{\gamma_{i}\}\) allows us to identify \(\Omega(\text{UConf}_{\kappa}(\Sigma),\mathbf{q}^{\prime},\mathbf{\alpha})\) with \(\Omega(\text{UConf}_{\kappa}(\Sigma),\mathbf{q},\mathbf{\alpha})\). We make this identification whenever possible. Next, we have a projection \[\mathcal{P}:C_{0}(\Omega(\text{UConf}_{\kappa}(\Sigma),\mathbf{q},\mathbf{\alpha})) \otimes\mathbb{Z}[[\hbar]]\rightarrow(H_{0}(\Omega(\text{UConf}_{\kappa}( \Sigma),\mathbf{q},\mathbf{\alpha}))\otimes\mathbb{Z}[[\hbar]])/\text{HOMFLY skein}\] given by first taking the homotopy class of the path \(\gamma(u)\) and then, viewing \(\gamma(u)\) as a braid, quotienting by the HOMFLY skein relation (given in Definition 4.1). Composing the evaluation map and the projection, we arrive at the map \(\mathcal{F}=\mathcal{P}\circ\mathcal{E}\). Let \(BSk_{\kappa}(\Sigma,\mathbf{q},\mathbf{\alpha})\) denote the free \(\mathbb{Z}[[\hbar]]\)-module generated by homotopy classes of braids \(\gamma\in\Omega(\text{UConf}_{\kappa}(\Sigma),\mathbf{q},\mathbf{\alpha})\) modulo the HOMFLY skein relation. Then \[\mathcal{F}:CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{\kappa} N^{*}\alpha_{i})\longrightarrow BSk_{\kappa}(\Sigma,\mathbf{q},\mathbf{\alpha})\] is given by \[\mathbf{y}\mapsto\sum_{u\in\mathcal{H}(\mathbf{q}^{\prime},\mathbf{y},\mathbf{\alpha})}(-1)^{ \natural(u)}\cdot\hbar^{\kappa-\chi(u)}\cdot[\gamma(u)],\] where \([\gamma(u)]\) is viewed as an equivalence class of braids modulo the HOMFLY skein relation. ## 4. HDHF with conormal boundary conditions as a braid skein algebra module In this section we recall the equivalence between wrapped HDHF for cotangent fibers and the braid skein algebra of a surface. The reader is encouraged to look at [11] and [12] for a more detailed exposition of this section's content. Let \(\Sigma\) be a closed oriented surface of genus \(>0\) and \(q_{1},\ldots,q_{\kappa}\in\Sigma\) be distinct points. ### The braid skein algebra of a surface Consider the braid group \(B_{\kappa}(\Sigma\setminus\{*\},\mathbf{q})\) of \(\kappa\)-braids in the punctured surface \(\Sigma\setminus\{*\}\) based at \(\mathbf{q}=\{q_{1},\ldots,q_{\kappa}\}\). One way to view this is to take the thickened surface \(\Sigma\times I\) with a fixed base string \(\{*\}\times I\). In this case, the elements are made up of \(\kappa\) strings oriented monotonically from \(\Sigma\times\{0\}\) to \(\Sigma\times\{1\}\) which do not intersect each other or the base string. Two braids are equivalent if they are isotopic to each other, with the isotopy avoiding the base string. **Definition 4.1**.: The _braid skein algebra \(BSk_{\kappa}(\Sigma,\mathbf{q},*)\) (or the surface Hecke algebra) of the surface \(\Sigma\)_ is the free \(\mathbb{Z}[s^{\pm 1},c^{\pm 1}]\)-module generated by \(\kappa\)-braids in the punctured surface \(\Sigma\setminus\{*\}\) based at \(\mathbf{q}\), up to isotopy which does not intersect \(\{*\}\times[0,1]\), subject to the local relations: 1. the HOMFLY skein relation 2. the marked point relation \[\raisebox{-1.0pt}{\includegraphics[height=1.0pt]{images/1-crop.pdf}}\quad= \quad c^{2}\] where the string in blue corresponds to the marked point strand \(\{*\}\times[0,1]\). Using the marked point relation, we define a slightly more useful \(c\)-_deformed homotopy relation_. **Definition 4.2**.: The _\(c\)-deformed braid group \(B_{\kappa}(\Sigma,\mathbf{q})_{c}\) of \(\Sigma\) based at \(\mathbf{q}\)_ is generated by \(B_{\kappa}(\Sigma\setminus\{*\},\mathbf{q})\) and a central element \(c\), subject to the following \(c\)-deformed homotopy relation: \[[\gamma_{2}]=c^{2\langle H,*\rangle}[\gamma_{1}], \tag{4.1}\] where \(\gamma_{1},\gamma_{2}\in\Omega(\operatorname{UConf}_{\kappa}(\Sigma\setminus \{*\}),\mathbf{q})\), \(H\) is a homotopy between \(\gamma_{1}\) and \(\gamma_{2}\), \(\langle H,*\rangle\) is the algebraic intersection number for the homotopy \(H\) defined in [11], and \([\gamma_{i}]\) is the homology class of the braid \(\gamma_{i}\). The quotient of \(\mathbb{Z}[s^{\pm 1}][B_{\kappa}(\Sigma,\mathbf{q})_{c}]\) by the HOMFLY skein relation gives the braid skein algebra \(BSk_{\kappa}(\Sigma,\mathbf{q},*)\). ### Hecke algebra realization in HDHF We briefly review the connection between the wrapped HDHF of contangent fibers on surfaces and the surface Hecke algebras defined in [11]. **Definition 4.3**.: The _wrapped HDHF chain complex of disjoint cotangent fibers with parameter \(c,CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma)_{c}\)_, is given by \(CF(\phi^{1}_{H_{V}}(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma),\sqcup_{i=1}^{ \kappa}T_{q_{i}}^{*}\Sigma)\otimes\mathbb{Z}[c^{\pm 1}]\) as a \(\mathbb{Z}\)-module and has enhanced \(A_{\infty}\)-operations to include \(c\)-coefficients. Specifically, \[\mu^{m}(\mathbf{y}_{1},\dots,\mathbf{y}_{m})=\sum_{u\in\mathcal{M}^{\text{all}=0}(\bm {y}_{1},\dots,\mathbf{y}_{m},\mathbf{y}_{0})}(-1)^{\natural(u)}\cdot c^{2\langle u,* \rangle}\cdot h^{\kappa-\chi(u)}\cdot\mathbf{y}_{0}.\] Given a map \(u\in\mathcal{M}(\mathbf{y}_{1},\dots,\mathbf{y}_{m},\mathbf{y}_{0})\), \(\langle u,*\rangle\) is the intersection number of \(\pi_{\Sigma}(u)\) and \(*\), after some modifications to \(\pi_{\Sigma}(u)\) to ensure this is well defined. **Proposition 4.4** (Honda-Tian-Yuan).: _The \(A_{\infty}\)-algebra \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma)_{c}\) is supported in degree zero, and hence is an ordinary algebra._ With this on hand, HTY go on to prove the following theorem connecting the wrapped HDHF to the surface Hecke algebra \(H_{\kappa}(\Sigma,\mathbf{q})\). **Theorem 4.5** (Honda-Tian-Yuan).: _There is an isomorphism of algebras_ \[\mathcal{F}:CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma)_{c} \longrightarrow H_{\kappa}(\Sigma,\mathbf{q})\otimes\mathbb{Z}[[h]],\] _where the surface Hecke algebra \(H_{\kappa}(\Sigma,\mathbf{q})\) is naturally isomorphic to the braid skein algebra \(BSk_{\kappa}(\Sigma,\mathbf{q},*)\)._ We run through an (informal) overview of the proof as the main ideas will be modified and applied in the proof of Theorem 1.1 in Section 4.5. The map \(\mathcal{F}\) is an evaluation map similar to that in Section 3.3. It is constructed by considering a moduli space of pseudo-holomorphic curves bounded by \(\phi^{1}_{H_{V}}(T_{\mathbf{q}}^{*}\Sigma),\ T_{\mathbf{q}}^{*}\Sigma,\) and \(\Sigma\). The authors then make use of the construction of the braid skein algebra coming from the unordered configuration space to relate this to the surface Hecke algebra. The next step is to show that \(\mathcal{F}\) is indeed a homomorphism of algebras. This is done by taking the moduli space of index \(1\) curves projecting to a 4-punctured disk with boundary conditions given by \(\phi^{2}_{H_{V}}(T_{\mathbf{q}}^{*}\Sigma),\) \(\phi^{1}_{H_{V}}(T^{*}_{\mathbf{q}}\Sigma),\,T^{*}_{\mathbf{q}}\Sigma,\) and \(\Sigma\). An inspection of the boundary of the compactification of this space concludes that \(\mathcal{F}\) respects the algebra structure. More specifically, the only breakings that occur are ones that correspond to \(\mathcal{F}(\mu^{2}(\mathbf{y},\mathbf{y}^{\prime}))\) or \(\mathcal{F}(\mathbf{y})\mathcal{F}(\mathbf{y}^{\prime})\). The final step is to show that \(\mathcal{F}|_{\hbar=0}\) is an isomorphism and then use the algebra homomorphism properties to prove it is a bijection when \(\hbar\) is reintroduced. We will repeat and modify much of the argument in our proof of Theorem 1.1. ### The parameter \(c\) We enhance \(CW(\sqcup_{i=1}^{\kappa}T^{*}_{q_{i}}\Sigma,\sqcup_{i=1}^{\kappa}N^{*}\alpha _{i})\) to include the parameter \(c\). Let \[CW(\sqcup_{i=1}^{\kappa}T^{*}_{q_{i}}\Sigma,\sqcup_{i=1}^{\kappa}N^{*}\alpha _{i})_{c}:=CW(\sqcup_{i=1}^{\kappa}T^{*}_{q_{i}}\Sigma,\sqcup_{i=1}^{\kappa}N ^{*}\alpha_{i})\otimes\mathbb{Z}[c^{\pm 1}].\] Consider the updated evaluation map: \[\mathcal{E}:CW(\sqcup_{i=1}^{\kappa}T^{*}_{q_{i}}\Sigma,\sqcup_{i=1}^{\kappa }N^{*}\alpha_{i})_{c}\longrightarrow C_{0}(\Omega(\text{UConf}_{\kappa}(( \Sigma),\mathbf{q},\mathbf{\alpha}))\otimes\mathbb{Z}[c^{\pm 1}]\otimes\mathbb{Z}[[\hbar]]\] given by \[\mathbf{y}\mapsto\sum_{u\in\mathcal{H}(\mathbf{q}^{\prime},\mathbf{y},\mathbf{\alpha})}(-1)^{ \sharp(u)}\cdot c^{2\langle u,*\rangle}\cdot\hbar^{\kappa-\chi(u)}\cdot\gamma (u),\] where \(\langle u,*\rangle:=\langle[\pi_{\Sigma}(u)]^{\prime},*\rangle\) is defined in [HTY a, Section 5]. **Definition 4.6**.: The _braid skein group \(BSk_{\kappa}(\Sigma,\mathbf{q},\mathbf{\alpha},*)\) from \(\mathbf{q}\) to \(\mathbf{\alpha}\) on \(\Sigma\setminus\{*\}\)_ is the free \(\mathbb{Z}[[\hbar]]\)-module generated by \(c\)-deformed homotopy classes of paths in \[\Omega(\text{UConf}_{\kappa}(\Sigma),\mathbf{q},\mathbf{\alpha})=\{\gamma\in C^{0}([ 0,1],\text{UConf}_{\kappa}(\Sigma))\mid\gamma(0)=\mathbf{q},\;\gamma(1)\in\mathbf{ \alpha}=\alpha_{1}\times\cdots\times\alpha_{\kappa}\}\] modulo the HOMFLY skein relation. _Remark 4.7_.: Given a braid \(\gamma\), its \(c\)-deformed homotopy class \([\gamma]_{c}\) is the set of braids equivalent to it under the local relations given in Definition 4.1. Using these relations, every braid is equivalent to a \(\mathbb{Z}[\hbar,c^{\pm 1}]\)-combination of \(\kappa\)-tuples of perturbed geodesics. In the spirit of [AS10], we can flow the braid \(\gamma\) by the Morse function defined in Section 5.2. Every time a crossing switches from positive to negative, or vice versa, the flow will bifurcate according to the HOMFLY skein relation. Similarly, crossing the strand over the puncture \(*\) will pick up factors of \(c\). Continuing this flow, we arrive at a \(\mathbb{Z}[\hbar,c^{\pm 1}]\)-combination of \(\kappa\)-tuples of perturbed geodesics. Since the relations of our \(c\)-deformed homotopy classes are the same as those during the Morse flow, the two equivalence classes will be the same. Composing \(\mathcal{E}\) with the projection which takes the \(c\)-deformed homotopy class and quotients out by the HOMFLY skein relation gives the map \[\mathcal{F}:CW(\sqcup_{i=1}^{\kappa}T^{*}_{q_{i}}\Sigma,\sqcup_{i=1}^{\kappa} N^{*}\alpha_{i})_{c}\longrightarrow BSk_{\kappa}(\Sigma,\mathbf{q},\mathbf{\alpha},*). \tag{4.2}\] ### The proof of Theorem 1.1, \(\kappa=1\) case In this subsection we prove Theorem 1.1 for \(\kappa=1\). Let \(\mathcal{F}_{0}\) denote the specialization \(\mathcal{F}_{\hbar=0}\). We start by briefly reviewing the chain map \[\Theta:CM_{*}(\Omega^{1,2}(\Sigma,q,\alpha),\mathcal{A}_{V})\longrightarrow CW (T^{*}_{q}\Sigma,N^{*}\alpha)\] constructed by Abbondandolo and Schwarz in [AS10, Theorem 3.3], where the domain \(CM_{*}(\Omega^{1,2}(\Sigma,q,\alpha),\mathcal{A}_{V})\) is the Morse complex of the function \(\mathcal{A}_{V}\) defined by: \[\mathcal{A}_{V}(\gamma)=\int_{0}^{1}L_{V}(t,\gamma,\dot{\gamma})dt,\] for \(\gamma\in\Omega^{1,2}(\Sigma,q,\alpha)\). Both complexes are concentrated at grading 0, so \(\Theta\) is an isomorphism from the group generated by index 0 critical points of \(\mathcal{A}_{V}\) to the wrapped Floer group \(CW(T^{*}_{q}\Sigma,N^{*}\alpha)\). In what follows, we identify \(CM_{0}(\Omega^{1,2}(\Sigma,q,\alpha),\mathcal{A}_{V})\) with \(CM_{0}(\Omega(\Sigma,q,\alpha),\mathcal{A}_{V})\). Given \(y\in CM_{0}(\Omega(\Sigma,q,\alpha),\mathcal{A}_{V}),x\in CW(T^{*}_{q}\Sigma,N^{*}\alpha)\), AS construct the space \(\mathcal{M}(y,x)\) of maps \[u:(-\infty,0]\times[0,1]\longrightarrow T^{*}\Sigma\] solving the Floer equation (2.4), which converge to the Hamiltonian chord representing \(x\) at \(-\infty\), satisfy boundary conditions \(T^{*}_{q}\Sigma\) along \((-\infty,0]\times\{0\}\), \(N^{*}\alpha\) along \((-\infty,0]\times\{1\}\), and such that the image of \(u(\{0\}\times[0,1])\) under the projection from \(T^{*}\Sigma\) to \(\Sigma\) is a path on \(\Sigma\) lying in the descending manifold of \(y\) with respect to the negative gradient flow of \(\mathcal{A}_{V}\). Letting \(\#\mathcal{M}(y,x)\) be the count of such maps, we have \[\Theta(y)=\sum\#\mathcal{M}(y,x)x.\] Let \[\Theta_{c}:CM_{*}(\Omega(\Sigma,q,\alpha),\mathcal{A}_{V})\otimes\mathbb{Z}[ c^{\pm 1}]\longrightarrow CW(T^{*}_{q}\Sigma,N^{*}\alpha)_{c}\] be the \(\mathbb{Z}[c^{\pm 1}]\)-linear extension of \(\Theta\). We make an additional identification which allows us to apply results of [1]. **Proposition 4.8**.: \(BSk_{\kappa=1}(\Sigma,q,\alpha,*)\simeq CM_{0}(\Omega(\Sigma,q,\alpha), \mathcal{A}_{V})\otimes\mathbb{Z}[c^{\pm 1}]\)__ Proof.: Elements in \(BSk_{\kappa=1}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*)\) are \(c\)-deformed homotopy classes of paths starting at \(q\) and ending on \(\alpha\). Every such path is homotopic to a unique perturbed geodesic on \(\Sigma\) with the same boundary conditions. Let \(\gamma_{1}\) be a path homotopic to a perturbed geodesic \(\gamma_{2}\) and let \(n\) be the signed intersection number of this homotopy with the marked point \(*\). The map sending \([\gamma_{1}]_{c}\mapsto c^{2n}\gamma_{2}\) is the desired isomorphism. We prove the following proposition, recreating a proof similar to Abouzaid's [1] in the case of cotangent fibers. Let \(\tilde{\mathcal{F}}_{0}\) be the composition of \(\mathcal{F}_{0}\) with the isomorphism described in Proposition 4.8. **Proposition 4.9**.: _Let \(\kappa=1\). The map_ \[\tilde{\mathcal{F}}_{0}:CW(T^{*}_{q}\Sigma,N^{*}\alpha)_{c}\longrightarrow CM _{0}(\Omega(\Sigma,q,\alpha),\mathcal{A}_{V})\otimes\mathbb{Z}[c^{\pm 1}]\] _is an isomorphism of chain complexes. Moreover, \(\tilde{\mathcal{F}}_{0}\) is an inverse to \(\Theta_{c}\)._ Proof.: Since \(\Theta\) is an isomorphism of chain complexes, it suffices to show that the composition \[CM_{0}(\Omega(\Sigma,q,\alpha),\mathcal{A}_{V})\otimes\mathbb{Z}[c^{\pm 1}] \stackrel{{\Theta_{c}}}{{\longrightarrow}}CW(T^{*}_{q}\Sigma,N^{* }\alpha)_{c}\stackrel{{\tilde{\mathcal{F}}_{0}}}{{\longrightarrow }}CM_{0}(\Omega(\Sigma,q,\alpha),\mathcal{A}_{V})\otimes\mathbb{Z}[c^{\pm 1}]\] is homotopic to the identity map on \(CM_{0}(\Omega(\Sigma,q,\alpha),\mathcal{A}_{V})\otimes\mathbb{Z}[c^{\pm 1}]\). We show that it is not just homotopic to the identity, but is in fact the identity map on the nose. It follows then that \(\tilde{\mathcal{F}}_{0}=\Theta_{c}^{-1}\). Let \(y\) and \(z\) be critical points of the Lagrangian rational \(\mathcal{A}_{V}\) which is Fenchel dual to \(H_{V}\) and let \(a\) be a positive real number. Define \(\mathcal{C}(y,z;a)\) to be the moduli space of maps \[u:[0,a]\times[0,1]\longrightarrow T^{*}\Sigma\] which solve the Floer equation (2.4) with boundary conditions \(T^{*}_{q}\Sigma\) along \([0,a]\times\{0\}\), \(N^{*}\alpha\) along \([0,a]\times\{1\}\), and such that \(u(\{0\}\times[0,1])\) is contained in the zero section and, considered as a path on \(\Sigma\), lies on the ascending manifold of \(z\), while the image of \(u(\{a\}\times[0,1])\) under the projection from \(T^{*}\Sigma\) to \(\Sigma\) is a path on \(\Sigma\) lying on the descending manifold of \(y\) with respect to the the negative gradient flow of \(\mathcal{A}_{V}\). This is shown in the central part of Figure 6. Write \(\mathcal{C}(y,z):=\sqcup_{a\in[0,\infty)}\mathcal{C}(y,z;a)\) and let \(\overline{\mathcal{C}}(y,z)\) be its Gromov compactification. When \(a=0\), any solution in \(\mathcal{C}(y,z;a)\) is necessarily constant. Thus, the count of rigid elements of \(\mathcal{C}(y,z;0)\) gives the identity map on \(CM_{0}(\Omega(\Sigma,q,\alpha),\mathcal{A}_{V})\otimes\mathbb{Z}[c^{\pm 1}]\). Letting \(a\) go to \(+\infty\), a family of maps \(u_{a}\) defined on finite strips \([0,a]\times[0,1]\) breaks into two maps \(u_{-},u_{+}\) defined on semi-infinite strips, as shown in the bottom right part of the figure below. This boundary component is precisely the composition \(\tilde{\mathcal{F}}_{0}\circ\Theta\). The remaining boundary strata occur at finite \(a\) when the projection of \(u(\{a\}\times[0,1])\) to \(\Sigma\) escapes to the ascending manifold of a critical point \(y^{\prime}\) which differs from \(y\). Similarly, the image of \(u(\{0\}\times[0,1])\) may converge to the descending manifold of a critical point \(z^{\prime}\neq z\). Such boundary strata are in bijective correspondence with \[\mathcal{T}(y,y^{\prime})\times\overline{\mathcal{C}}(y^{\prime},z)\cup \overline{\mathcal{C}}(y,z^{\prime})\times\mathcal{T}(z^{\prime},z),\] where \(\mathcal{T}(y,y^{\prime})\) is the moduli space of gradient trajectories from \(y\) to \(y^{\prime}\). For a general manifold this would give a chain homotopy. However, all critical points of \(\mathcal{A}_{V}\) have grading \(0\) and so there are no gradient trajectories between critical points. Thus there are no boundary strata of this form. It follows then that, up to signs, \(\mathrm{id}=\tilde{\mathcal{F}}_{0}\circ\Theta\), where \(\mathrm{id}\) is the identity map on \[CM_{0}(\Omega(\Sigma,q,\alpha),\mathcal{A}_{V})\otimes\mathbb{Z}[c^{\pm 1}].\] Moreover, identifying \(x\in CW(T_{q}^{*}\Sigma,N^{*}\alpha)_{c}\) with a time-1 Hamiltonian chord, \(\tilde{\mathcal{F}}_{0}\) maps \(x\) to the homotopy class of its Legendre transform \([\mathcal{L}(x)]\). Composing \(\tilde{\mathcal{F}}_{0}\) with the isomorphism \[CM_{0}(\Omega(\Sigma,q,\alpha),\mathcal{A}_{V})\otimes\mathbb{Z}[c^{\pm 1}] \hookrightarrow BSk_{1}(\Sigma,q,\alpha,*)\] given by viewing a geodesic as a path with the same boundary conditions proves that \(\mathcal{F}_{0,\kappa=1}\) is an isomorphism. ### The proof of Theorem 1.1, general case We first extend Proposition 4.9 to \(\kappa\geq 1\). **Lemma 4.10**.: _Let \(\kappa\geq 1\). Then \(\mathcal{F}_{0}\) is an isomorphism._ Proof.: Let \(\kappa\geq 1\) and \(\hbar=0\). \(BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*)_{c}|_{\hbar=0}\) no longer keeps track of braid crossings and thus is generated over \(\mathbb{Z}[c^{\pm 1}]\) by elements of the form \(\otimes_{i=1}^{\kappa}H_{0}(\Omega(\Sigma,q_{i},\alpha_{\rho(i)}))_{c}= \otimes_{i=1}^{\kappa}BSk_{1}(\Sigma,q_{i},\alpha_{\rho(i)},*)_{c}\), where \(\rho\in S_{\kappa}\). Each homotopy class of paths in \(H_{0}(\Omega(\Sigma,q_{i},\alpha_{\rho(i)}))_{c}\) contains a unique \(V\)-perturbed geodesic from \(q_{i}\) to \(\alpha_{\rho(i)}\). Applying the Fenchel duality in Section 3.1 gives us a bijection between \(V\)-perturbed geodesics and their dual Hamiltonian chords. By Proposition 4.9, this bijection is given by \(\mathcal{F}_{0,\kappa=1}\) after identifying generators of \(CW(\Sigma,q_{i},\alpha_{\rho(i)})\) with Hamiltonian chords from \(q_{i}\) to \(\alpha_{\rho(i)}\). Since \(\hbar=0\), the only maps \(u\) contributing to \(\mathcal{F}_{0}\) are such that \(\chi(u)=\kappa\). The domain is then \(\kappa\) pseudoholomorphic disks, each of which is counted in the map \(\mathcal{F}_{0,\kappa=1}\). Given a generator \[\boldsymbol{x}=\{x_{1\rho(1)},\ldots,x_{\kappa\rho(\kappa)}\}\in CW(\sqcup_{i=1 }^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{\kappa}N^{*}\alpha_{i})_{c},\] where \(x_{i\rho(i)}\in CF(\phi^{1}_{H_{V}}(T_{q_{i}}^{*}\Sigma),N^{*}\alpha_{\rho(i)})\), \[\mathcal{F}_{0}(\boldsymbol{x})=\otimes_{i=1}^{\kappa}\mathcal{F}_{0,\kappa=1} (x_{i\rho(i)})=\otimes_{i=1}^{\kappa}[\mathcal{L}(x_{i\rho(i)})]\in\otimes_{i= 1}^{\kappa}BSk_{1}(\Sigma,q_{i},\alpha_{\rho(i)},*)_{c}.\] We construct an inverse \[\mathcal{F}_{0}^{-1}:BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},* )_{c}|_{\hbar=0}\longrightarrow CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma, \sqcup_{i=1}^{\kappa}N^{*}\alpha_{i})_{c}\] given by \[\boldsymbol{\gamma}=\{\gamma_{1\rho(1)},\ldots,\gamma_{\kappa\rho(\kappa)}\} \mapsto\{\mathcal{F}_{0,\kappa=1}^{-1}(\gamma_{1\rho(1)}),\ldots,\mathcal{F}_ {0,\kappa=1}^{-1}(\gamma_{\kappa\rho(\kappa)})\}.\] Thus \(\mathcal{F}_{0}\) is an isomorphism. We use Lemma 4.10 and reintroduce \(\hbar\). It suffices to show that \(\mathcal{F}\) is a bijection. We repeat the argument in [HTY a]. Proof of Theorem 1.1.: _Injectivity of \(\mathcal{F}\)_: Suppose that there exists \(\boldsymbol{a}\neq 0\) such that \(\mathcal{F}(\boldsymbol{a})=0\). We can write \(\boldsymbol{a}=\sum_{i\geq 0}\hbar^{i}\boldsymbol{a}_{i}\), where \(\boldsymbol{a}_{i}\in CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1 }^{\kappa}N^{*}\alpha_{i})_{c}|_{\hbar=0}\). Since the codomain \(BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*)\) has no \(\hbar\)-torsion, it follows that \(\boldsymbol{a}_{0}\neq 0\). Then setting \(\hbar=0\), we have \(\mathcal{F}(\boldsymbol{a}_{0})=\mathcal{F}(\boldsymbol{a})=0\). This implies that \(\mathcal{F}_{0}(\boldsymbol{a}_{0})=0\), and thus \(\boldsymbol{a}_{0}=0\), a contradiction. Therefore, \(\mathcal{F}\) is injective. _Surjectivity of \(\mathcal{F}\)_: Let \(\boldsymbol{b}\in BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*)\). By Lemma 4.10, there exists \(\boldsymbol{a}_{0}\) such that \(\mathcal{F}(\boldsymbol{a}_{0})\equiv\boldsymbol{b}\,(\text{mod}\,\,\hbar)\). Let \[\boldsymbol{b_{1}}=\frac{\boldsymbol{b}-\mathcal{F}(\boldsymbol{a}_{0})}{ \hbar}|_{\hbar=0}.\] Then there exists an \(\boldsymbol{a}_{1}\) such that \(\mathcal{F}(\boldsymbol{a}_{1})\equiv\boldsymbol{b}_{1}\,(\text{mod}\,\,\hbar)\). Repeating this procedure, we get \(\mathcal{F}(\sum_{i\geq 0}\hbar^{i}\boldsymbol{a}_{i})=\boldsymbol{b}\). Thus \(\mathcal{F}\) is surjective. Geometric realization of \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{\kappa}N^{*}\alpha_{ i})_{c}\) as a module over the braid skein algebra ### Algebraic action We describe the (right) braid skein-module structure on \(BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*)\). Specifically, we give the action of the braid skein algebra \(BSk_{\kappa}(\Sigma,\boldsymbol{q},*)\) on our proposed module \(BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*)\). Let \([\gamma_{1}]\) be an element of the braid skein algebra \(BSk_{\kappa}(\Sigma,\boldsymbol{q},*)\) and \([\gamma_{2}]\) be an element of \(BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*)\). Then \([\gamma_{1}]\) and \([\gamma_{2}]\) represent \(c\)-deformed homotopy classes of paths in \(\Omega(\text{UConf}_{\kappa}((\Sigma\backslash\{*\}),\boldsymbol{q})\) and \(\Omega(\text{UConf}_{\kappa}((\Sigma\backslash\{*\}),\boldsymbol{q},\boldsymbol {\alpha})\), respectively. Suppose \(\gamma_{i}\) is a representative for \([\gamma_{i}]\), where \(\gamma_{i}\) is an element of \(\Omega(\text{UConf}_{\kappa}((\Sigma\setminus*))\otimes\mathbb{Z}[c^{\pm 1}] \otimes\mathbb{Z}[[\hbar]]\) satisfying the appropriate boundary conditions for the configuration space. **Proposition 5.1**.: \(BSk_{\kappa}(\Sigma,\boldsymbol{q},*)\) _acts on \(BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*)\) by the map_ \[\rho:BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*)\otimes BSk_{ \kappa}(\Sigma,\boldsymbol{q},*)\longrightarrow BSk_{\kappa}(\Sigma, \boldsymbol{q},\boldsymbol{\alpha},*)\] _given by_ \[\rho([\gamma_{2}]_{c},[\gamma_{1}]_{c})\mapsto[\gamma_{1}\gamma_{2}]_{c}.\] Proof.: This follows directly from the product defined on the braid skein algebra. _Notation_: We will denote the product as a left multiplication to be more compatible with the composition of braids. Specifically, given \(\gamma_{1}\in BSk_{\kappa}(\Sigma,\mathbf{q},*),\ \gamma_{2}\in BSk_{\kappa}(\Sigma,\mathbf{q},\mathbf{ \alpha},*)\), we write \(\gamma_{1}\gamma_{2}:=\rho(\gamma_{2},\gamma_{1})\). ### Geometric action We make use of the bijection between elements in wrapped HDHF and \(\kappa\)-tuples of perturbed geodesics. Consider the Hamiltonian \(H_{V}(t,q,p)=\frac{1}{2}|p|^{2}+V(t,q)\) and its Fenchel dual \(L_{V}(t,q,v)=\frac{1}{2}|v|^{2}-V(t,q)\). Then, as in Section 3, we consider the Morse function \(\mathcal{A}_{V}\) on the path space \(\Omega^{1,2}(\Sigma,q,q^{\prime})\) defined by \[\mathcal{A}_{V}(\gamma)=\int_{0}^{1}L_{V}(t,\gamma(t),\dot{\gamma}(t))dt.\] We generalize this to \(\kappa\) strands by considering the path spaces \[\Omega_{\rho}(\Sigma,\mathbf{q},\mathbf{q}):=\prod_{i=1}^{\kappa}\Omega( \Sigma,q_{i},q_{\rho(i)}), \Omega_{\rho}^{1,2}(\Sigma,\mathbf{q},\mathbf{q}):=\prod_{i=1}^{\kappa} \Omega^{1,2}(\Sigma,q_{i},q_{\rho(i)}),\] \[\Omega^{1,2}(\Sigma,\mathbf{q},\mathbf{q}):=\bigsqcup_{\rho\in S_{\kappa} }\Omega_{\rho}^{1,2}(\Sigma,\mathbf{q},\mathbf{q}),\] where \(\rho\in S_{\kappa}\) is a permutation. Then given \(\mathbf{\gamma}\in\Omega_{\rho}^{1,2}(\Sigma,\mathbf{q},\mathbf{q})\), we define \[\mathcal{A}_{V}(\mathbf{\gamma})=\sum_{i}\mathcal{A}_{V}(\gamma_{i}).\] For generic \(V\), the action functional \(\mathcal{A}_{V}\) on \(\Omega_{\rho}^{1,2}(\Sigma,\mathbf{q},\mathbf{q})\) is a Morse function which satisfies the Palais-Smale condition. The critical points of \(\mathcal{A}_{V}\) are exactly the \(\kappa\)-tuples of perturbed geodesics which are in bijection with the elements of our wrapped HDHF. We define the path space \(\Omega^{1,2}(\Sigma,\mathbf{q},\mathbf{\alpha})\) in a similar manner, the only difference being the end points of the paths are confined to \(\alpha_{i}\) instead of \(q_{i}^{\prime}\). Let \(\mathbf{y}\in CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma)\) and \(\mathbf{x}\in CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{\kappa} N^{*}\alpha_{i})\). We want to define the product \(\mathbf{y}\cdot\mathbf{x}\). On one hand, this has already been done in the framework of HDHF ([1]), where a \(\mu^{2}\) map gives us the product \(\mathbf{y}\cdot\mathbf{x}=\mu^{2}(\mathbf{x},\mathbf{y})\in CW(\sqcup_{i=1}^{\kappa}T_{q_{i} }^{*}\Sigma,\sqcup_{i=1}^{\kappa}N^{*}\alpha_{i})\). We aim to give a more geometrically intuitive interpretation of the product using the bijection with perturbed geodesics, making it more compatible with the algebraic action given in Section 5.1. We define a similar Morse function where the endpoint of the curve is allowed to move along our curves \(\alpha_{1},\dots,\alpha_{\kappa}\). With this setup, the geometric action of \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma)_{c}\) on \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{\kappa}N^{*}\alpha _{i})_{c}\) is given by translating elements to \(V\)-perturbed geodesics viewed as paths, concatenating the paths, then performing the Morse gradient flow discussed above. The result is a \(V\)-perturbed geodesic from \(\sqcup_{i=1}^{\kappa}q_{i}\) to \(\sqcup_{i=1}^{\kappa}\alpha_{i}\), which is then identified with an element of \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{\kappa}N^{*}\alpha _{i})_{c}\). When performing the Morse gradient flow after concatenation, we must consider passings through the marked point as well as any crossings of the perturbed geodesics. Similarly as before, to account for the marked point, we have the homotopy \(H\) from the concatenated curves \(\gamma_{1}\gamma_{2}\) to the resulting geodesic \(\gamma\) and we impose on this the \(c\)-deformed homotopy relation \([\gamma]=c^{2(H,*)}[\gamma_{1}\gamma_{2}]\). We account for any crossings that occur on \(\Sigma\) during our flow by applying the HOMFLY skein relation at all crossings. We can view each \(\kappa\)-tuple of perturbed geodesics as a braid in \([0,1]\times\Sigma\) by mapping \(\gamma(t)\mapsto(t,\gamma(t))\). Then each crossing will either be a positive crossing \(\sigma_{i}\) or a negative crossing \(\sigma_{i}^{-1}\), and our HOMFLY skein relation is \(\sigma_{i}-\sigma_{i}^{-1}=\hbar e\), where \(e\) is the resolution of the crossing. We call any time a positive crossing changes to a negative crossing (or vice versa) a switching. In the Morse theory view, this results in a bifurcated trajectory; one trajectory is a continuation of the switching and the other is a resolution of the crossing with a factor of \(\hbar\). As a braid, we get a sum of the same braid with the crossing reversed and a resolved crossing with an extra \(\hbar\) factor. Using this relation, we can resolve all of the crossings into \(\mathbb{Z}[\hbar,c^{\pm 1}]\)-linear combinations of braids which are tuples of \(V\)-perturbed geodesics. ### Equivalence of DAHA modules Our main goal of this section is to show that the following diagram commutes: (5.1) We recall each map of the diagram: The top maps \[\mathcal{F}_{1}:CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{ \kappa}N^{*}\alpha_{i})_{c}\longrightarrow BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*)\] and \[\mathcal{F}_{2}:CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma)_{c}\longrightarrow H _{\kappa}(\Sigma,\boldsymbol{q})\otimes\mathbb{Z}[[\hbar]]\] are the isomorphisms defined in Sections 4.3 and 4.2, respectively. The left map \[\mu^{2}:CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{\kappa}N^{ *}\alpha_{i})_{c}\otimes CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma)_{c} \longrightarrow CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{ \kappa}N^{*}\alpha_{i})_{c}\] is the \(A_{\infty}\)-map defined by Equation (2.2) with \(m=2\). It is given by a count of holomorphic maps which project onto a thrice-punctured disk satisfying the usual boundary conditions; see [1] for more details. The right map \[\rho:BSk_{\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*)\otimes(H_{ \kappa}(\Sigma,\boldsymbol{q})\otimes\mathbb{Z}[[\hbar]])\longrightarrow BSk_ {\kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*)\] is given in Section 5.1 after identifying \(H_{\kappa}(\Sigma,\boldsymbol{q})\otimes\mathbb{Z}[[\hbar]]\) with \(BSk_{\kappa}(\Sigma,\boldsymbol{q},*)\). The bottom map is again the evaluation map from Section 4.3. **Lemma 5.2**.: _Diagram 5.1 commutes._ Before we prove the lemma, we introduce a moduli space of holomorphic curves similar to that of [11] a, Section 6.2]. Let \(T_{2}:=D_{3}\) be our \(A_{\infty}\)-base where \(\partial_{i}T_{2}=\partial_{i}D_{3}\). Let \(\mathcal{T}_{2}\) be the moduli space of \(T_{2}\) modulo automorphisms, and choose representatives \(T_{2}\) of equivalence classes in a smooth manner. Let \(\pi_{T^{*}\Sigma}\) be the projection \(T_{2}\times T^{*}\Sigma\to T^{*}\Sigma\) and choose a sufficiently generic consistent collection of compatible almost complex structures such that they are close to a split almost complex structure projecting holomorphically to \(T_{2}\), as in Section 2.1. Perturb the 0-section near the \(\alpha_{i}\) and let \(\boldsymbol{x}\) be the tuple of intersections \(\alpha_{i}\cap\phi_{H^{\prime}}(\Sigma)\) corresponding to the bottom generators, where \(\phi_{H^{\prime}}(\Sigma)\) is the perturbed 0-section. We denote by \(\mathcal{H}(\boldsymbol{q}^{\prime\prime},\boldsymbol{y}^{\prime},\boldsymbol {y},\boldsymbol{x})\) the moduli space of maps \[u:(\dot{F},j)\longrightarrow(T_{2}\times T^{*}\Sigma,J_{T_{2}}),\] where \((F,j)\) is a compact Riemann surface with boundary, \(\boldsymbol{p}_{0}\), \(\boldsymbol{p}_{1}\), \(\boldsymbol{p}_{2}\), \(\boldsymbol{p}_{3}\) are disjoint tuples of boundary punctures of \(F\) and \(\dot{F}=F\setminus\cup_{i}\boldsymbol{p}_{i}\), satisfying: \[\begin{cases}du\circ j=J_{T_{2}}\circ du;\\ \pi_{T^{*}\Sigma}\circ u(z)\in\phi^{2}_{H_{V}}(\sqcup_{i=1}^{\kappa}T^{*}_{q_{i} }\Sigma)\text{ if }\pi_{T_{2}}\circ u(z)\subset\partial_{0}T_{2};\\ \text{ each component of }\partial\dot{F}\text{ that projects to }\partial_{0}T_{2}\text{ maps to a distinct }\phi^{2}_{H_{V}}(T^{*}_{q_{i}}\Sigma);\\ \pi_{T^{*}\Sigma}\circ u(z)\in\phi^{1}_{H_{V}}(\sqcup_{i=1}^{\kappa}T^{*}_{q_{i }}\Sigma)\text{ if }\pi_{T_{2}}\circ u(z)\subset\partial_{1}T_{2};\\ \text{ each component of }\partial\dot{F}\text{ that projects to }\partial_{1}T_{2}\text{ maps to a distinct }\phi^{1}_{H_{V}}(T^{*}_{q_{i}}\Sigma);\\ \pi_{T^{*}\Sigma}\circ u(z)\in\sqcup_{i=1}^{\kappa}N^{*}\alpha_{i}\text{ if }\pi_{T_{2}}\circ u(z)\subset\partial_{2}T_{2};\\ \text{ each component of }\partial\dot{F}\text{ that projects to }\partial_{2}T_{2}\text{ maps to a distinct }N^{*}\alpha_{i};\\ \pi_{T^{*}\Sigma}\circ u(z)\in\Sigma\times\{0\}\subset T^{*}\Sigma\text{ if }\pi_{T_{2}}\circ u(z)\subset\partial_{3}T_{2};\\ \pi_{T^{*}\Sigma}\circ u\text{ tends to }\boldsymbol{q}^{\prime\prime},\ \boldsymbol{y}^{ \prime},\ \boldsymbol{y},\ \boldsymbol{x}\text{ as }s_{0},s_{1},s_{2},s_{3}\to+\infty;\\ \pi_{T_{1}}\circ u\text{ is a }\kappa\text{-fold branched cover of a fixed }T_{2}\in\mathcal{T}_{2}.\end{cases}\] In simpler terms, we look at the moduli space of holomorphic curves between the Lagrangians involved and the zero section of \(T^{*}\Sigma\) in the framework of HDHF. **Lemma 5.3**.: _There exists a sufficiently generic consistent collection of almost complex structures such that the moduli space \(\mathcal{H}(\boldsymbol{q}^{\prime\prime},\boldsymbol{y}^{\prime},\boldsymbol {y},\boldsymbol{x})\) is of dimension 1 and is transversely cut out for all \(\boldsymbol{x},\ \boldsymbol{y},\ \boldsymbol{y}^{\prime}\) and \(\boldsymbol{q}^{\prime\prime}\). Moreover, \(\mathcal{H}(\boldsymbol{q}^{\prime\prime},\boldsymbol{y}^{\prime},\boldsymbol {y},\boldsymbol{x})\) admits a compactification \(\overline{\mathcal{H}}(\boldsymbol{q}^{\prime\prime},\boldsymbol{y}^{\prime}, \boldsymbol{y},\boldsymbol{x})\) such that its boundary \(\partial\overline{\mathcal{H}}(\boldsymbol{q}^{\prime\prime},\boldsymbol{y}^ {\prime},\boldsymbol{y},\boldsymbol{x})\) is of dimension 0 and contains discrete broken or nodal curves._ Proof.: This is identical to Lemma 6.4 in [11]. As in Section 3.3, we define a map from \(\mathcal{H}(\boldsymbol{q}^{\prime\prime},\boldsymbol{y}^{\prime},\boldsymbol {y},\boldsymbol{x})\times[0,1]\longrightarrow(\Sigma)^{\kappa}\) by \[\gamma(u)(t)=(\pi_{T^{*}\Sigma}\circ u)\circ(\pi_{\Sigma}\circ u)^{-1}\circ \tau(t), \tag{5.2}\] where \(\tau:[0,1]\to\partial_{3}T_{2}\) parametrizes the boundary arc from \(p_{0}\) to \(p_{3}\). Let \[\mathcal{H}_{0}(\boldsymbol{q}^{\prime\prime},\boldsymbol{y}^{\prime}, \boldsymbol{y},\boldsymbol{x})=\{u\in\mathcal{H}(\boldsymbol{q}^{\prime\prime},\boldsymbol{y}^{\prime},\boldsymbol{y},\boldsymbol{x})\mid\gamma(u)(t)\in \text{UCConf}_{\kappa}(\Sigma\setminus\{*\})\text{ for all }t\}.\] As before, we define the evaluation map \[\mathcal{G}:\mathcal{H}_{0}(\boldsymbol{q}^{\prime\prime}, \boldsymbol{y}^{\prime},\boldsymbol{y},\boldsymbol{x})\longrightarrow BSk_{ \kappa}(\Sigma,\boldsymbol{q},\boldsymbol{\alpha},*),\] \[u\mapsto(-1)^{\natural(u)}\cdot c^{2(u,*)}\cdot\hbar^{\kappa- \chi(u)}\cdot[\gamma(u)].\] Figure 7. The \(A_{\infty}\)-base \(T_{2}\) with boundary conditions Proof of Lemma 5.2.: We analyze the boundary of the index \(1\) moduli space \(\overline{\mathcal{H}}(\boldsymbol{q}^{\prime\prime},\boldsymbol{y}^{\prime}, \boldsymbol{y},\boldsymbol{x})\) by considering the possible degenerations. Let \(\overline{\mathcal{H}}^{X}(\boldsymbol{q}^{\prime\prime},\boldsymbol{y}^{ \prime},\boldsymbol{y},\boldsymbol{x})\) be the subset of \(\overline{\mathcal{H}}(\boldsymbol{q}^{\prime\prime},\boldsymbol{y}^{\prime},\boldsymbol{y},\boldsymbol{x})\) consisting of maps with \(\chi(u)=\chi\). For a generic \(u\), \(u\in\mathcal{H}_{0}(\boldsymbol{q}^{\prime\prime},\boldsymbol{y}^{\prime}, \boldsymbol{y},\boldsymbol{x})\). However, for a \(1\)-parameter family \(u_{t}\in\mathcal{H}(\boldsymbol{q}^{\prime\prime},\boldsymbol{y}^{\prime}, \boldsymbol{y},\boldsymbol{x})\), \(\gamma(u_{t})\) may intersect the marked point \(*\) at some \(t\in(0,1)\). However, since we are taking \(c\)-deformed homotopy classes, we are guaranteed that \(\mathcal{G}(u_{0})=\mathcal{G}(u_{1})\). Hence we will not worry about intersections with \(*\). All codimension-\(1\) degenerations occur in the \(A_{\infty}\)-base direction, giving us a nice characterization of the possible breakings. The three types of boundary degenerations are: 1. \(\bigsqcup_{\boldsymbol{y}^{\prime\prime},x^{\prime}+x^{\prime\prime}-\kappa= \chi}\mathcal{M}^{\text{ind}=0,\chi^{\prime}}(\boldsymbol{y}^{\prime},\ \boldsymbol{y},\ \boldsymbol{y}^{\prime\prime})\times \mathcal{H}^{\text{ind}=0,\chi^{\prime\prime}}(\boldsymbol{q}^{\prime\prime}, \ \boldsymbol{y}^{\prime\prime},\ \boldsymbol{x})\); 2. \(\bigsqcup_{\boldsymbol{q}^{\prime},\chi^{\prime}+x^{\prime\prime}-\kappa= \chi}\mathcal{H}^{\text{ind}=0,\chi^{\prime}}(\boldsymbol{q}^{\prime\prime}, \ \boldsymbol{y}^{\prime},\ \boldsymbol{q}^{\prime})\times\mathcal{H}^{\text{ind}=0,\chi^{\prime\prime}}( \boldsymbol{q}^{\prime},\ \boldsymbol{y},\ \boldsymbol{x})\); 3. the set \(\partial_{n}\overline{\mathcal{H}}^{\text{ind}=1,\chi}_{n}(\boldsymbol{q}^{ \prime\prime},\boldsymbol{y}^{\prime},\boldsymbol{y},\boldsymbol{x})\) with a nodal degeneration along \(\Sigma\). The first type is shown on the left-hand side of Figure 8 and contributes \(\mathcal{F}_{1}(\mu^{2}(\boldsymbol{y},\ \boldsymbol{y}^{\prime}))\). The second type is shown on the right-hand side of Figure 8 and contributes \(\rho((\mathcal{F}_{1}\otimes\mathcal{F}_{2})(\boldsymbol{y},\ \boldsymbol{y}^{\prime}))\). In fact, all contributions to \(\mathcal{F}_{1}(\mu^{2}(\boldsymbol{y},\ \boldsymbol{y}^{\prime}))\) and \(\rho((\mathcal{F}_{1}\otimes\mathcal{F}_{2})(\boldsymbol{y},\ \boldsymbol{y}^{\prime}))\) come from such degenerations. The proof of Proposition 6.5 in [11] shows that the total contribution of the third type over all Euler characteristics \(\chi\) is zero. Hence it follows that \(\mathcal{F}_{1}(\mu^{2}(\boldsymbol{y},\ \boldsymbol{y}^{\prime}))=\rho(( \mathcal{F}_{1}\otimes\mathcal{F}_{2})(\boldsymbol{y},\ \boldsymbol{y}^{\prime}))\) and so the diagram commutes. We have thus shown that the map \(\mathcal{F}_{1}:CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}\Sigma,\sqcup_{i=1}^{ \kappa}N^{*}\alpha_{i})_{c}\longrightarrow BSk_{\kappa}(\Sigma,\boldsymbol{q}, \boldsymbol{\alpha},*)\) realizes the wrapped HDHF of \(\kappa\) cotangent fibers and \(\kappa\) conormal bundles of simple closed curves as a DAHA-module. ## 6. The enhanced polynomial representation In this section, we specialize to \(\Sigma=T^{2}\). We introduce the double affine Hecke algebra (DAHA) along with its polynomial representation. After fixing a configuration of points \(q_{1},\ldots,q_{\kappa}\) and curves \(\alpha_{1},\ldots,\alpha_{\kappa}\) in \(T^{2}\), we define the enhanced polynomial representation of DAHA on \(CW(\sqcup_{i=1}^{\kappa}T_{q_{i}}^{*}T^{2},\sqcup_{i=1}^{\kappa}N^{*}\alpha_{ i})_{c}\) and prove Theorem 1.3. ### Double affine Hecke algebra and its polynomial representation We briefly review the DAHA and its skein-theoretic realization using braids in the punctured torus. For more details, refer to [10], where these results are proven and discussed at length. Viewing \(T^{2}\) as a square \(I\times I\) with opposite sides identified, we choose \(*=(\frac{1}{2},\frac{1}{2})\) and let the \(\kappa\) points \(q_{1},\ldots,q_{\kappa}\) line up in increasing fashion along the lower part of the diagonal from \((0,0)\) to \(*\). We choose a convenient basis for the braids in \(B_{\kappa}(T^{2}\setminus\{*\},\boldsymbol{q})\). Let \(x_{i}\) (respectively, \(y_{i}\)) be the braid which consists of the point \(q_{i}\) moving uniformly around the \((-1,0)\) (respectively, \((0,1)\)) curve. Let \(\sigma_{i}\) for \(1\leq i\leq\kappa-1\) be the braid which locally exchanges the strings from \(q_{i}\) and \(q_{i+1}\) in a counterclockwise direction when looking down onto \(T^{2}\), as shown in Figure 7.1(B) below. Figure 8. \(T_{2}\) degenerations The key element in this geometric realization of DAHA is the next theorem due to Morton and Samuelson. **Theorem 6.1** (Morton-Samuelson).: _The braid skein algebra \(BSk_{\kappa}(T^{2},\boldsymbol{q},\ast)\) is isomorphic to the double affine Hecke algebra \(\breve{H}_{\kappa}\)._ We now fix the presentation for the skein algebra, and therefore the DAHA, that we will be using. **Theorem 6.2**.: _The double affine Hecke algebra \(\breve{H}_{\kappa}\) can be presented by the braids \(\sigma_{1},\ldots,\sigma_{\kappa-1},x_{1},y_{1}\) with relations:_ 1. \(\sigma_{i}\sigma_{j}=\sigma_{j}\sigma_{i},\;|i-j|>1,\)__ 2. \(\sigma_{i}\sigma_{i+1}\sigma_{i}=\sigma_{i+1}\sigma_{i}\sigma_{i+1},\)__ 3. \(\sigma_{i}x_{1}=x_{1}\sigma_{i},\;i>1,\)__ 4. \(\sigma_{i}y_{1}=y_{1}\sigma_{i},\;i>1,\)__ 5. \(x_{1}\sigma_{1}x_{1}\sigma_{1}=\sigma_{1}x_{1}\sigma_{1}x_{1},\)__ 6. \(y_{1}\sigma_{1}y_{1}\sigma_{1}=\sigma_{1}y_{1}\sigma_{1}y_{1},\)__ 7. \(x_{1}\sigma_{1}y_{1}\sigma_{1}^{-1}=\sigma_{1}y_{1}\sigma_{1}x_{1},\)__ 8. \((\sigma_{1}-s)(\sigma_{1}+s^{-1})=0,\)__ 9. \(x_{1}^{-1}y_{1}x_{1}y_{1}^{-1}=c^{2}\sigma_{1}\sigma_{2}\cdots\sigma_{\kappa- 1}\sigma_{\kappa-1}\cdots\sigma_{2}\sigma_{1}.\)__ _Remark 6.3_.: The relations stated above are slightly different than those in [13]. Specifically, we replace \(x_{1}\) with \(x_{1}^{-1}\) since the generator \(x_{i}\) in [13] corresponds to a loop based at \(q_{1}\) in the \((1,0)\) direction whereas our generator goes in the \((-1,0)\) direction. The relations have been adjusted with this in mind. Although \(\breve{H}_{\kappa}\) can be generated by \(\sigma_{1},\cdots,\sigma_{\kappa-1},\;x_{1},\) and \(y_{1}\), it will be convenient to make explicit the expression for the braids \(x_{i}\) and \(y_{i}\). Using the relations \(\sigma_{i}x_{i}\sigma_{i}=x_{i+1}\) and \(\sigma_{i}y_{i}\sigma_{i}=y_{i+1}\), it follows that \(x_{i}=\sigma_{i-1}\cdots\sigma_{1}x_{1}\sigma_{1}\cdots\sigma_{i-1}\) and \(y_{i}=\sigma_{i-1}\cdots\sigma_{1}y_{1}\sigma_{1}\cdots\sigma_{i-1}\). The DAHA \(\breve{H}_{\kappa}\) has a representation on \(\mathbb{Z}[[s]][c^{\pm 1}][X_{1}^{\pm 1},\cdots,X_{\kappa}^{\pm 1}]\) called the _polynomial representation_. Note that in this paper we are concerned with the polynomial representation of the DAHA \(\breve{H}_{\kappa}\) instead of the more common spherical DAHA; we will use the presentation given in [10]. **Definition 6.4**.: The _polynomial representation_ of \(\breve{H}_{\kappa}\) on \(\mathbb{Z}[[s]][c^{\pm 1}][X_{1}^{\pm 1},\cdots,X_{\kappa}^{\pm 1}]\) is defined by the following: \[x_{i} \mapsto X_{i},\] \[\sigma_{i} \mapsto s\tau_{i}+\frac{s-s^{-1}}{X_{i}X_{i+1}^{-1}-1}(\tau_{i}-1),\] \[y_{1} \mapsto\sigma_{1}^{-1}\cdots\sigma_{\kappa-1}^{-1}\omega,\] where \(\tau_{i}\) permutes \(X_{i}\) and \(X_{i+1}\) and for any \(f\in\mathbb{Z}[[s]][c^{\pm 1}][X_{1}^{\pm 1},\cdots,X_{\kappa}^{\pm 1}]\), \[(\omega f)(X_{1},\cdots,X_{\kappa})=f(c^{2}X_{\kappa},X_{1},\cdots,X_{\kappa-1 }).\] Figure 9. Generators for the braid group on the punctured torus Denote the action above by \[p:\ddot{H}_{\kappa}\times\mathbb{Z}[[s]][c^{\pm 1}][X_{1}^{\pm 1},\cdots,X_{\kappa}^{ \pm 1}]\longrightarrow\mathbb{Z}[[s]][c^{\pm 1}][X_{1}^{\pm 1},\cdots,X_{\kappa}^{\pm 1}].\] _Remark 6.5_.: Observe that in the definition of the DAHA, the variable \(s\) does not appear on its own. The HOMFLY skein relation in the definition of the braid skein algebra uses \(s-s^{-1}\); in the presentation of Theorem 6.2, expanding the relation (8) gives the term \(s-s^{-1}\). For our purposes, we let \(\hbar=s-s^{-1}\) and change the coefficient ring from \(\mathbb{Z}[[s]][c^{\pm 1}]\) to \(\mathbb{Z}[[\hbar]][c^{\pm 1}]\) when we are not dealing with the polynomial representation. ### The enhanced polynomial representation In this section, we compute the DAHA-module \(BSk_{\kappa}(T^{2},\mathbf{q},\mathbf{\alpha},*)\) for the configuration of points \(q_{1},\ldots,q_{\kappa}\) as in Section 6.1 and simple closed curves \(\alpha_{1},\ldots,\alpha_{\kappa}\) similar to Section 2.4. Viewing \(T^{2}\) as \(I\times I\) with opposite sides identified, let \(q_{i}=(\frac{i}{2(\kappa+1)},\frac{i}{2(\kappa+1)})\) and \(\alpha_{i}=\{\frac{1}{2}+\frac{i}{2\kappa+2}\}\times I\). Moreover, let \(*=(\frac{1}{2},\frac{1}{2})\). Choose a perturbation term \(V(t,q)\) such that \(q_{i}^{\prime}=\phi_{H_{V}}^{1}(T_{q_{i}}^{*}T^{2})\cap T^{2}\) is to the left of \(q_{i}\) when viewed on \(I\times I\) and \(|q_{i}-q_{i}^{\prime}|>|q_{j}-q_{j}^{\prime}|\) whenever \(i<j\). We introduce an additional parameter \(d\) which keeps track of the ends of the braids sliding along the \(\alpha_{i}\). Specifically, consider the projection \[\pi_{\alpha}:\alpha_{1}\times\cdots\times\alpha_{\kappa}\longrightarrow T^{\kappa}\] of the \(\alpha_{i}\) to \(T^{\kappa}=(S^{1})^{\kappa}\) given by dropping the \(x\) coordinate for each \(\alpha_{i}\). Let \[\Delta=\{(x_{1},\ldots,x_{\kappa})\mid x_{i}=x_{j}\text{ for some }i\neq j\}\] be the big diagonal in \(T^{\kappa}\). The parameter \(d\) counts signed intersections with \(\Delta\) as the braids are isotoped. **Definition 6.6**.: The \(d\)_-deformed braid skein group \(BSk_{\kappa}(T^{2},\mathbf{q},\mathbf{\alpha},*)_{d}\)_ is the free \(\mathbb{Z}[[\hbar]][c^{\pm 1},d^{\pm 1}]\)-module generated by elements of the braid skein group \(BSk_{\kappa}(T^{2},\mathbf{q},\mathbf{\alpha},*)\) subject to the _ends-slide relation_: \[\begin{array}{|c| a simple closed curve between the set of points \(q_{i}\) and curves \(\alpha_{i}\). We define the signed intersection number \(n_{i}=\langle\gamma_{i},\alpha\rangle\) of each perturbed geodesic with \(\alpha\). The sign of \(n_{i}\) is set to be positive if \(\gamma_{i}\) is in the \((-1,0)\) direction and negative if it is in the \((1,0)\) direction. This intersection number is similar to the process described in the proof of Lemma 2.9. The following is a slight enhancement of Lemma 1.2: **Lemma 1.2**.: \(BSk_{\kappa}(T^{2},\mathbf{q},\mathbf{\alpha},*)_{d}\simeq(\mathbb{Z}[a_{1}^{\pm 1}, \ldots,a_{\kappa}^{\pm 1}]\otimes\mathbb{Z}[S_{\kappa}])\otimes\mathbb{Z}[c^{ \pm 1}]\otimes\mathbb{Z}[[h]]\otimes\mathbb{Z}[d^{\pm 1}]\)_._ We denote the module with this presentation by \(PR_{\kappa}\). Proof.: Every element in \(BSk_{\kappa}(T^{2},\mathbf{q},\mathbf{\alpha},*)_{d}\) can be identified with a \(\mathbb{Z}[[h]][c^{\pm 1},d^{\pm 1}]\)-linear combination of \(\kappa\)-tuples of perturbed geodesics viewed as a braid. This is done by homotoping the strands to perturbed geodesics while keeping track of intersections within the strands, with the marked point, and with the big diagonal \(\Delta\) of \(T^{\kappa}\). Let \[f:BSk_{\kappa}(T^{2},\mathbf{q},\mathbf{\alpha},*)_{d}\longrightarrow(\mathbb{Z}[a_{1 }^{\pm 1},\ldots,a_{\kappa}^{\pm 1}]\otimes\mathbb{Z}[S_{\kappa}])\otimes \mathbb{Z}[c^{\pm 1}]\otimes\mathbb{Z}[[h]]\otimes\mathbb{Z}[d^{\pm 1}]\] be the \(\mathbb{Z}[[h]][c^{\pm 1},d^{\pm 1}]\)-linear map which sends \[\mathbf{\gamma}=\{\gamma_{1},\ldots,\gamma_{\kappa}\}\mapsto(a_{1}^{n_{1}}\cdots a _{\kappa}^{n_{\kappa}},\sigma),\] where \(\mathbf{\gamma}\) is a \(\kappa\)-tuple of perturbed geodesics and \(n_{i}=\langle\gamma_{i},\alpha\rangle\) is the signed intersection number described above. A generalization of the model computation in Section 2.4 shows us that \(f\) is surjective. That is, we can construct a \(\kappa\)-tuple of perturbed geodesics with the right permutation and intersections with \(\alpha\). Let \(\mathbf{y}\in BSk_{\kappa}(T^{2},\mathbf{q},\mathbf{\alpha},*)_{d}\) and suppose \(f(\mathbf{y})=0\). By an argument similar to the discussion at the end of Section 5.2, we can identify \(\mathbf{y}\) with a \(\mathbb{Z}[[h]][c^{\pm 1},d^{\pm 1}]\)-linear combination of distinct \(\kappa\)-tuples of perturbed geodesics \(\mathbf{\gamma}_{i}\): \[\mathbf{y}=\sum_{i=1}^{n}g_{i}\mathbf{\gamma}_{i},\] where \(g_{i}\in\mathbb{Z}[[h]][c^{\pm 1},d^{\pm 1}]\). Since \(f(\mathbf{y})=0\), it follows that \(\sum_{i=1}^{n}g_{i}f(\mathbf{\gamma}_{i})=0\). Since the perturbed geodesics \(\mathbf{\gamma}_{i}\) are distinct, they belong to different homotopy classes and so their images under \(f\) are linearly independent. It follows that \(\mathbf{y}=0\) and thus \(f\) is injective. Consider an element \((1,\sigma)\in PR_{\kappa}\). This element is represented by \(V\)-perturbed geodesics from each \(q_{i}\) to \(\alpha_{\sigma(i)}\) which do not intersect each other or the curve \(\alpha\). Let \(\sigma_{i}\in\ddot{H}_{\kappa}\), viewed as a braid consisting of strands \(q_{i}\mapsto q_{i+1}^{\prime}\), \(q_{i+1}\mapsto q_{i}^{\prime}\) and \(q_{j}\mapsto q_{j}^{\prime}\) for \(j\neq i,\ i+1\), where by \(a\mapsto b\) we mean "from \(a\) to \(b\)". Our choice of perturbation term \(V(t,q)\) guarantees that the strand from \(q_{i}\) crosses over the strand from \(q_{i+1}\) when projected down from \(T^{2}\times[0,1]\to T^{2}\). **Lemma 6.8**.: _In the situation above, if \(\sigma(i)<\sigma(i+1)\), then the concatenation of the braid \(\sigma_{i}\) and the geodesics representing \((1,\sigma)\) can be isotoped to geodesics \(d^{-1}(1,\sigma_{i}\sigma)\). (In the expression \(\sigma_{i}\sigma\), \(\sigma_{i}\) is viewed as an element of \(S_{\kappa}\) under the projection \(B_{\kappa}(T^{2},\mathbf{q})\to S_{\kappa}\).)_ _On the other hand, if \(\sigma(i)>\sigma(i+1)\), then the concatenation of the braid \(\sigma_{i}\) and the geodesics representing \((1,\sigma)\) is equivalent to a linear combination of geodesics \(h(1,\sigma)+d(1,\sigma_{i}\sigma)\)._ Proof.: Suppose \(\sigma(i)<\sigma(i+1)\). (See the left-hand side of Figure 10.) Slide the end of the strand going to \(\alpha_{\sigma(i+1)}\) along \(\alpha_{\sigma(i+1)}\) past the strand going to \(\alpha_{\sigma(i)}\). Due to the arrangement of the \(\alpha_{j}\), this creates a crossing in which the sliding strand crosses over the other, picking up a factor of \(d^{-1}\). The result is a braid which has a positive crossing at the bottom (by this we mean at a lower \(t\)-coordinate where the braid is in \(T^{2}\times[0,1]\) with coordinates \((q,t)\)) due to the \(\sigma_{i}\) and a negative crossing at the top. Thus we can isotope the two strands apart, arriving at the set of geodesics representing \(d^{-1}(1,\sigma_{i}\sigma)\). Suppose on the other hand that \(\sigma(i)>\sigma(i+1)\). (See the right-hand side of Figure 10.) Resolve the \(\sigma_{i}\) braid as \(\sigma_{i}^{-1}+h\) to get two braids: \(\sigma_{i}^{-1}\cdot(1,\sigma)+h(1,\sigma)\). For the \(\sigma_{i}^{-1}\cdot(1,\sigma)\) braid, slide the strand along \(\alpha_{\sigma(i)}\). This creates a positive crossing and picks up a factor of \(d\). With this positive crossing, we can isotope the strands apart to get a braid \(d(1,\sigma_{i}\sigma)\). Therefore, \(\rho((1,\sigma),\sigma_{i})=\hbar(1,\sigma)+d(1,\sigma_{i}\sigma)\). _Notation_: Recall that \(CW(\sqcup_{i=1}^{\kappa}T_{\bar{u}_{i}}^{*}T^{2},\sqcup_{i=1}^{\kappa}N^{*} \alpha_{i})_{c}\) is a right DAHA-module. It follows that there is a right \(\check{H}_{\kappa}\) action on \(PR_{\kappa}\). It is often convenient to identify elements of \(PR_{\kappa}\) with \(\kappa\)-tuples of paths which are elements of \(BSk_{\kappa}(T^{2},\mathbf{q},\mathbf{\alpha},*)\). Since composition of paths is written as a left multiplication, we choose to adopt the notation for a left module. Let \(h\in\check{H}_{\kappa}\) and \(x\in PR_{\kappa}\), then we denote \(\rho(x,h)=h\cdot x\), where \(\rho\) is as in Section 5.1. After identifying \(h_{1},h_{2}\in\check{H}_{\kappa}\) with braids \(\gamma_{1},\gamma_{2}\) we have that \(\rho(x,\tilde{\rho}(h_{2},h_{1}))=(\gamma_{1}\gamma_{2})\cdot x=\gamma_{1} \cdot(\gamma_{2}\cdot x)\), a left module structure, where \(\tilde{\rho}:BSk_{\kappa}(T^{2},\mathbf{q},\mathbf{q},*)\otimes BSk_{\kappa}(T^{2}, \mathbf{q},\mathbf{q},*)\longrightarrow BSk_{\kappa}(T^{2},\mathbf{q},\mathbf{q},*)\) is the product on the braid skein algebra. **Corollary 6.9**.: _Let \(\sigma_{i}\in\check{H}_{\kappa}\) and \((1,\sigma)\in PR_{\kappa}\). Then, setting \(d=s\),_ \[\sigma_{i}\cdot((1,\sigma)+(1,\sigma_{i}\sigma))=s((1,\sigma)+(1,\sigma_{i} \sigma)).\] Proof.: Suppose \(\sigma(i)<\sigma(i+1)\). Then \(\sigma_{i}\cdot(1,\sigma)=s^{-1}(1,\sigma_{i}\sigma)\) by Lemma 6.8. On the other hand, \((\sigma_{i}\sigma)(i)>(\sigma_{i}\sigma)(i+1)\), so \(\sigma_{i}\cdot(1,\sigma_{i}\sigma)=\hbar(1,\sigma_{i}\sigma)+s(1,\sigma)\). Adding the terms, expanding \(\hbar=s-s^{-1}\), and canceling the \(s^{-1}(1,\sigma_{i}\sigma)\) gives the result. The case \(\sigma(i)>\sigma(i+1)\) follows immediately by letting \(\sigma=\sigma_{i}\sigma\). **Definition 6.10**.: Let \((\mathbf{a},\sigma)=(a_{1}^{n_{1}}\cdots a_{\kappa}^{n_{\kappa}},\sigma)\in \mathbb{Z}[a_{1}^{\pm 1},\ldots,a_{\kappa}^{\pm 1}]\times S_{\kappa}\) be an element of \(PR_{\kappa}\). The _enhanced polynomial representation_ of the DAHA \(\check{H}_{\kappa}\) (with presentation given in Theorem 6.2) on \(PR_{\kappa}\) is defined on generators as follows: (1) \(x_{i}\cdot(\mathbf{a},\sigma)=(a_{i}\cdot\mathbf{a},\sigma),\) (2) \(\sigma_{i}\cdot(1,\sigma)=\begin{cases}d^{-1}(1,\sigma_{i}\sigma)&\text{ if } \sigma(i)<\sigma(i+1)\\ d(1,\sigma_{i}\sigma)+\hbar(1,\sigma)&\text{ if }\sigma(i)>\sigma(i+1)\end{cases},\) Figure 10. The compositions \(\sigma_{2}(1,(12))\) and \(\sigma_{2}(1,(123))\). On the left, we have the case where \(\sigma(i)<\sigma(i+1)\) and sliding the ends of the strands creates a crossing of the dotted purple arcs which allows us to separate the strands. On the right, we have the case where \(\sigma(i)>\sigma(i+1)\) and we see that the strands are linked after sliding the ends of the strands across each other. (3) \(y_{1}\cdot(\mathbf{a},\sigma)=c^{2n_{1}}\tau_{\kappa}^{-1}\cdot(\mathbf{a}_{\tau_{\kappa}},\tau_{\kappa}\sigma)\), where \(\tau_{\kappa}=\sigma_{\kappa-1}\cdots\sigma_{1}\) and \(\mathbf{a}_{\tau_{\kappa}}=a_{\kappa}^{n_{1}}a_{1}^{n_{2}}\cdots a_{\kappa-1}^{n_{ \kappa}}\). (2) in Definition 6.10 only defines the action of \(\sigma_{i}\) on an element \((1,\sigma)\), but we can extend this to an action on \((\mathbf{a},\sigma)\) by using the action of \(x_{i}\) along with the relations of \(\check{H}_{\kappa}\). Since \(\sigma_{i}x_{i}=x_{i+1}\sigma_{i}^{-1}\) and \(\sigma_{i}-\sigma_{i}^{-1}=\hbar\), it follows that \(\sigma_{i}x_{i}=x_{i+1}(\sigma_{i}-\hbar)=x_{i+1}\sigma_{i}-\hbar x_{i+1}\). Similarly, \(\sigma_{i}x_{i+1}=x_{i}\sigma_{i}+\hbar x_{i+1}\). If \(j\neq i,\ i+1\), then \(\sigma_{i}x_{j}=x_{j}\sigma_{i}\). Thus we are able to express any product of \(\sigma_{i}\) and \(x_{j}\) as an expression where all the \(x_{j}\) are in front of the \(\sigma_{i}\). Since \((\mathbf{a},\sigma)=(x_{1}^{n_{1}}\cdots x_{\kappa}^{n_{\kappa}})\cdot(1,\sigma)\), it follows that \(\sigma_{i}\cdot(\mathbf{a},\sigma)=(\sigma_{i}\cdot x_{1}^{n_{1}}\cdots x_{\kappa }^{n_{\kappa}})\cdot(1,\sigma)\). _Claim 6.11_.: We can express \(\sigma_{i}\cdot x_{1}^{n_{1}}\cdots x_{\kappa}^{n_{\kappa}}\) as \[f(x_{1},\cdots,x_{\kappa})\sigma_{i}+g(x_{1},\cdots,x_{\kappa}),\] where \(f,g\in\mathbb{Z}[\hbar,x_{1}^{\pm 1},\cdots,x_{\kappa}^{\pm 1}]\). Proof.: This follows from repeated applications of the relations discussed above. The action of \(\sigma_{i}\) on a general element \((\mathbf{a},\sigma)\) then follows from the claim above and Definition 6.10. Similarly, we can compute \(y_{i}\cdot(\mathbf{a},\sigma)\) by using the relation \(y_{i}=\sigma_{i-1}y_{i-1}\sigma_{i-1}\) repeatedly to reduce to an expression of transpositions and \(y_{1}\). **Proposition 6.12**.: _The action defined above is the one given by_ \[\rho_{d}:BSk_{\kappa}(T^{2},\mathbf{q},\mathbf{q},*)\otimes BSk_{\kappa}(T^{2},\mathbf{q},\mathbf{\alpha},*)_{d}\longrightarrow BSk_{\kappa}(T^{2},\mathbf{q},\mathbf{\alpha},*)_{d},\] _where \(\rho_{d}([\gamma_{1}]_{c},[\gamma_{2}]_{c,d})=[\gamma_{1}\gamma_{2}]_{c,d}\)._ Proof.: It suffices to verify (1), (2), and (3) in Definition 6.10. (1) is immediate from the definitions. (2) follows from Lemma 6.8. (3) is easiest seen on the universal cover of \(T^{2}\); see Figure 11 below. We slide the endpoint of the strand going from \(q_{1}\) to \(\alpha_{\sigma(i)}\) in \(y_{1}\cdot(\mathbf{a},\sigma)\) down along \(\alpha_{\sigma(i)}\). We can homotope the strand until it looks like the right side of Figure 11 without creating any crossings with other strands or the marked point. Thus the two braids are the same element in \(BSk_{\kappa}(T^{2},\mathbf{q},\mathbf{\alpha},*)_{d}\). Next, we can pull the strand down across the marked point, picking up a factor of \(c^{2}\) for each marked point we cross in this direction. The resulting braid is equivalent to \(c^{2n_{1}}(\tau_{\kappa})^{-1}\cdot(\mathbf{a}_{\tau_{\kappa}},\tau_{\kappa}\sigma)\). This gives (3). **Example 6.13**.: Suppose \(\kappa=2\). We compute \((\sigma_{1}\cdot y_{1})\cdot(a_{1}^{2}a_{2}^{-1},\sigma_{1})\). First of all, \[y_{1}\cdot(a_{1}^{2}a_{2}^{-1},\sigma_{1})=c^{4}\sigma_{1}^{-1}(a_{1}^{-1}a_{2 }^{2},e).\] Then \(\sigma_{1}\cdot c^{4}\sigma_{1}^{-1}(a_{1}^{-1}a_{2}^{2},e)=c^{4}(a_{1}^{-1}a_{ 2}^{2},e)\). In order to identify our enhanced polynomial representation with the standard polynomial representation from Definition 6.4, we must first find a way to eliminate the \(S_{\kappa}\) factor of \(PR_{\kappa}\). The standard polynomial representation acts on \(\mathbb{Z}[[s]][c^{\pm 1}][X_{1}^{\pm 1},\cdots,X_{\kappa}^{\pm 1}]\), whereas the generators of \(PR_{\kappa}\) have permutations \(\sigma\in S_{\kappa}\) associated to them. We identify \(a_{i}\) with \(X_{i}\) and substitute \(\hbar=s-s^{-1}\) to revert back to the ring \(\mathbb{Z}[s^{\pm 1},c^{\pm 1}]\). We take an average of the permutations which defines a \(\mathbb{Z}[[s]][c^{\pm 1}]\)-linear map: \[S:\mathbb{Z}[[s]][c^{\pm 1}][X_{1}^{\pm 1},\cdots,X_{\kappa}^{\pm 1}] \longrightarrow PR_{\kappa}\] \[X_{1}^{n_{1}}\cdots X_{\kappa}^{n_{n}}\mapsto\sum_{\sigma\in S_{\kappa}}(a_{1}^{n_ {1}}\cdots a_{\kappa}^{n_{n}},\sigma).\] _Remark 6.14_.: The \(\alpha_{i}\) are distinct and this is captured by our permutation term \(\sigma\) in a generator \((\mathbf{a},\sigma)\). The permutation-averaging can be thought of as getting rid of this distinctness, resulting in a permutation-free polynomial determined by \(\mathbf{a}\). The following theorem is a more precise version of Theorem 1.3. **Theorem 6.15**.: _After setting \(d=s\), the standard polynomial representation agrees with the enhanced polynomial representation composed with the permutation-averaging map \(S\)._ _More precisely, given an element \(h\in\tilde{H}_{\kappa}\) and an element \(f\in\mathbb{Z}[[s]][c^{\pm 1}][X_{1}^{\pm 1},\cdots,X_{\kappa}^{\pm 1}]\),_ \[S(p(h,f))=\rho_{d}(h,S(f)),\] _where \(p(h,f)\) is the action of the standard polynomial representation and \(\rho_{d}(h,S(f))\) is the action of the enhanced polynomial representation._ Proof.: It suffices to show that the equality holds for the generators as in Definition 6.10: (1) Let \(x_{i}\in\tilde{H}_{\kappa}\) and \(f(X_{1},\cdots,X_{\kappa})\in\mathbb{Z}[[s]][c^{\pm 1}][X_{1}^{\pm 1},\cdots,X_{ \kappa}^{\pm 1}]\). Then \[S(p(x_{i},f)) =S(X_{i}f)=\sum_{\sigma\in S_{\kappa}}(a_{i}f(a_{1},\ldots,a_{ \kappa}),\sigma)\] \[=\rho_{d}\big{(}x_{i},\sum_{\sigma\in S_{\kappa}}(f(a_{1},\ldots,a _{\kappa}),\sigma)\big{)}=\rho_{d}(x_{i},S(f)).\] (2) Let \(\sigma_{i}\in\tilde{H_{\kappa}}\) and consider \(1\in\mathbb{Z}[[s]][c^{\pm 1}][X_{1}^{\pm 1},\cdots,X_{\kappa}^{\pm 1}]\). Let \(A_{\kappa}\) be the alternating group on \(\kappa\) elements, i.e. the subgroup of \(S_{\kappa}\) consisting of even permutations. For a transposition \(\sigma_{i}\) and permutation \(\sigma\in S_{\kappa}\), either \(\sigma\in A_{\kappa}\) or \(\sigma_{i}\sigma\in A_{\kappa}\), so \[\sum_{\sigma\in S_{\kappa}}(1,\sigma)=\sum_{\rho\in A_{\kappa}}((1,\rho)+(1, \sigma_{i}\rho)).\] Figure 11. The action of \(y_{1}\) on \((a_{1},(13))\). Then, using Corollary 6.9, \[S(p(\sigma_{i},1)) =S(s)=\sum_{\sigma\in S_{\kappa}}s(1,\sigma)=\sum_{\rho\in A_{\kappa }}s((1,\rho)+(1,\sigma_{i}\rho))\] \[=\sum_{\rho\in A_{\kappa}}\rho_{d}\big{(}\sigma_{i},(1,\rho)+(1, \sigma_{i}\rho)\big{)}=\rho_{d}\big{(}\sigma_{i},\sum_{\sigma\in S_{\kappa}}(1,\sigma)\big{)}\] \[=\rho_{d}(\sigma_{i},S(1)).\] (3) Let \(y_{1}\in\check{H}_{\kappa}\) and \(f(X_{1},\cdots,X_{\kappa})=X_{1}^{n_{1}}\cdots X_{\kappa}^{n_{\kappa}}\in \mathbb{Z}[[s]][c^{\pm 1}][X_{1}^{\pm 1},\cdots,X_{\kappa}^{\pm 1}]\). Then \(\omega(X_{1}^{n_{1}}\cdots X_{\kappa}^{n_{\kappa}})=c^{2n_{1}}X_{\kappa}^{n_{1 }}X_{1}^{n_{2}}\cdots X_{\kappa-1}^{n_{\kappa}}\). Taking advantage of the first two parts of the proof, we see that \[S(p(y_{1},f)) =S(\tau_{\kappa}^{-1}\omega(f))=S(\tau_{\kappa}^{-1}c^{2n_{1}}X_{ \kappa}^{n_{1}}X_{1}^{n_{2}}\cdots X_{\kappa-1}^{n_{\kappa}})\] \[=c^{2n_{1}}\tau_{\kappa}^{-1}S(X_{\kappa}^{n_{1}}X_{1}^{n_{2}} \cdots X_{\kappa-1}^{n_{\kappa}})=c^{2n_{1}}\tau_{\kappa}^{-1}\sum_{\sigma\in S _{\kappa}}(a_{\kappa}^{n_{1}}a_{1}^{n_{2}}\cdots a_{\kappa-1}^{n_{\kappa}},\sigma)\] \[=c^{2n_{1}}\tau_{\kappa}^{-1}\sum_{\sigma\in S_{\kappa}}(a_{ \kappa}^{n_{1}}a_{1}^{n_{2}}\cdots a_{\kappa-1}^{n_{\kappa}},\tau_{\kappa} \sigma)=y_{1}\cdot\sum_{\sigma\in S_{\kappa}}(f(a_{1},\ldots,a_{\kappa}),\sigma)\] \[=\rho_{d}(y_{1},S(f)).\] Theorem 6.15 can be restated as the following corollary: **Corollary 6.16**.: _Let \(W\subset PR_{\kappa}\) be the submodule generated over \(\mathbb{Z}[[s]][c^{\pm 1}]\) by elements of the form \(\sum_{\sigma\in S_{\kappa}}(\boldsymbol{a},\sigma)\). Then the enhanced polynomial representation has a subrepresentation over \(W\) which is isomorphic to the polynomial representation of the double affine Hecke algebra._
2309.14680
Finite volume effects of the Nambu-Jona-Lasinio model with the running coupling constant
With the Schwinger's proper-time formalism of the Nambu-Jona-Lasinio model, we investigate the finite volume effects in the presence of magnetic fields. Since the coupling constant $G$ can be influenced by strong magnetic fields, the model is solved with a running coupling constant $G(B)$ which is fitted by the lattice average $(\Sigma_u+\Sigma_d)/2$ and difference $\Sigma_u-\Sigma_d$. The investigation mainly focuses on the constituent quark mass and the thermal susceptibility depending on the magnetic fields, the temperatures and the finite sizes. For the model in finite or infinite volume, the magnetic fields can increase the constituent quark mass while the temperatures can decrease it inversely. There is a narrow range of the box length that makes the effects of finite volume perform prominently. The model will behave close to infinite volume limit for larger box length. It is shown that the influence of finite volume can be changed by magnetic fields and temperatures. Finally, we discuss the thermal susceptibility depending on the temperature in finite volume in the presence of magnetic fields.
Shou-Zheng Su, Ye-Yin Zhao, Xin-Jian Wen
2023-09-26T05:07:11Z
http://arxiv.org/abs/2309.14680v1
# Finite volume effects of the Nambu-Jona-Lasinio model with the running coupling constant ###### Abstract With the Schwinger's proper-time formalism of the Nambu-Jona-Lasinio model, we investigate the finite volume effects in the presence of magnetic fields. Since the coupling constant \(G\) can be influenced by strong magnetic fields, the model is solved with a running coupling constant \(G(B)\) which is fitted by the lattice average \((\Sigma_{u}+\Sigma_{d})/2\) and difference \(\Sigma_{u}-\Sigma_{d}\). The investigation mainly focuses on the constituent quark mass and the thermal susceptibility depending on the magnetic fields, the temperatures and the finite sizes. For the model in finite or infinite volume, the magnetic fields can increase the constituent quark mass while the temperatures can decrease it inversely. There is a narrow range of the box length that makes the effects of finite volume perform prominently. The model will behave close to infinite volume limit for larger box length. It is shown that the influence of finite volume can be changed by magnetic fields and temperatures. Finally, we discuss the thermal susceptibility depending on the temperature in finite volume in the presence of magnetic fields. ## 1 Introduction The investigations of finite volume effects have great importance for the strongly interacting matter and attract many authors to enthusiastically do theoretical and experimental work [1]. The strongly interacting matter is essentially described by the Quantum ChromoDynamics(QCD) which correctly give most features of interactions between quarks and gluons. Experimentally the study of strongly interacting matter is mainly performed by the heavy-ion collisions at present. The strongly interacting matter produced in a heavy-ion collision always has finite volume which depends on the size of the colliding nuclei,the collision center of mass energy and the centrality of the collision [2]. The hadronic fireballs in relativistic nuclear collision reaction have volumes corresponding to a size of \(2fm\) in radius or more [3]. The quark-gluon plasma(QGP) produced in high-energy heavy-ion collisions, which is also thought to have permeated the first microseconds of the Universe and to have cooled sufficiently to transform to hadronic matter soon, has sizes estimated between \(2fm\) and \(10fm\)[4, 5]. The fireballs of QGP formed in ultra-relativistic heavy-ion collisions undergo phase transition in a special range of temperature, volume and chemical potential [6, 7]. Therefore the volume plays an important role in the properties of strongly interacting matter produced in heavy-ion collisions. The effects of finite volume have been thought over for decades in QCD and, especially, their analyses are encouraged by the simulations of QCD on finite, discrete Euclidean space-time lattices [1, 8]. The finite volume effects appear sensibly until the box sizes up to \(L\simeq 5fm\) in the lattice simulations of light quark mass and strange quark mass [9]. Theoretically the investigation of finite volume effects has been completed by many effective approaches such as the Dyson-Schwinger equations of QCD [10, 11, 12], the quark-meson model [13, 14], the non-interacting bag model [15], the Nambu-Jona-Lasinio(NJL) model [16, 17, 18, 19], the linear sigma model [20, 21] and others [22, 23, 24]. The boundary conditions should be imposed on the effective models when the strongly interacting matter constrained in finite volume. There are many results worked on antiperiodic boundary condition(APBC) as well as periodic boundary condition(PBC) [25, 26, 27, 28], sice no restrictions impose on the spatial direction for finite size systems [19]. In Ref. [19], the authors also put forward the application of stationary wave condition(SWC) considering that quark's wave function equals zero on the boundary. The effects of finite volume may be induced by the spherical MIT boundary condition when the strongly interacting matter is considered to be constrained in a sphere [29]. By means of the Multiple Reflection Expansion(MRE) formalism, the finite volume effects can be taken into account in the Polyakov loop Nambu-Jona-Lasinio model and the deconfinement phase transition could be influenced by a finite radius [30]. In this work we adopt the well-known antiperiodic boundary condition(APBC). The magnetic field has been shown by plenty of investigations that it has great influence on the thermodynamics and the phase transition of strongly interacting matter [31, 32]. In the presence of magnetic fields, the effects of finite volume occur consequently and the magnetic catalysis effect remains in all considered ranges of finite sizes [33]. Furthermore the magnetic field is also confirmed that it has influence on the coupling constant [34]. Many authors have made efforts in constructing a magnetic-field-dependent running coupling constant [35, 36, 37, 38]. As the running coupling constant depends on magnetic fields, it will have important influence on the phase transition as well as the stability of quark matter [39, 40]. It is reasonable that we investigate finite volume effects in this work with a running coupling constant in the presence of magnetic fields. In this paper, we will investigate finite volume effects of strongly interacting matter with the framework of the two-flavor NJL model. In the following content, we first show a general Schwinger's proper-time formalism of the NJL model in the presence of magnetic fields in Sec. 2. The finite temperatures are taken into account by applying the Matsubara formalism. In Sec. 3, the model is generalized to finite volume with the antiperiodic boundary condition. The running coupling constant depending on magnetic field is determined through fitting dimensionless quantities to the lattice results. By solving the gap equation in finite volume, we show the numerical results in Sec. 4 and the results are compared with the cases of infinite volume. Finally, a short summary is given in Sec. 5. ## 2 Schwinger's Proper Time Formalism of the NJL model In the presence of an constant magnetic field, the two flavor Nambu-Jona-Lasinio model, which is an effective low-energy model for QCD and is superior to investigate quark matter at finite density or temperature, can be described by the Lagrangian \[{\cal L}_{NJL}=\bar{\psi}(i\not{D}-\hat{m}_{c})\psi+G[(\bar{\psi}\psi)^{2}-( \bar{\psi}\gamma_{5}\vec{\tau}\psi)^{2}]. \tag{1}\] The two flavor quark field \(\psi=(\psi_{u},\psi_{d})^{T}\), the current quark mass matrix is \(\hat{m}_{c}=diag(m_{u},m_{d})\) and \(G\) is the conpling constant. For simplicity we adopt \(m_{u}=m_{d}=m_{c}\) with the isospin-symmetric limit. The covariant derivative \(D^{\mu}=\partial^{\mu}+i\hat{Q}A^{\mu}\) with the electric charge matrix \(\hat{Q}=diag(2e/3,-e/3)\) in flavor space and \(A^{\mu}\) is the electromagnetic gauge field. To investigate finite size effects within a constant magnetic background field \(B\), we can choose the Landau gauge \(A^{\mu}=(0,-By,0,0)\) which means the magnetic field along \(z\) direction. In the mean-field approximation, the interaction terms are assumed to be deviated small from their thermal average and then the Lagrangian can be simplified as \[{\cal L}_{MF}=\bar{\psi}(i\not{D}-M)\psi+G\langle\bar{\psi}\psi\rangle^{2}, \tag{2}\] where the constituent quark mass \(M\) is self-consistently determined by the gap equation \[M=m_{c}-2G\langle\bar{\psi}\psi\rangle. \tag{3}\] The thermal average fields \(\langle\bar{\psi}\psi\rangle\) in this formula is called quark condensate. It can be defined by the trace of the dressed quark propagator \[\langle\bar{\psi}\psi\rangle=-\int\frac{d^{4}p}{(2\pi)^{4}}\Tr[iS(p)]. \tag{4}\] The original purpose of the Schwinger's proper-time method is to maintain invariance properties in field calculations [41]. Then it is widely used to calculate higher loop and investigate the hadron and chiral phase transition as well as finite volume effects [42]. The Schwinger's proper-time method starts with the Green's function for the particle field \[(i\not{D}-M)S(x,y)=\delta(x,y). \tag{5}\] In the coordinate space \(S(x,y)\) and \(\delta(x,y)\) can be regarded as the matrix elements of operators \(\hat{S}\) and \(\hat{1}\) respectively. Consequently the Green's function can be expressed as \[\hat{S}=\frac{1}{i\not{D}-M}=\frac{-\not{D}+M}{-(\not{D})^{2}+M^{2}}=(-\not{D }+M)i\int_{0}^{\infty}dse^{-is(H-i\epsilon)}, \tag{6}\] \[S(x,y)=\langle x|\hat{S}|y\rangle=(-\not{D}+M)i\int_{0}^{\infty}ds\langle x|e^ {-is(H-i\epsilon)}|y\rangle, \tag{7}\] where we have defined \(H=-(D\!\!\!\!/)^{2}+M^{2}\). The meaningful idea of Schwinger's proper-time method is to consider \(H\) as an Hamiltonian that describes evolution of some systems in proper time \(s\). The state is defined as \(|x(s)\rangle=e^{iHs}|x\rangle\). Then the matrix element of \(e^{-iHs}\) can be viewed as transformation function from a state \(|y(s=0)\rangle\) to another state \(|x(s)\rangle\), i.e. \[\langle x|e^{-iHs}|y\rangle=\langle x(s)|y(0)\rangle. \tag{8}\] The operators also depend upon the proper time parameter and evolute as the equation of motion in the Heisenberg Picture: \[i\frac{dx_{\mu}}{ds}=[x_{\mu},H]=-2i\Pi_{\mu}, \tag{9}\] \[i\frac{d\Pi_{\mu}}{ds}=[\Pi_{\mu},H]=-2iq_{f}F_{\mu\nu}\Pi^{\mu}. \tag{10}\] The transformation function can be solved from the differential equations \[i\frac{\partial\langle x(s)|y(0)\rangle}{ds}=\langle x(s)|H|y(0)\rangle, \tag{11}\] \[[i\frac{\partial}{\partial x^{\mu}}-q_{f}A_{\mu}(x)]\langle x(s)|y(0)\rangle= \langle x(s)|\Pi_{\mu}(s)|y(0)\rangle, \tag{12}\] \[[i\frac{\partial}{\partial y^{\mu}}-q_{f}A_{\mu}(y)]\langle x(s)|y(0)\rangle= \langle x(s)|\Pi_{\mu}(0)|y(0)\rangle,, \tag{13}\] with the boundary condition \[\langle x(s)|y(0)\rangle|_{s\to 0}=\delta(x-y). \tag{14}\] Following the details in Ref. [41, 43], the transition function is finally expressed as \[\langle x(s)|y(0)\rangle=\frac{-i}{(4\pi s)^{2}}e^{-\frac{i}{4}(x-y)q_{f}F \coth(eFs)(x-y)-\frac{1}{2}Tr\ln\frac{\sinh(q_{f}Fs)}{q_{f}Fs}-is(\frac{q_{f}}{ 2}\sigma F+m^{2})}, \tag{15}\] where the integration involved the Wilson line is neglected since it has no effect on the gap equations. Taking the Fourier transformation of \(S(x,y)\) and carrying out the integration with respect to the coordinate variables, the quark propagator in momentum space is calculated as \[S(p)= \int_{0}^{\infty}dse^{-is\{M^{2}-[(p^{0})^{2}-(p^{3})^{2}]+\frac{ (p^{1})^{2}+(p^{2})^{2}}{q_{f}Bs\cot(q_{f}Bs)}\}} \tag{16}\] \[\times[M-\gamma^{\mu}p_{\mu}-(\gamma^{1}p_{2}-\gamma^{2}p_{1}) \tan(q_{f}Bs)][1-\tan(q_{f}Bs)\gamma^{1}\gamma^{2}].\] With the quark propagator Eq. (16), the Schwinger's proper-time formalism of the NJL model can be constructed eventually. By taking the trace in the Dirac space, the flavor space and the color space, the quark condensate can be calculated as \[\langle\bar{\psi}\psi\rangle=-4MN_{c}\sum_{f=u}^{d}\int\frac{d^{4}p}{(2\pi)^{4 }}\int_{0}^{\infty}dse^{-is\{M^{2}-[(p^{0})^{2}-(p^{3})^{2}]+\frac{(p^{1})^{2} +(p^{2})^{2}}{q_{f}Bs\cot(q_{f}Bs)}\}} \tag{17}\] The integration of the quark condensate with respect to momentum carries out without difficulty using Gaussian integral. After transferring this expression into the Euclidean space by taking \(s\rightarrow-i\tau\), the quark condensate is finally calculated as \[\langle\bar{\psi}\psi\rangle=-\frac{MN_{c}}{4\pi^{2}}\sum_{f=u}^{d}\int_{0}^{ \infty}\frac{d\tau}{\tau}\frac{|q_{f}B|}{\tanh(|q_{f}B|\tau)}e^{-\tau M^{2}} \tag{18}\] In order to take account of finite temperature, the integral over the four-dimensional momentum in Eq. (17) should be replaced by the Matsubara formalism [44], namely, \[\int\frac{d^{4}p}{(2\pi)^{4}}f(p)\to iT\sum_{n=-\infty}^{+\infty} \int\frac{d^{3}p}{(2\pi)^{3}}f(i\omega_{n},\vec{p}). \tag{19}\] The zeroth component of the momentum is discretized by the fermion Matsubara frequencies \(p_{0}=i\omega_{n}=i(2n+1)\pi T\). As a result, the quark condensate in finite temperature is expressed as \[\langle\bar{\psi}\psi\rangle=-\frac{MN_{c}}{4\pi^{2}}\sum_{f=u}^{d}|q_{f}B| \int_{0}^{\infty}\frac{d\tau}{\tau}\frac{e^{-\tau M^{2}}}{\tanh(|q_{f}B|\tau )}\{1+2\sum_{n=1}^{+\infty}(-1)^{n}e^{-\frac{n^{2}}{4T^{2}\tau}}\}, \tag{20}\] where we have used the properties of the Jacobi's theta function \[\vartheta_{3}(z|x)=\sum_{n=-\infty}^{+\infty}e^{i\pi xn^{2}}e^{2niz}=1+2\sum _{n=1}^{+\infty}e^{i\pi xn^{2}}\cos(2nz). \tag{21}\] \[(-ix)^{\frac{1}{2}}\vartheta_{3}(z|x)=e^{-\frac{iz^{2}}{\pi x}}\vartheta_{3}( -\frac{z}{x}|-\frac{1}{x}). \tag{22}\] Since the NJL model is nonrenormalizable for its quadratic fermionic interaction, it is necessary to employ a regularization scheme to handle divergent integrals skillfully in the model. In this proper-time formalism we choose an ultraviolet cutoff \(\Lambda\) to replace the lower limit of integrations in above equations, namely, \[\int_{0}^{+\infty}f(\tau)d\tau\rightarrow\int_{1/\Lambda^{2}}^{+\infty}f(\tau )d\tau. \tag{23}\] Therefore the cutoff \(\Lambda\), together with the current quark mass \(m_{c}\) and the coupling strength \(G\) are three parameters of the model which should be determined by the pion decay constant \(f_{\pi}\), the pion mass \(m_{\pi}\), and the quark condensate in vacuum. In this work we adopt the parameters to be \(m_{c}=4.516MeV\), \(\Lambda=1164.1MeV\), \(G\Lambda=3.608\) in Ref. [45] where \(f_{\pi}=92.4MeV\), \(m_{\pi}=138MeV\) and \(-\langle\bar{\psi}\psi\rangle_{0}^{1/3}=260MeV\). ## 3 Finite volume effects in the presence of magnetic field The Schwinger's proper-time formalism of the NJL model in Sec. 2 describes systems in infinite volume. It is meaningful to generalize the previous results to the systems of finite volume since quark matter is always produced in restricted space regions by high-energy collision experiments [5]. Assuming that the system under consideration is restricted in a box with equal side lengths \(L\), the quantum fields will satisfy boundary conditions leading to the rules \[\int\frac{d^{3}p}{(2\pi)^{3}}f(p_{1},p_{2},p_{3})\rightarrow\frac{1}{L^{3}} \sum_{n_{1}=-\infty}^{+\infty}\sum_{n_{2}=-\infty}^{+\infty}\sum_{n_{3}=- \infty}^{+\infty}f(\omega_{n_{1}},\omega_{n_{2}},\omega_{n_{3}}), \tag{24}\] which the momenta are discretized as \[p_{i}\rightarrow\omega_{n_{i}}=\frac{2\pi}{L}(n_{i}+\alpha),\quad n_{i}=0,\pm 1,\pm 2,.... \tag{25}\] The boundary condition is usually called antiperiodic boundary condition if \(\alpha=1\) and periodic one if \(\alpha=0\). In this work we adopt the antiperiodic boundary condition. Consequently, by replacing the momentum integration in Eq. (17) with Eq. (19), Eq. (24) and Eq. (25), the quark condensate at finite temperature and volume can be expressed as \[\langle\bar{\psi}\psi\rangle= -\frac{2MN_{c}}{L^{3}}\sum_{f=u}^{d}\int_{1/\Lambda^{2}}^{\infty} \frac{d\tau}{\sqrt{\pi\tau}}e^{-\tau M^{2}}\{1+2\sum_{n=1}^{+\infty}(-1)^{n}e^ {-\frac{n^{2}}{4T^{2}\tau}}\} \tag{26}\] \[\times\sum_{n_{1}=-\infty}^{+\infty}e^{-\frac{\tanh(q_{f}B\tau)} {q_{f}B}\omega_{n_{1}}^{2}}\sum_{n_{2}=-\infty}^{+\infty}e^{-\frac{\tanh(q_{f }B\tau)}{q_{f}B}\omega_{n_{2}}^{2}}\sum_{n_{1}=-\infty}^{+\infty}e^{-\tau \omega_{n_{3}}^{2}},\] where the proper time has been transferred into the Euclidean space and the ultraviolet cutoff is taken on. As the quark condensate is given by Eq. (26) in the presence of constant magnetic field, the gap equation Eq. (3) can be solved with the fixed three parameters \(\Lambda\), \(m_{c}\) and \(G\). The coupling constant \(G\) controls the strength of strongly interaction in QCD and, however, can be influenced by sufficiently strong magnetic fields [33, 46]. As a consequence, a appropriate form of running coupling constant depending on magnetic fields has been attempted by many authors to fit the lattice results [36, 37, 38, 47]. A magnetic-field-dependent running coupling constant has influence on the constituent quark mass as well as the phase transition of the model [39, 47]. In this work we adopt the magnetic-field-dependent running coupling constant [36] \[G(B)=\frac{G}{1+\alpha\ln(1+\beta\frac{eB}{\Lambda_{QCD}^{2}})}, \tag{27}\] where \(\Lambda_{QCD}^{2}=200MeV\). The free parameters \(\alpha\) and \(\beta\) are fixed to get reasonable results of the lattice average \((\Sigma_{u}+\Sigma_{d})/2\) for \(T=0MeV\). The lattice average \((\Sigma_{u}+\Sigma_{d})/2\) relates to quark condensate in NJL model by defining the dimensionless quantity [48] \[\Sigma_{f}(B,T)=\frac{2m_{c}}{m_{\pi}^{2}f_{\pi}^{2}}[\langle\bar{\psi}_{f} \psi_{f}\rangle(B,T)-\langle\bar{\psi}_{f}\psi_{f}\rangle(0,0)]+1 \tag{28}\] In figure 1, by solving the model in infinite volume with the running coupling constant Eq. (27), the average \((\Sigma_{u}+\Sigma_{d})/2\) as well as the difference \(\Sigma_{u}-\Sigma_{d}\) at \(T=0MeV\) is fitted to the lattice results of Ref. [48]. The reasonable results are obtained with \(\alpha=2.39\) and \(\beta=0.002515\). ## 4 Numerical results With the Schwinger's proper-time formalism, the model in the background of magnetic fields is extended to finite volume with antiperiodic boundary condition at finite temperatures. As the parameters \(\alpha\) and \(\beta\) in the running coupling constant Eq. (27) are fixed with the lattice data, the model can be presented by solving the gap equation Eq. (3). In this section we will concentrate on the finite volume effects with the running coupling constant Eq. (27) in the presence of magnetic fields. When the quark condensate is calculated in finite volume, the constituent quark mass \(M\) solved from the gap equation Eq. (3) will be influenced apparently by finite volume. In figure 2, the constituent quark mass at vanishing temperature is presented as a function of the inverse length \(1/L\) in the presence of magnetic fields \(eB=0.0GeV^{2}\), \(eB=0.3GeV^{2}\), \(eB=0.5GeV^{2}\), \(eB=0.7GeV^{2}\), \(eB=1.0GeV^{2}\). The results at \(1/L=0MeV\) correspond to the cases in infinite volume. As the magnetic field becomes stronger, the constituent quark mass is obviously increased especially when the box Figure 1: The average \((\Sigma_{u}+\Sigma_{d})/2\) and the difference \(\Sigma_{u}-\Sigma_{d}\) at \(T=0MeV\) are fitted to the lattice results of Ref. [48]. The reasonable results are obtained with \(\alpha=2.39\) and \(\beta=0.002515\) in the running coupling constant Eq. (27). Figure 2: The constituent quark mass \(M\) at vanishing temperature depends on the inverse length \(1/L\) in the presence of magnetic fields. The values at \(1/L=0MeV\) correspond to?those in infinite volume. length is close to infinite volume. While the increase falls off as the box length becomes smaller. When the box length is quite small, for example \(1/L=250MeV\) which corresponds to \(L\simeq 0.79fm\), the constituent quark mass is small enough to close to chiral limit. As the box length \(L\) increases away from quite small values, the constituent mass \(M\) will sharply increases for all cases of \(eB\) until it is close to the infinite volume limit. \(M\) is close to the infinite volume limit at about \(L=15fm\) for the case of \(eB=0.0GeV^{2}\) and about \(L=8fm\) for \(eB=1.0GeV^{2}\). The box length that is close to the infinite volume limit is reduced by stronger magnetic fields. To find out the range of the box length which sharply affects \(M\), the partial derivative respecting to length \(\partial M/\partial L\) is presented in figure 3. Overall, the constituent quark mass \(M\) varies sharply with the box length when \(\partial M/\partial L\) is massively greater than 0. According to the cases of \(T=0MeV\) at \(eB=0.1GeV^{2}\), \(0.3GeV^{2}\) and \(0.5GeV^{2}\), the constituent quark mass \(M\) will varies more sharply at stronger magnetic field. While it will varies less sharply at higher temperature according to the cases of \(eB=0.5GeV^{2}\) at \(T=0MeV\), \(150MeV\) and \(200MeV\). In addition, the box length that is close to the infinite volume limit can also be reduced by higher temperatures. The location of protrusions in figure 3 means that the constituent quark mass \(M\) varies sharply for most cases when the box length is restricted in a narrow range between about \(0.5fm\) and \(4fm\). The narrow range can be reduced by stronger magnetic fields and by higher temperatures. It means that the finite volume effects could be weakened when the system is in stronger magnetic fields and higher temperatures. Therefore, the box length in the following numerical results is mainly set in the narrow range where the effects of finite volume appear obviously. In figure 4, the constituent quark mass \(M\) is presented as a function of magnetic field \(eB\) at \(L=1.7fm\) Figure 3: The partial derivative respecting to length \(\partial M/\partial L\) depends on the box length \(L\) in the presence of magnetic fields \(eB=0.1GeV^{2}\),\(0.3GeV^{2}\) and \(0.5GeV^{2}\). The temperatures are appropriately selected as \(T=0MeV\),\(150MeV\) and \(200MeV\) for investigation. \(1.4fm\), \(1.2fm\), \(1.1fm\) and \(1.0fm\). For larger values of length, the behavior of \(M\) is close to the line of infinite volume. Evidently \(M\) decreases as the temperature increases as usual for all cases. While it decreases as the length decreases especially when the temperature is not high enough. When the temperature becomes high enough, for example \(T=250MeV\), \(M\) will decreases slightly by the decrease of \(L\). The dependence of constituent quark mass \(M\) on the magnetic field \(eB\) is presented in figure 5 at \(L=2fm\), \(1.7fm\), \(1.5fm\), \(1.2fm\) and \(1.0fm\) when the temperature is \(T=100MeV\). For larger values of length, the behavior of \(M\) is also close to the line Figure 4: The constituent quark mass \(M\) depends on the temperature \(T\) in the presence of magnetic field \(eB=0.7GeV^{2}\) at the length \(L=1.7fm\), \(1.4fm\), \(1.2fm\), \(1.1fm\) and \(1.0fm\). The solid line stands for the results in infinite volume. Figure 5: The constituent quark mass \(M\) depends on the magnetic field \(eB\) at the box length \(L=2fm\), \(1.7fm\), \(1.5fm\), \(1.2fm\) and \(1.0fm\) when the temperature \(T=100MeV\). The solid line stands for the results in infinite volume. of infinite volume. For the system in infinite volume, \(M\) will decreases as \(eB\) increases beginning with \(0GeV^{2}\). Soon afterwards \(M\) will apparently increases by the increase of the magnetic field. Contrasting the lines of finite volume with the infinite volume, the constituent quark mass \(M\) can be significantly decreased by the box length. The thermal susceptibility is defined as \[\chi_{T}=-m_{\pi}\frac{\partial\sigma}{\partial T}, \tag{29}\] where \(\sigma\) is given by \[\sigma=\frac{\langle\bar{\psi}_{u}\psi_{u}\rangle(B,T)+\langle\bar{\psi}_{d} \psi_{d}\rangle(B,T)}{\langle\bar{\psi}_{u}\psi_{u}\rangle(B,0)+\langle\bar{ \psi}_{d}\psi_{d}\rangle(B,0)}. \tag{30}\] In figure 6, the thermal susceptibility \(\chi_{T}\) is presented as a function of temperature \(T\) in the presence of magnetic field \(eB=0.1GeV^{2}\) at the box length \(L=2fm\), \(1.5fm\), \(1.3fm\) and \(1.0fm\). Similarly the lines of \(\chi_{T}\) is also close to the infinite volume for larger values of the length. As the box length decreases to small values showed in figure 6, \(\chi_{T}\) will move to the right, which leads to the peaks of the lines lying at higher temperatures. The peak of the thermal susceptibility \(\chi_{T}\) defines a pseudocritical temperature. ## 5 Summary In this work we investigate the effects of finite volume at finite temperatures in the presence of magnetic fields with the Schwinger's proper-time formalism of the NJL model. The system in finite volume is considered by the antiperiodic boundary condition. Since the coupling constant \(G\) can be influenced by sufficiently strong magnetic fields, the investigations of this paper work with the running coupling constant Eq. (27) depending on the magnetic field. The running coupling constant Eq. (27) is Figure 6: The thermal susceptibility \(\chi_{T}\) depends on the temperature \(T\) in the presence of magnetic field \(eB=0.1GeV^{2}\) at the length \(L=2fm\), \(1.5fm\), \(1.3fm\) and \(1.0fm\). The solid line stands for the results in infinite volume. properly determined by fitting the free parameters with the lattice average \((\Sigma_{u}+\Sigma_{d})/2\) and difference \(\Sigma_{u}-\Sigma_{d}\) at \(T=0MeV\). The numerical results of the model in finite volume are presented by solving the gap equation. The magnetic field has the effect that increases the constituent quark mass of the model in both infinite and finite volume according to the dependence of \(M\) on the inverse length. However, the increase falls off as the box length becomes smaller. When the length is quite small, the constituent quark mass is small enough to close to chiral limit. When the value of box length is appropriately larger, the constituent quark mass depending on the length will behave close to the infinite volume limit. Especially the box length that is close to the infinite volume limit can be reduced by stronger magnetic fields and by higher temperatures. For the system in finite temperatures and volume, there is a narrow range of the box length that makes the effects of finite volume perform prominently. This narrow range can also be reduced by stronger magnetic fields and by higher temperatures. At finite temperatures, the constituent quark mass decreases by higher temperatures for both infinite and finite volume in the presence of magnetic fields. When the temperature is not very high, the constituent quark mass can decreases obviously as the box length decreases in the narrow range which effects of finite volume perform prominently. While the constituent quark mass will decrease slightly when the temperature is high enough. Considering the dependence of \(M\) on the magnetic field at finite temperature, the constituent quark mass can also decrease obviously as the length decreases in the narrow range which effects of finite volume perform prominently. For the constituent quark mass depending on the temperature and the magnetic field, \(M\) will behave close to the infinite volume limit when the box length is appropriate large. While \(M\) has small values when the box length is quite small. The thermal susceptibility \(\chi_{T}\) changes obviously by the effects of finite volume. In the presence of magnetic fields, \(\chi_{T}\) also behave close to the infinite volume limit when the length has appropriately large values. The diagram of \(\chi_{T}\) will move to the right in the \(T\)-axis when the box length decreases in the narrow range which effects of finite volume perform prominently. Consequently the peaks of the \(\chi_{T}\), which define pseudocritical temperature, move to higher temperatures. The authors would like to thank support from the National Natural Science Foundation of China (under the Grant Nos. 11875181, 11705163, 12275102 and 11475110), the National Key Research and Development Program of China (under the Grant No. 2022YFA1604900) and the Natural Science Foundation of Anhui Sanlian University (under the Grant Nos. KJZD2021003, KJZD2022010, KJZD2023007).
2309.05766
Efficient two-qutrit gates in superconducting circuits using parametric coupling
Recently, significant progress has been made in the demonstration of single qutrit and coupled qutrit gates with superconducting circuits. Coupled qutrit gates have significantly lower fidelity than single qutrit gates, owing to long implementation times. We present a protocol to implement the CZ universal gate for two qutrits based on a decomposition involving two partial state swaps and local operations. The partial state swaps can be implemented effectively using parametric coupling, which is fast and has the advantage of frequency selectivity. We perform a detailed analysis of this protocol in a system consisting of two fixed-frequency transmons coupled by a flux-tunable transmon. The application of an AC flux in the tunable transmon controls the parametric gates. This protocol has the potential to lead to fast and scalable two-qutrit gates in superconducting circuit architectures.
Mahadevan Subramanian, Adrian Lupascu
2023-09-11T18:49:51Z
http://arxiv.org/abs/2309.05766v2
# Efficient two-qutrit gates in superconducting circuits using parametric coupling ###### Abstract Recently, significant progress has been made in the demonstration of single qutrit and coupled qutrit gates with superconducting circuits. Coupled qutrit gates have significantly lower fidelity than single qutrit gates, owing to long implementation times. We present a protocol to implement the CZ universal gate for two qutrits based on a decomposition involving two partial state swaps and local operations. The partial state swaps can be implemented effectively using parametric coupling, which is fast and has the advantage of frequency selectivity. We perform a detailed analysis of this protocol in a system consisting of two fixed-frequency transmons coupled by a flux-tunable transmon. The application of an AC flux in the tunable transmon controls the parametric gates. This protocol has the potential to lead to fast and scalable two-qutrit gates in superconducting circuit architectures. ## I Introduction The framework for the theoretical exploration and applications of quantum information is usually focused on the use of two-state systems, or qubits [1]. Encoding quantum information using multilevel system, or qudits, is motivated by potential advantages in expanding the capacity of quantum information processors [2], improved quantum error correction [3; 4], and effective compilation of gates [5]. Besides applications in quantum computing, the use of qudits improves quantum communication [6] and quantum sensing [7], and has applications in quantum simulation [8; 9]. Currently explored physical implementations of qudits include ion traps [10], molecular devices [11], solid-state defects [12; 13], and superconducting devices [8; 14]. Superconducting systems are a particularly favourable implementation of qudits, due to the ability to engineer quantum properties and control relevant transitions. In recent years, significant progress was made in this field, with achievements including advanced control on single [15; 16; 17; 14] and coupled [18; 19; 20] qutrits. In a manner similar to qubit based computing, two qudit gates have significantly larger errors than single qutrit gates, owing to the inherently longer execution time in currently used approaches. We explore a method for qutrit-qutrit gates focused on the implementation of the universal CZ gate using an effective decomposition into swap-type gates based on parametric coupling. Parametric coupling has been used extensively for coupled qubit gates and has been applied in a recent paper to qutrit-qutrit gates [20]. We identify optimal decompositions of a CZ gate into parametric gates, and we performed a detailed analysis for transmon based qutrits. The contents of this paper is divided into three sections. In the following section, we develop and discuss the theory and explain the working principle of swap type gates using parametric coupling. In the next section, we perform a numerical analysis of parametric gates based on simulations of the dynamics while also discussing the challenges involved in choosing appropriate parameters for these simulations. Finally, in the next section we show how qutrit gates can be compiled using the two entangling gates we implement using parametric coupling including a way to decompose the qutrit CZ gate. ## II Implementation of swap-type gates using parametric coupling Parametric gates are enabled by modulating the couplings or energy levels of a circuit at a specific frequency so as to enable a sideband transition between certain energy levels [21; 22; 23; 24; 25; 26; 27; 28; 29]. Parametric coupling shows great promise in designing scalable superconducting circuits [21; 24] by allowing desired specific transitions to be activated based on frequency selectivity. A commonly explored setup involves transmons coupled through a tunable transmon [21]. Alternate circuit configurations which use transmons as well [27; 30] and flux qubits or DC-SQUIDs instead of transmons [23; 28] have been explored as well. Our theoretical analysis of parametric coupling, while focused on two transmons coupled via a flux tunable transmon, can be straightforwardly extended to alternative superconducting circuits. ### Circuit Hamiltonian The circuit that we analyze (see figure 1) consists of two fixed frequency transmons "Q1" and "Q2" and a coupler implemented as a flux - tunable transmon "C". This architecture is directly based on the use of a tunable bus [21] and similar architectures have been used for two-qubit gates [31; 32]. All the transmons are capacitively coupled to each other which results in the coupled system Hamiltonian \[\hat{H}_{\text{lab}}(t)=\hat{H}_{0}(t)+\hat{H}_{m}+\hat{H}_{d} \tag{1}\]
2309.03317
Sub-Array Selection in Full-Duplex Massive MIMO for Enhanced Self-Interference Suppression
This study considers a novel full-duplex (FD) massive multiple-input multiple-output (mMIMO) system using hybrid beamforming (HBF) architecture, which allows for simultaneous uplink (UL) and downlink (DL) transmission over the same frequency band. Particularly, our objective is to mitigate the strong self-interference (SI) solely on the design of UL and DL RF beamforming stages jointly with sub-array selection (SAS) for transmit (Tx) and receive (Rx) sub-arrays at base station (BS). Based on the measured SI channel in an anechoic chamber, we propose a min-SI beamforming scheme with SAS, which applies perturbations to the beam directivity to enhance SI suppression in UL and DL beam directions. To solve this challenging nonconvex optimization problem, we propose a swarm intelligence-based algorithmic solution to find the optimal perturbations as well as the Tx and Rx sub-arrays to minimize SI subject to the directivity degradation constraints for the UL and DL beams. The results show that the proposed min-SI BF scheme can achieve SI suppression as high as 78 dB in FD mMIMO systems.
Mobeen Mahmood, Asil Koc, Duc Tuong Nguyen, Robert Morawski, Tho Le-Ngoc
2023-09-06T18:57:37Z
http://arxiv.org/abs/2309.03317v1
# Sub-Array Selection in Full-Duplex Massive MIMO for Enhanced Self-Interference Suppression ###### Abstract This study considers a novel full-duplex (FD) massive multiple-input multiple-output (mMIMO) system using hybrid beamforming (HBF) architecture, which allows for simultaneous uplink (UL) and downlink (DL) transmission over the same frequency band. Particularly, our objective is to mitigate the strong self-interference (SI) solely on the design of UL and DL RF beamforming stages jointly with sub-array selection (SAS) for transmit (Tx) and receive (Rx) sub-arrays at base station (BS). Based on the measured SI channel in an enechoic chamber, we propose a min-SI beamforming scheme with SAS, which applies perturbations to the beam directivity to enhance SI suppression in UL and DL beam directions. To solve this challenging non-convex optimization problem, we propose a swarm intelligence-based algorithmic solution to find the optimal perturbations as well as the Tx and Rx sub-arrays to minimize SI subject to the directivity degradation constraints for the UL and DL beams. The results show that the proposed min-SI BF scheme can achieve SI suppression as high as 78 dB in FD mimimo systems. ## I Introduction The ever-increasing demand for data traffic has presented a considerable challenge for future wireless communications systems, which must efficiently utilize the available frequency spectrum. In this regard, the full-duplex (FD) communications technology has demonstrated potential for significant improvement in spectral efficiency as compared to traditional frequency and time-division duplexing systems. The simultaneous transmission of uplink (UL) and downlink (DL) signals in the same frequency and time resources in FD communications has the potential to theoretically double the capacity by utilizing resources effectively [1]. Massive multiple-input multiple-output (mMIMO), which is a pivotal enabler of fifth-generation (5G) networks, utilizes large array structures at the base station (BS) to serve multiple users via spatial multiplexing. The three dimensional (3D) beamforming of mMIMO can further enhance the performance by exploiting the additional spatial degrees of freedom (DoF) offered by multiple transmitter (Tx) and receiver (Rx) antennas. Thus, FD and mMIMO together can fulfill the throughput and latency demands of 5G and beyond 5G (B5G) wireless communications systems with limited spectrum resources [2]. The simultaneous transmission and reception of FD communications over the same frequency band may sound like a promising solution, but it comes with a serious challenge: strong self-interference (SI). Contrary to half-duplex (HD) communications, SI, which is produced as a result of the strong transmit signal's coupling with the Rx chains, has a significant adverse effect on the performance of FD systems because it can impair the Rx antennas' ability to receive the UL signal. Many research efforts have focused on SI suppression in FD systems to fully utilize this technology [3, 4, 5]. In particular, different SI suppression techniques can be broadly classified as follows: 1) antenna isolation; 2) analog cancellation; and 3) digital cancellation [6, 7, 8]. In FD communications systems, antenna isolation, analog/digital SI cancellation (SIC), and their combinations have been used to effectively suppress the strong SI signal below the Rx noise level [9]. In 5G and B5G systems, there is a growing trend toward utilizing an increased number of antennas at BS. For instance, the third generation partnership project (3GPP) has been contemplating the deployment of 64-256 antenna configurations [10]. However, this poses a significant hurdle for analog SIC in FD mMIMO systems, as the associated analog complexity becomes prohibitively large as an increased number of antennas results in more SI components. To mitigate this challenge, SoftNull relies exclusively on transmit beamforming to mitigate SI, thereby completely obviating the need for analog cancelers [11]. In the realm of mMIMO HD systems, fully-digital beamforming (FDBF) and hybrid beamforming (HBF) are two common approaches for mitigating interference. Recent studies, for instance, SoftNull, have exploited the availability of multiple antennas in FD mMIMO systems in order to provide SI suppression via FDBF, commonly referred to as spatial suppression. However, FDBF becomes infeasible for mMIMO systems with very large array structures due to the following reasons: 1) prohibitively high cost; 2) complexity; and 3) energy consumption. Conversely, HBF, which involves the design of both the radio frequency (RF) and baseband (BB) stages, can approach the performance of FDBF by reducing the number of energy-intensive RF chains, thereby minimizing power consumption. In the related studies, various HBF techniques, which include HD transmission in DL and UL, are investigated in [12, 13, 14] and for FD transmission in [15, 16, 17, 18, 19]. In particular, the authors in [12, 13, 14] introduce different HBF techniques, where the RF stage is constructed utilizing users' angular information only. The angle-of-departure (AoD) and angle-of-arrival (AoA) information is used in [15] to propose a hybrid precoding/combining (HPC) technique for a millimeter-wave (mmWave) FD mMIMO system to suppress SI and decrease the number of RF chains. The authors in [16] introduced the HPC for an FD amplify-and-forward (AF) relay using correlated estimation errors to mitigate SI. For the multi-user (MU) FD mMIMO system in [17], the non-orthogonal beams are generated to serve multiple users to maximize sum-rate capacity while suppressing the strong SI. Similarly, the authors in [18] show that SI can be reduced by around 30 dB through the joint design of the transmit and receive RF beamformer weights, as well as the precoder and combiner matrices. A two-timescale HBF scheme for FD mmWave multiple-relay transmission is investigated in [19], where the analog and digital beams are updated based on channel samples and real-time low-dimensional effective channel state information (CSI) matrices, respectively. Most hybrid mMIMO systems consider either fully-connected (FC) or sub-array-connected (SAC) HBF architectures. Compared to FC, SAC requires a lower number of phase shifters (PSs). Thus, its use can reduce power consumption at the expense of some performance degradation; however, it can provide a better spectral-energy efficiency tradeoff [20]. Compared to HD transmissions, the use of SAC in FD mMIMO systems is limited. The use of SAC architecture both for Tx and Rx in FD transmissions can provide an additional DoF to suppress strong SI. To address this gap in the literature, this paper introduces a novel sub-array selection scheme (SAS) to suppress strong SI in FD mMIMO systems using a measured SI channel. To the best of our knowledge, this is the first work that considers SI suppression solely based on the design of the transmit and receive RF beamforming stages and SAS. Our objective here is to show that the SI level can be reduced to the noise floor by merely employing the spatial DoF that large array architectures afford, without the need for expensive and complicated analogue cancellation circuits. We propose a novel min-SI hybrid beamforming scheme, which applies perturbations to the beam directions jointly with SAS for enhanced SI suppression. To reduce the high computational complexity during the search for optimal perturbations, we propose a swarm intelligence-based algorithmic solution to find the optimal perturbations, and the Tx/Rx sub-arrays to minimize SI while satisfying the directivity degradation constraints for the UL and DL beams. The results show that the proposed min-SI BF scheme together with SAS can achieve SI suppression as high as 78 dB in real-time implementations. ## II System Model & Measured SI Channel ### _System Model_ We consider a single-cell FD mMIMO system for joint DL and UL transmission as shown in Fig. 1. Here, the BS operates in FD mode to simultaneously serve \(K_{D}\) DL and \(K_{U}\) UL single-antenna UEs over the same frequency band, while the UEs operate in HD mode due to the hardware/software constraints on UEs (e.g., low power consumption, limited signal processing and active/passive SI suppression capability). As shown in Fig. 2, the BS is equipped with transmit/receive uniform rectangular arrays (URAs), which are separated by an antenna isolation block for passive (i.e., propagation domain) SI suppression. Specifically, the transmit (receive) URA has \(M_{D}=M_{D}^{(x)}\times M_{D}^{(y)}(M_{U}=M_{U}^{(x)}\times M_{U}^{(y)})\) antennas, where \(M_{D}^{(x)}(M_{U}^{(x)})\) and \(M_{D}^{(y)}(M_{U}^{(y)})\) denote the number of transmit (receive) antennas along \(x\)-axis and \(y\)-axis, respectively. For the proposed FD mMIMO system, we consider the DL signal is processed through DL BB stage \(\mathbf{B}_{D}\in\mathbb{C}^{N_{D}\times K_{D}}\) and DL RF beamformer \(\mathbf{F}_{D}\in\mathbb{C}^{M_{D}\times N_{D}}\), where \(N_{D}\) is the number of RF chains such that \(K_{D}\leq N_{D}\ll M_{D}\). Similarly, the received UL signal at BS is processed through UL RF beamformer \(\mathbf{F}_{U}\in\mathbb{C}^{N_{U}\times M_{U}}\) and UL BB combiner \(\mathbf{B}_{U}\in\mathbb{C}^{K_{U}\times N_{U}}\) by utilizing \(K_{U}\leq N_{U}\ll M_{U}\) RF chains. Here, the UL and DL RF beamforming stages (i.e., \(\mathbf{F}_{U}\) and \(\mathbf{F}_{D}\)) are built using low-cost PSs. The DL channel matrix is denoted as \(\mathbf{H}_{D}\in\mathbb{C}^{K_{D}\times M_{D}}\) with \(\mathbf{h}_{D,k}\in\mathbb{C}^{M_{D}}\) as the \(k^{th}\) DL UE channel vector. Similarly, \(\mathbf{H}_{U}\in\mathbb{C}^{M_{U}\times K_{U}}\) is the UL channel matrix with \(\mathbf{h}_{U,k}\in\mathbb{C}^{M_{U}}\) as the \(k^{th}\) UL UE channel vector. Due to the FD transmission, the SI channel matrix \(\mathbf{H}_{SI}\in\mathbb{C}^{M_{U}\times M_{D}}\) is present between Tx and Rx antennas at the BS. For the DL transmission, the transmitted signal vector at the BS is defined as \(\mathbf{s}_{D}=\mathbf{F}_{D}\mathbf{B}_{D}\mathbf{d}_{D}\in\mathbb{C}^{M_{D}}\), where \(\mathbf{d}_{D}=\left[d_{D,1},\cdots,d_{D,K_{D}}\right]^{T}\in\mathbb{C}^{K_{D}}\) is the DL data signal vector such that \(\mathbb{E}\{\mathbf{d}_{D}\mathbf{d}_{D}^{H}\}=\mathbf{I}_{K_{D}}\). The transmitted signal vector satisfies the maximum DL transmit power constraint, which is \(\mathbb{E}\{||\mathbf{s}_{D}||^{2}\}=\mathrm{tr}(\mathbf{F}_{D}\mathbf{B}_{D }\mathbf{B}_{D}^{H}\mathbf{F}_{D}^{H})\leq P_{D}\), where \(P_{D}\) is the total DL transmit power. Then, the received DL signal vector is given as follows: \[\mathbf{r}_{D}=\underbrace{\mathbf{H}_{D}\mathbf{F}_{D}\mathbf{B}_{D}\mathbf{ d}_{D}}_{\text{Desired Signal}}+\underbrace{\mathbf{H}_{U}\mathbf{d}_{U}}_{\text{IUI by ULUE}}+\underbrace{\mathbf{w}_{D}}_{\text{Noise}}, \tag{1}\] where \(\mathbf{H}_{U}\in\mathbb{C}^{K_{D}\times K_{U}}\) is the inter-user interference (IUI) between the DL/UL UE and \(\mathbf{w}_{D}=\left[w_{D,1},\cdots,w_{D,K_{D}}\right]^{T}\sim\) Fig. 1: System model of FD mMIMO HBF communications system. Fig. 2: Tx and Rx antenna setup in anechoic chamber. \(\mathcal{CN}\left(0,\sigma_{W}^{2}\mathbf{I}_{K_{D}}\right)\) is the complex circularly symmetric Gaussian noise vector. Here, we define \(P_{U}\) as the transmit power of each UL UE. Similar to the DL data signal vector, the UL received signal at BS can be written as follows: \[\bar{\mathbf{r}}_{U}= \underbrace{\mathbf{B}_{U}\mathbf{F}_{U}\mathbf{H}_{U}\mathbf{d}_{ U}}_{\text{Desired Signal}}+\underbrace{\mathbf{B}_{U}\mathbf{F}_{U}\mathbf{H}_{S} \mathbf{F}_{D}\mathbf{B}_{D}\mathbf{d}_{D}+\underbrace{\mathbf{w}_{U}}_{\text{ Midified Noise}}}_{\text{Mobile Model Noise}}, \tag{2}\] where \(\mathbf{d}_{U}=\left[d_{U,1},\cdots,d_{U,K_{U}}\right]^{T}\in\mathbb{C}^{K_{U}}\) is the UL data signal vector such that \(\text{E}\left\{\mathbf{d}_{U}\mathbf{d}_{U}^{H}\right\}=\mathbf{I}_{K_{U}}\) and \(\tilde{\mathbf{w}_{U}}=\mathbf{B}_{U}\mathbf{F}_{U}\mathbf{w}_{U}\), where \(\mathbf{w}_{U}=\left[w_{u,1},\cdots,w_{U,K_{U}}\right]^{T}\sim\mathcal{CN}(0, \sigma_{W}^{2}\mathbf{I}_{K_{U}})\) is the complex circularly symmetric Gaussian noise vector. The desirable DL (UL) beam direction has azimuth and elevation angles \(\theta_{D}(\theta_{U})\) and \(\psi_{D}(\psi_{U})\), respectively. For simplicity, we consider the following: 1) a single UL and DL UE (i.e., \(K_{D}=K_{U}=1\)) 1; and 2) a uniform linear sub-array, where \(\psi_{D}=\psi_{U}=90^{\circ}\). Then, the phase response vectors of the DL and UL directions can be written as follows: Footnote 1: For simplicity in presentation, in the following discussion, we consider a simple scenario of single UL and a single DL UE. However, the proposed scheme can be applied to multiple UL and DL UEs, which is left as our future work. \[\mathbf{\Phi}_{D}(\theta_{D})= \llbracket\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! problem defined in (9) is non-convex and intractable due to the non-linearity constraints. ## III Tx and Rx Sub-Array Mapping and Proposed Joint Min-SI BF and SAS In this section, our objectives are to suppress strong SI solely based on the design of min-SI RF-BF stages \(\mathbf{f}_{U}\) and \(\mathbf{f}_{D}\) jointly with SAS to provide an additional DoF in FD mMIMO systems, which can avoid the use of costly analog cancellation circuits. In Fig. 3(a), the antenna mapping is shown for both Tx and Rx of BS, which consists of 64 elements at BS and separated by an antenna isolation block. At first, we discuss the sub-array mapping for our given Tx/Rx setup. ### _Sub-Array Mapping_ We consider the following two different sub-array configurations for Tx and Rx: 1) 1\(\times\)4 sub-array; and 2) 1\(\times\)8 sub-array. Given 64 Tx or Rx antenna elements, we can have 16 possible Tx and Rx sub-arrays of 1\(\times\)4 elements, which are arranged in the form of ULA. Fig. 3(b) depicts the mapping of 16 different 1\(\times\)4 sub-arrays for both Tx and Rx. For instance, sub-array 1 for Tx and Rx constitutes antenna elements with index values 1,9,17,25. It can be seen that using 1\(\times\)4 sub-arrays at Tx and Rx can give rise to \(16\times 16=256\) possible combinations for the Tx and Rx sub-array selection, which can be computationally expensive. Similarly, Fig. 3(c) shows the mapping for different 1\(\times\)8 sub-arrays for both Tx and Rx. For instance, sub-array 1 for Tx and Rx now constitutes antennas with indices 1,9,17,25,33,41,49,57. The selection of 1\(\times\)8 Tx and 1\(\times\)8 Rx sub-array gives rise to \(8\times\)8 \(=64\) possible combinations for SAS. ### _Min-SI BF with SAS_ We propose a particle swarm optimization (PSO)-based min-SI BF with SAS scheme to find the optimal DL and UL beam directions \(\hat{\theta}_{D},\hat{\theta}_{U}\) together with Tx sub-array index \(i\) and Rx sub-array index \(j\) to minimize SI while satisfying the corresponding directivity degradation constraints \(C_{1}\) and \(C_{2}\). The algorithm starts with a swarm of \(N_{p}\) particles, each with its own position, velocity, and fitness value, which are randomly placed in optimization search space of perturbation coefficients. During a total of \(T\) iterations, the particle \(p\) communicates with each other, and move for the exploration of the optimization space to find the optimal solution. Here, we define the perturbation vector \(\mathbf{X}_{p}^{(t)}\) as follows: \[\mathbf{X}_{p}^{(t)}=[\hat{\theta}_{D}^{p},\hat{\theta}_{U}^{p},i^{p},j^{p}], \tag{10}\] where \(p=1,\ldots,N_{p}\) and \(t=0,1,\ldots,T\). For each \(p^{th}\) particle, by substituting (10) in (5) and (6), the DL and UL RF beamformers \(\mathbf{f}_{D}(\mathbf{X}_{p}^{(t)})\) and \(\mathbf{f}_{U}(\mathbf{X}_{p}^{(t)})\) can be obtained as function of perturbation angles \(\hat{\theta}_{D}^{p}\) and \(\hat{\theta}_{U}^{p}\), respectively. By using (7), we can write the achieved SI suppression as follows: \[\mathbf{A}_{\text{SI}}(\mathbf{X}_{p}^{(t)})\!\!=\!\!-10\!\log_{10}\!\!\left( \frac{1}{\!N}\!\sum_{n}\!\!\left[\!\mathbf{f}_{U}^{T}(\mathbf{x}_{p}^{(t)}) \mathbf{H}_{SI}(\mathbf{X}_{p}^{(t)})\mathbf{f}_{D}(\mathbf{X}_{p}^{(t)}) \right]^{2}\!\right)\!. \tag{11}\] At the \(t^{th}\) iteration, the personal best for the \(p^{th}\) particle and the current global best among all particles are respectively found as follows: \[\mathbf{X}_{\mathrm{best},p}^{(t)}=\operatorname*{arg\,min}_{\mathbf{X}_{p}^{ (t^{*})},\forall t^{*}=0,1,\cdots,t}\mathbf{A}_{\text{SI}}(\mathbf{X}_{p}^{( t^{*})}), \tag{12}\] The convergence of the proposed PSO-based min-SI BF with SAS for enhanced SI suppression depends on the velocity vector \(\mathbf{v}_{p}\) for both personal best \(\mathbf{X}_{\mathrm{best},p}\) and global best \(\mathbf{X}_{\mathrm{best}}\) solutions, which is defined as follows: \[\mathbf{v}_{p}^{(t+1)}\!\!=\!\mathbf{\Omega}_{1}(\mathbf{X}_{\mathrm{best}}^{ (t)}\!\!-\!\!\mathbf{X}_{p}^{(t)})\!\!+\!\!\mathbf{\Omega}_{2}(\mathbf{X}_{ \mathrm{best},p}^{(t)}\!\!\!-\!\!\mathbf{X}_{p}^{(t)})\!\!+\!\!\mathbf{\Omega }_{3}^{(t)}v_{p}^{(t)}, \tag{14}\] where \(\mathbf{v}_{p}^{(t)}\) is the velocity of the \(p^{th}\) particle at the \(t^{th}\) iteration, \(\mathbf{\Omega}_{1},\mathbf{\Omega}_{2}\) are the random diagonal matrices with the uniformly distributed entries over \([0,2]\) and represent the social relations among the particles, and the tendency of a given particle for moving towards its personal best, respectively. Here, \(\mathbf{\Omega}_{3}=\big{(}\frac{T-1}{T}\big{)}\)\(\mathbf{I}_{(2N_{D}+2N_{U})}\) is the diagonal inertia weight matrix, which finds the balance between exploration and exploitation for optimal solution in search space. By using (14), the position of each particle during \(t^{th}\) iteration is updated as: \[\mathbf{X}_{p}^{(t+1)}=\mathrm{clip}\left(\mathbf{X}_{p}^{(t)}+\mathbf{v}_{p} ^{(t+1)},\mathbf{X}_{\text{Low}},\mathbf{X}_{\text{Upp}}\right), \tag{15}\] where \(\mathbf{X}_{\text{Low}}\in\mathbb{R}^{(2N_{D}+2N_{U})}\) and \(\mathbf{X}_{\text{Upp}}\in\mathbb{R}^{(2N_{D}+2N_{U})}\) are the lower-bound and upper-bound vectors for the perturbation coefficients, respectively, and are constructed according to the earlier defined boundaries of each perturbation coefficient given in \(C_{1}\) and \(C_{2}\). Here, we define \(\mathrm{clip}(x,a,b)=\min(\max(x,a),b)\) as the clipping function to avoid exceeding the bounds. Furthermore, different from the sub-optimal approach, we here consider each perturbation coefficient as a continuous variable inside its boundary. The proposed perturbation-based SI minimization with SAS scheme using PSO is summarized in Algorithm 1. ``` Input:\(N_{p},T\), \(\mathbf{H}_{SI}\), \((\theta_{D},\psi_{D})\), \((\theta_{U},\psi_{U})\). Output:\(\hat{\theta}_{D},\hat{\theta}_{U},i,j\). 1for\(t=0:T\)do 2for\(p=1:N_{p}\)do 3ift = 0then 4 Initialize the velocity as \(\mathbf{v}_{p}^{(0)}=\mathbf{0}\). 5 Initialize \(\mathbf{X}_{p}^{(t)}\) uniformly distributed in \([\mathbf{X}_{\text{Low}},\mathbf{X}_{\text{Upp}}]\). 6else 7 Update the velocity \(\mathbf{v}_{p}^{(t)}\) via (14). 8 Update the perturbation \(\mathbf{X}_{p}^{(t)}\) via (15). 9 end if 10 Find the personal best \(\mathbf{X}_{p,\mathrm{best},n}^{(t)}\) via (12). 11 end for 12 Find the global best \(\mathbf{X}_{\text{best}}^{(t)}\) as in (13). 13 end for ``` **Algorithm 1**Min-SI BF with SAS Algorithm ## IV Illustrative Results In this section, we present the Monte Carlo simulation results to illustrate the performance of the proposed SI suppression technique in FD mMIMO systems. Particularly, we investigate the amount of achieved SI suppression by the design of min-SI RF-BF stages together with SAS. We consider \(N_{D}=N_{U}=1\) RF chain to serve a single UL and DL UE with 1\(\times\)4 and 1\(\times\)8 sub-array configurations for the results presented hereafter. For PSO, we use \(N_{p}=20,\Omega_{1}=\Omega_{2}=2\) and \(\Omega_{3}=1.1\). In Fig. 4, we plot the beampatterns using both 1\(\times\)4 and 1\(\times\)8 sub-arrays for six different angular locations of UL/DL UE (i.e., \(\{\theta_{D},\theta_{U}\}\in\{15^{\circ}:30^{\circ}:180^{\circ}\}\)). In particular, we refer the case when the beams generated by the UL and DL RF beamformers are steered at exact UE locations (i.e., both \(\mathbf{f}_{D}(\theta_{D})\) and \(\mathbf{f}_{U}(\theta_{U})\) steer the beams at \(\theta_{D}\) and \(\theta_{U}\), respectively) as directivity-based beamforming (DBF). It can be seen that 1\(\times\)8 sub-array can generate narrower beams when compared to 1\(\times\)4 sub-array, and can serve more number of users when compared to 1\(\times\)4 sub-array. However, due to the orthogonality, there is still a limitation on the number of the orthogonal UL/DL beams that can be generated with 1\(\times\)8 sub-array. As a result, using DBF restricts the maximum number of UL and DL users that can be served simultaneously in FD mMIMO systems. In the following, we present SI suppression results for 1\(\times\)4 and 1\(\times\)8 sub-array configurations using the proposed min-SI BF with SAS scheme. ### _SI Suppression Using 1\(\times\)4 Sub-Array_ Fig. 5 presents the results using min-SI BF with SAS for 1\(\times\)4 sub-array for both Tx and Rx. We consider DL and UL UE located at angular locations \(\theta_{D}=105^{\circ}\) and \(\theta_{U}=45^{\circ}\), respectively. It must be noted that compared to DBF RF beamformers \(\mathbf{f}_{D}(\theta_{D})\) and \(\mathbf{f}_{U}(\theta_{U})\), which direct the beams in the desired UE directions \(\theta_{D}\), \(\theta_{U}\), the min-SI RF beamformers with SAS introduce beam perturbations at \(\hat{\theta}_{D}\) and \(\hat{\theta}_{U}\) (i.e., \(\mathbf{f}_{D}(\hat{\theta}_{D})\) and \(\mathbf{f}_{U}(\hat{\theta}_{U})\)). The proposed scheme then finds the optimal perturbations as \(110.6^{\circ}\) and \(58^{\circ}\) for DL and UL beams, respectively. Moreover, as shown in Fig. 5(b), the optimal Tx and Rx sub-array indices are found to be 3 and 1, respectively which can achieve SI suppression of around 78.5 dB at the expense of directivity degradation of \(\epsilon=2\) dB. In Fig. 6, we compare the achieved SI using BW = 20 MHz for the following two schemes: 1) proposed min-SI BF with SAS; and 2) DBF. We consider 6 different angular locations for UL and DL UE (i.e., \(\{\theta_{D},\theta_{U}\}\in[15^{\circ}:30^{\circ}:180^{\circ}]\)). It can be seen that the design of RF beamformers \(\mathbf{f}_{D}(\theta_{D})\) and \(\mathbf{f}_{U}(\theta_{U})\) using DBF can achieve SI suppression ranging from 37.2 dB to 67.1 dB for different UL/DL UE angle pairs. On the other hand, the proposed min-SI BF scheme with SAS can achieve SI suppression ranging from 74.9 dB to 78.4 dB. This shows that the design of min-SI RF beamformers \(\mathbf{f}_{D}(\hat{\theta}_{D})\), \(\mathbf{f}_{U}(\hat{\theta}_{U})\) with SAS can provide an additional SI gain of 33 dB on average when compared to DBF, and can improve SI suppression by a maximum of 40.1 dB (e.g., for \(\theta_{D}=135^{\circ},\theta_{U}=15^{\circ}\), SI suppression improves from 37.2 dB to 78.1 dB.). Similarly, Fig. 7 shows the enhanced SI suppression for the proposed min-SI BF scheme with SAS using BW = 100 MHz. DBF can provide SI suppression ranging from 37.3 dB to 67 dB, whereas, the proposed min-SI BF with SAS can achieve SI suppression ranging from 67.7 dB to 76.6 dB. Thus, the proposed min-SI BF scheme can provide an SI suppression gain of around 30.3 dB on average with a maximum SI suppression gain of 37.4 dB. ### _SI Suppression Using 1\(\times\)8 Sub-Array_ In this section, we present the results by using 1\(\times\)8 sub-array for both Tx and Rx for a FD mMIMO system. Fig. 8 depicts the SI suppression results using the proposed min-SI BF approach with SAS for 6 different UL and DL angular locations using BW = 20 MHz. The use of a larger array structure can further suppress SI by generating narrower beams. Therefore, compared to SI suppression ranging between 39.5 dB and 69.5 dB for DBF, the proposed min-SI BF scheme can achieve SI suppression ranging from 71.1 dB to 77.4 dB. On average, the proposed scheme can provide an SI suppression Fig. 4: Beampatterns via DBF. (a) 1\(\times\)4 sub-array. (b) 1\(\times\)8 sub-array. Fig. 5: Proposed min-SI BF with SAS. (a) UL and DL beam perturbations. (b) Tx and Rx sub-array indices for SAS. Fig. 6: SI suppression using \(1\times 4\) sub-array at 20 MHz. (a) DBF. (b) min-SI BF with SAS. Fig. 7: SI suppression using 1\(\times\)4 sub-array at 100 MHz. (a) DBF. (b) min-SI BF with SAS. gain of around 24.8 dB with a maximum suppression gain of 33 dB at \(\theta_{D}=165^{\circ},\theta_{U}=15^{\circ}\). Fig. 9 depicts the achieved SI suppression using BW = 100 MHz. By designing \(\mathbf{f}_{D}(\theta_{D})\) and \(\mathbf{f}_{U}(\theta_{U})\) using DBF can achieve SI suppression between 39.6 and 69.2 dB, whereas, the proposed min-SI BF scheme can provide SI suppression ranging from 59.9 dB to 75.4 dB. Thus, the use of min-SI BF with SAS can provide an additional SI suppression of around 17.9 dB and a maximum SI suppression gain of 25.2 dB. Compared to SI suppression results with 20 MHz, a slightly lower SI suppression is achieved with BW of 100 MHz due to the use of larger number of frequency sampling points (as given in (7)). ## V Conclusions In this paper, we have considered a novel FD mMIMO systems using HBF architecture for simultaneous UL and DL transmission over the same frequency band. In particular, we have addressed the optimization problem of suppressing the strong SI solely on the design of UL and DL RF beamforming stages jointly with Tx and Rx SAS. Based on the measured SI channel, we have proposed a novel min-SI BF scheme jointly with SAS for both Tx and Rx sub-arrays. To solve this challenging non-convex problem, we have proposed a swarm intelligence-based algorithmic solution to find the optimal perturbations as well as the Tx and Rx sub-arrays while satisfying the directivity degradation constraints for the UL and DL beams. The results show that min-SI BF scheme together with SAS can achieve high SI suppression when compared to DBF for both 1\(\times\)4 and 1\(\times\)8 sub-array configurations, and can achieve SI suppression as high as 78 dB for FD mMIMO systems.
2309.07171
Feasibility studies for imaging e$^{+}$e$^{-}$ annihilation with modular multi-strip detectors
Studies based on imaging the annihilation of the electron (e$^{-}$) and its antiparticle positron (e$^{+}$) open up several interesting applications in nuclear medicine and fundamental research. The annihilation process involves both the direct conversion of e$^{+}$e$^{-}$ into photons and the formation of their atomically bound state, the positronium atom (Ps), which can be used as a probe for fundamental studies. With the ability to produce large quantities of Ps, manipulate them in long-lived Ps states, and image their annihilations after a free fall or after passing through atomic interferometers, this purely leptonic antimatter system can be used to perform inertial sensing studies in view of a direct test of Einstein equivalence principle. It is envisioned that modular multistrip detectors can be exploited as potential detection units for this kind of studies. In this work, we report the results of the first feasibility study performed on a e$^{+}$ beamline using two detection modules to evaluate their reconstruction performance and spatial resolution for imaging e$^{+}$e$^{-}$ annihilations and thus their applicability for gravitational studies of Ps.
S. Sharma, L. Povolo, S. Mariazzi, G. Korcyl, K. Kacprzak, D. Kumar, S. Niedzwiecki, J. Baran, E. Beyene, R. S. Brusa, R. Caravita, N. Chug, A. Coussat, C. Curceanu, E. Czerwinski, M. Dadgar, M. Das, K. Dulski, K. Eliyan, A. Gajos, N. Gupta, B. C. Hiesmayr, L. Kaplon, T. Kaplanoglu, K. Klimaszewski, P. Konieczka, T. Kozik, M. K. Kozani, W. Krzemien, S. Moyo, W. Mryka, L. Penasa, S. Parzych, E. Perez. Del Rio, L. Raczynski, Shivani, R. Y Shopa, M. Skurzok, E. L. Stepien, P. Tanty, F. Tayefi, K. Tayefi, W. Wislicki, P. Moskal
2023-09-12T11:37:42Z
http://arxiv.org/abs/2309.07171v1
# Feasibility studies for imaging \(e^{+}e^{-}\) annihilation with modular multi-strip detectors ###### Abstract Studies based on imaging the annihilation of the electron (\(e^{-}\)) and its antiparticle positron (\(e^{+}\)) open up several interesting applications in nuclear medicine and fundamental research. The annihilation process involves both the direct conversion of \(e^{+}e^{-}\) into photons and the formation of their atomically bound state, the positronium atom (Ps), which can be used as a probe for fundamental studies. With the ability to produce large quantities of Ps, manipulate them in a long-lived Ps states, and image their annihilations after a free fall or after passing through atomic interferometers, this purely leptonic antimatter system can be used to perform inertial sensing studies in view of a direct test of Einstein's equivalence principle. It is envisioned that modular multi-strip detectors can be exploited as potential detection units for this kind of studies. In this work, we report the results of the first feasibility study performed on a \(e^{+}\) beamline using two detection modules to evaluate their reconstruction performance and spatial resolution for imaging \(e^{+}e^{-}\) annihilations and thus their applicability for gravitational studies of Ps. keywords: Position sensitive detectors, modular J-PET, positron and positronium beam, inertial sensing on Ps + Footnote †: journal: Nuclear Instruments and Methods in Physics Research A ## 1 Introduction The positron (\(e^{+}\)) is the lightest stable antiparticle and differs from other antimatter objects primarily in the sense that other antimatter objects require an accelerator for their creation in the laboratory [1; 2]. Since it is relatively easy to obtain \(e^{+}\) either by pair production processes or in \(\beta^{+}\) radioactive decays, it became popular shortly after its discovery [3]. High-energy positrons are produced on a large scale in accelerator facilities such as the Large Hadron Collider (LHC) [4] and the Beijing Electron Positron Collider Upgrade (BEPCII) [5]. The study of their collisions with electrons or protons enables the exploration of the fundamental constituents of matter, the study of particle interactions, and even the search for new particles or phenomena beyond the Standard Model (BSM). While low-energy positrons (up to tens of keV) have a variety of applications, such as a non-invasive tracer in medical imaging [6; 7; 8; 9; 10], \(e^{+}\) annihilation-based techniques are used to identify and characterise defects in materials [11], and in fundamental physics [12; 13]. Moreover, \(e^{+}\) can form a metastable atom when interacting with \(e^{-}\), the positronium atom (Ps) [14], which is a purely leptonic object and an excellent two-body system for testing non-relativistic quantum electrodynamics (nrQED) in the bound state [15; 16]. Ps can be formed in one of two possible ground states: spin 0 state, known as para-Ps (p-Ps, \({}^{1}S_{0}\)), which is short-lived (125 ps), or spin 1 state, long-lived state of Ps (142 ns), also known as ortho-Ps (o-Ps, \({}^{3}S_{1}\)) [14]. The study of the decays of ortho-positronium atoms has been used for a deeper understanding of the fundamental symmetries [13; 17]. With the ability to populate o-Ps in excited states through laser manipulation [15; 18], its lifetime can be enhanced by more than one order of magnitude [19; 20]. Positronium atoms in Rydberg or 2\({}^{3}\)S Ps state have been postulated as a potential probe for performing inertial sensing studies towards a direct test of Einstein's equivalence principle on antimatter [21; 22; 23]. In particular, the proposed studies on 2\({}^{3}\)S Ps are based on the application of the technique of atomic interferometry/deflectometry to measure gravitational effects on Ps atoms. The experimental scheme described by Mariazzi et al. [24] requires a beam of 2\({}^{3}\)S Ps atoms, optimization of the parameters for the interferometer setup, and position-sensitive detectors with sub-nm spatial resolution to study the fringe pattern formed as the Ps atoms pass through the interferometers. As suggested in Ref. [24], such resolution could be achieved by scanning the fringe pattern with a material grating of the same periodicity of the fringe moved by a piezoelectric actuator. A position sensitive detector can be used to count the Ps annihilations on the grating [25]. An additional stopper can be placed 10-20 mm behind the moving grating to detect the Ps atoms crossing the grating. Ps annihilations on the grating and on the stopper can be distinguished if the spatial resolution of the detector is better than the distance between the grating and stopper. This requirement places a limitation on the detector to be used, as it should have a spatial resolution of 10-20 mm. Imaging techniques must be used to reconstruct the annihilation vertices. Therefore, detectors with good time-of-flight (TOF) resolution are preferable. The modular multi-strip detection units based on plastic scintillators developed by the J-PET (Jagiellonian-PET) collaboration [26; 27; 28; 29; 30] are a good solution for this type of measurements. The detection modules can be operated individually or in pairs and are suitable as position-sensitive detectors for inertial sensing studies to reconstruct annihilation vertices. To investigate the feasibility of imaging \(e^{+}e^{-}\) annihilations with two modular detection units and to evaluate their reconstruction performance, a pilot measurement was performed at the \(e^{+}\) beamline of the Anti-Matter Laboratory (AML) in Trento. The characteristic details of the detectors will be discussed in the next section. Section 3 describes the experimental details, followed by the results (in Section 4). Section 5 provides the summary and an outlook of the studies. ## 2 Modular Jagiellonian Positron Emission Tomograph (Modular J-PET) The modular J-PET is based on the design of stand-alone detection modules with connected front-end electronics (see Fig. 1 (a)). Each module consists of densely packed 13 plastic scintillators of dimension 500\(\times\)24\(\times\)6 mm\({}^{3}\) glued on both sides to a 1\(\times\)4 matrix of silicon photomultipliers (SiPMs) (Fig. 1(a,b)). The signals from the SiPMs are read out using a newly developed electronic front-end board that enables signal sampling in the voltage domain with an accuracy of 20 ps [31]. Data are stored with Field Programmable Gate Array (FPGAs) in triggerless mode, which is easily reconfigurable [32]. For the estimation of energy deposition by the photon interaction inside plastic scintillators, the time-over-threshold method (TOT) is adapted instead of the charge collection method [33]. The hit position and hit time are estimated by measuring the arrival time of light signals at each end of the scintillator [26]. The total length of a single module is 90.6 cm, including the length of the associated front-end electronic boards, and the width is 9 cm. A single module weighs less than 2 kg. The J-PET collaboration constructed 24 of such detection modules for positron emission tomography applications, which can be assembled into a cylinder with a diameter of 76.2 cm and an axial field of view of 50 cm [34]. The applications of detection modules with positron and positronium beams have been discussed in a previous work [35]. The technical details of the modules and the algorithm for data analysis are presented in the section 4. ## 3 Experimental setup and data measurement To evaluate the performance of modular detectors for the reconstruction of the vertices of \(e^{+}\) annihilations, two detection units were brought to the AML laboratory of the University of Trento in Italy. A new beamline has recently been commissioned that can deliver a continuous positron beam with a spot diameter of less than 5 mm. The details of the positron beamline will be reported elsewhere [36]. To perform the experiment, a flange was used as a beam terminator, representing the origin surface of the two counter-propagating 511 keV photons from \(e^{+}\) annihilation. The annihilation photons were registered by modular J-PET detection units placed on each side at a distance of 10 cm from the centre of the flange where the annihilation spots are expected to be (Fig. 2). The red dot in the image shows the centre of the flange. The signals from the SiPMs are processed by FTAB boards (combination of front-end electronics, TDCs and readouts channels) using an FPGA-based controller board and stored in external memory via a fast data transfer switch. In addition, to monitor the \(e^{+}\) beam rate, a NaI(Tl) single crystal of dimension \(3^{"}\times 3^{"}\) was aligned at a distance of about 25 cm behind the flange. The crystal was surrounded by a 5 mm thick Figure 1: (a) Shows a single module with complete signal readout chain. The individual scintillators are wrapped first with Vikuiti and then with black foils. (b) represents the edge of the scintillator before the wrapping, which is attached to the matrix of SiPMs. Figure 2: The picture shows the experimental setup with two modular J-PET detection modules placed 10 cm apart on either side of the positron beam and centered on a flange where positrons are to be annihilated (red dot on the flange). A NaI(Tl) single crystal detector is placed at 25 cm distance from the flange, aligned with the axis of the \(e^{+}\) beam. Data are acousitised by an FPGA-based controller board and eventually stored on the computer hard disk via a fast data transfer switch. cylindrical tungsten shield to reduce the number of unwanted counts. Positron rate was recorded every 10 minutes by integrating the 511 keV photon peak after correction for background and attenuation factor caused by the flange material. ## 4 Results ### Low-level data reconstruction The binary data recorded by the FPGA boards are processed using the dedicated data analysis framework developed by the J-PET collaboration [37]. The procedures are divided into steps that start with reading the timestamps of the DAQ channels and end with categorized physical events for the further analysis. The data were collected in triggerless mode and the binary data packets are collected in time slots of 50 microseconds. The signals are reconstructed for each SiPM using the measured timestamps at two thresholds (30 mV and 70 mV) for rising and falling edges, TOT is calculated using a rectangular approximation, as shown in Fig. 3. Signals from up to 4 SiPMs located at the end of each strip are combined into a matrix signal. The arrival time of the signal is calculated using the average values of the SiPMs signals found within 1.3 ns coincidence. The average of the measured TOT values on all SiPMs on each side of the scintillators gives the measure of energy deposition for a given interaction. For the \(e^{+}\) annihilation spot imaging we are using line-of-response (LOR) and time-of-flight (TOF), which require the reconstruction of time and position of photon interaction in scintillating strips. For J-PET modules, these two observables are estimated based on the measured time difference of the light signals arriving at both ends of the scintillators and read out by photomultipliers, and the estimated value of effective velocity of light in plastic strips [26]. Figure 3: Shows the schematic of signal readout through a 1x4 matrix of SiPMs of a plastic scintillator at both edges. A photon interacts with the scintillator and deposits a certain energy. Based on the deposited energy, the signals obtained at each SiPM are probed at two thresholds of 30 mV and 70 mV. The deposited energy can be estimated by summing the TOTs calculated for the signals from all SiPMs at the fixed thresholds on either side of the scintillators. A signal from a SiPM is only considered if it has an amplitude higher than the lowest applied threshold (30 mV). The formula in the figure shows the case where the signal amplitude of all SiPMs on both sides of the scintillator exceeds the two applied thresholds for the calculation of TOT, which is used as a measure of energy deposition(Edep). ### Calibration of the detector The calibration of the electronic offsets for estimation of interaction time in plastic scintillators was performed using cosmic rays, when the e\({}^{+}\) beam was off. Figure 4(a) shows the pictorial representation of the detector placement in the experimental setup, where cosmic rays irradiate all scintillators equally. Due to the experimental constraint, the detectors were placed with a vertical angular displacement of 60\({}^{\circ}\). For the analysis of the measured data, the coordinate system was rearranged as right-handed contention (Fig. 4(b)), fixing the x-axis in the upward direction, the y-axis in the direction of the beam, and the z-axis along the axial length of the scintillators. The first step was to synchronise the time differences between signals registered at the opposite ends of the strips. To do this, the slopes of the time difference signal edges for each strip were compared and aligned. The top panel in Fig. 4 (c) shows the measured hit time difference of the signals from the scintillators versus their IDs before calibration and the panel below after calibration. The scintillators in both modules were assigned unique IDs ranging from 1 to 26 (each module has 13 scintillators). The calibration of TOF offsets for each strip is based on the selection of a pair of muon hits, with one hit occurring in the plastic scintillator of the upper module and the other in the lower. Assuming average velocity (\(V_{\mu}\)) of the muons is 29.8 cm/ns [38], we could calculate the TOF offsets for vertically aligned or adjacent strips according to the scheme shown in Fig. 4 (d). The offset for the middle strip (e.g. with ID = 20 of the upper module) was set to 0. Then the offset of the middle strip of the lower module (C\({}_{7}\)) can be estimated by calculating the difference between the measured TOF and the estimated TOF (d\({}_{20-7}/V_{\mu}\)), where d\({}_{20-7}\) is the distance that muon travels when interacting with strips IDs 20 and 7. For the neighbouring plastic strips, the estimated TOF offset was corrected, as shown in the example for strip ID 21 (C\({}_{21}\)). The same scheme was chosen to calculate the offsets of the other scintillators. The estimated offsets Figure 4: Calibration procedure used to synchronise the timing information as well as the TOF of the plastic scintillators. (a) Shows the placement of two detection modules around the \(e^{+}\) beam, with the red dashed lines representing the cosmic showers on the modules. (b) Coordinate system used for calibration and later for data analysis. The results of the hit time differences of the individual signals before (upper panel) and after (lower panel) the calibrations are shown in (c). In addition, the TOF offsets for the scintillators of the modules were calibrated according to the scheme described in (d). Values of the corrections for the TOF offsets for each scintillator strip are shown in (e), first iteration (left panel) and 20\({}^{th}\) iteration (right panel) calibration. The details of the calibration method are explained in the text. were optimised using the iterative approach. The procedure was repeated until the corrections calculated in the iteration were smaller than 50 ps. Figure 4(e) shows these corrections to TOF offsets as a function of scintillator IDs for first (left panel) and 20\({}^{th}\) iteration (right panel). ### Data analysis for Imaging \(e^{+}e^{-}\) annihilations For the reconstruction of \(e^{+}\) annihilation vertices, an algorithm has been developed to analyse events with 2-hits expected from 511 keV photons. The first selection criterion for choosing annihilation photons is based on the energy deposition measured as TOT in the context of J-PET data analysis framework. Fig. 5(a) shows a typical TOT spectra obtained for 511 keV photons. Hits are selected as annihilation candidates whose measured TOT values fall between selected range shown by the dashed lines. The selected candidates are further filtered out based on their emission time difference estimated from the centre of the flange. To calculate the emission time of the photon, the hit time of each of the two annihilation candidates is corrected by their estimated TOF. The last criterion applied is based on angular correlation. Hits are marked as back-to-back if the angular difference is between 175\({}^{0}\) - 180\({}^{0}\). The hit times of tagged annihilation candidates allow the calculation of the TOF as the difference of their arrival time, which is later used to reconstruct the annihilation point on the constructed LOR based on their hit positions. The TOF spectrum obtained is shown in Fig. 5. For the current setup, the value of TOF resolution (\(\sigma\)) is \(\approx\) 125 ps. Fig. 5(c,d) shows the reconstructed images of the annihilation vertices for the zx and zy planes. The projections on Figure 5: (a) Shows the TOT spectrum, which is a measure of energy deposition. Since the photons interact with the plastic scintillator mainly via Compton scattering, a Compton edge is expected for the maximum energy deposition, which is visible in the figure. The peak at lower TOT values corresponds to the lower energy deposits. Selecting the window of TOT values around the Compton edge, as shown in the figure, allows the interactions to be marked by 511 keV photons. (b) TOF spectrum estimated for the selected annihilation pairs. (c,d) show the images of the annihilation spots in the zx and zy planes reconstructed by the developed algorithm, and (e) show the projections of the reconstructed images in the x, y, and z axes, respectively. the x, y, and z axes are shown in Fig. 5(e). The spatial resolutions (\(\sigma\)) in the x, y, and z directions are 11, 4, and 13 mm, respectively. Furthermore, since we have access to the \(e^{+}\) beam rate, we could estimate the efficiency of two modules in mapping the \(e^{+}\)annihilation vertices. To calculate the reconstruction efficiency, we first estimated the total number of annihilation pairs incident on the detection modules using the \(e^{+}\) beam rate, corrected for the solid angle covered by the two detection units. Finally, the number of entries in the final spectra (Fig. 5(c, d)) is divided by the number of total annihilations. The estimated reconstruction efficiency is 14%. ## 5 Conclusions and outlook We have shown that the modular J-PET with only 2 multi-strip detection units has the potential for imaging the \(e^{+}e^{-}\) annihilation spots. The experiment was performed on a \(e^{+}\) beamline capable of delivering a continuous monoenergetic \(e^{+}\) beam with diameter of a few mm. The obtained TOF and spatial resolution (\(\sigma\)) are promising for the planned applications on gravitational tests. One of the main objectives of this study was to investigate the ability of the J-PET modules to distinguish annihilation spots that are within 10-20 mm, especially along the beam direction (y-axis). In the present study, we found that the resolution (\(\sigma\)) along the y-axis is about 4 mm, which is promising for the use of these detection modules as position sensitive detectors for inertial sensing measurements on Ps atoms [24; 35]. During these pilot studies, some limitations were identified. To cover the larger solid angles for annihilation photon registration, we placed the modules relatively close to the beamline, which hindered the ability to calibrate the modules with a point source of known activity as strips could not be irradiated uniformly, thus limiting the optimal calibration of the modules. In addition, a dedicated Monte Carlo simulation is required to validate the achieved reconstruction performance of the modules, which can also correctly estimate the attenuation caused by the flanges used and the efficiency of the analysis cuts. Nevertheless, we performed the calibration of the detectors for the first time with cosmic rays and were able to calibrate the detectors. The preliminary results show that the modular multi-strip detectors are able to reconstruct the annihlation vertices of the \(e^{+}\) beam spot with an efficiency of 14%, which could be further increased by optimizing the geometry of the detection modules. ## 6 Acknowledgements The authors acknowledge the technical and administrative support of A. Heczko, M. Kajetanowicz and W. Migdal. This work was supported by the Foundation for Polish Science through the TEAM POIR.04.04.00-00-4204/17program, the National Science Centre of Poland through grants MAESTRO no. 2021/42/A/ST2/00423, OPUS no. 2019/35/B/ST2/03562, Miniatura 6 no.2022/06/X/ST2/01444, the Ministry of Education and Science through grant no. SPUB/SP/490528/2021, the EU Horizon 2020 research and innovation programme, STRONG-2020 project, under grant agreement No 824093, and the SciMat and qLife Priority Research Areas budget under the program Excellence Initiative - Research Universityat the Jagiellonian University, and Jagiellonian University project no. CRP/0641.221.2020. B.C.H. acknowledges support of this research by the Austrian Science Fund (FWF) project P36102-N. The authors also gratefully acknowledge the support of Q@TN, the joint laboratory of the University of Trento, FBK-Fondazione Bruno Kessler, INFN-National Institute of Nuclear Physics, and CNR-National Research Council.
2309.06509
Homeostasis in Gene Regulatory Networks
In this paper, we use the framework of infinitesimal homeostasis to study general design principles for the occurrence of homeostasis in gene regulatory networks. We assume that the dynamics of the genes explicitly includes both transcription and translation, keeping track of both mRNA and protein concentrations. Given a GRN we construct an associated Protein-mRNA Network (PRN), where each individual (mRNA and protein) concentration corresponds to a node and the edges are defined in such a way that the PRN becomes a bipartite directed graph. By simultaneously working with the GRN and the PRN we are able to apply our previous results about the classification of homeostasis types (i.e., topologically defined homeostasis generating mechanism) and their corresponding homeostasis patterns. Given an arbitrarily large and complex GRN $\mathcal{G}$ and its associated PRN $\mathcal{R}$, we obtain a correspondence between all the homeostasis types (and homeostasis patterns) of $\mathcal{G}$ and a subset the homeostasis types (and homeostasis patterns) of $\mathcal{R}$. Moreover, we completely characterize the homeostasis types of the PRN that do not have GRN counterparts.
Fernando Antoneli, Martin Golubitsky, Jiaxin Jin, Ian Stewart
2023-09-12T18:31:29Z
http://arxiv.org/abs/2309.06509v1
# Homeostasis in Gene Regulatory Networks ###### Abstract Gene regulatory networks lie at the heart of many important intracellular signal transduction processes. A Gene Regulatory Network (GRN) is abstractly defined as a directed graph, where the nodes represent genes and the edges represent the causal regulatory interactions between genes. It can be used to construct mathematical models describing the time-varying concentrations of the several molecular species attached to each gene (node) in the network. In the deterministic setting this is typically implemented by a system of ordinary differential equations. A biological system exhibits homeostasis when there is a target quantity, called the input-output function, whose values stay within a narrow range, under relatively wide variation of an external parameter. A strong form of homeostasis, called infinitesimal homeostasis, occurs when the input-output function has a critical point. In this paper, we use the framework of infinitesimal homeostasis to study general design principles for the occurrence of homeostasis in gene regulatory networks. We assume that the dynamics of the genes explicitly includes both transcription and translation, keeping track of both mRNA and protein concentrations. Given a GRN we construct an associated Protein-mRNA Network (PRN), where each individual (mRNA and protein) concentration corresponds to a node and the edges are defined in such a way that the PRN becomes a bipartite directed graph. By simultaneously working with the GRN and the PRN we are able to apply our previous results about the classification of homeostasis types (i.e., topologically defined homeostasis generating mechanism) and their corresponding homeostasis patterns. Given an arbitrarily large and complex GRN \(\mathcal{G}\) and its associated PRN \(\mathcal{R}\), we obtain a correspondence between all the homeostasis types (and homeostasis patterns) of \(\mathcal{G}\) and a subset the homeostasis types (and homeostasis patterns) of \(\mathcal{R}\). Moreover, we completely characterize the homeostasis types of the PRN that do not have GRN counterparts. **Keywords:** Infinitesimal Homeostasis, Coupled Dynamical Systems, Input-Output Network, Robust Perfect Adaptation, Gene Expression, Gene Regulatory Network 1 Footnote 1: Centro de Bioinformática Mética, Universidade Federal de São Paulo, São Paulo, SP, Brazil 2 Footnote 2: Department of Mathematics, The Ohio State University, Columbus, OH, USA 3 Footnote 3: Department of Mathematics, The Ohio State University, Columbus, OH, USA 4 Footnote 4: Mathematics Institute, University of Warwick, Coventry, UK \({}^{*}\)Correspondence: [email protected], [email protected] ###### Contents * 1 Introduction * 1.1 Mathematical Modeling of Gene Regulatory Networks * 1.2 Gene Expression Homeostasis * 2 Gene Regulatory Networks * 2.1 From GRN to PRN * 2.2 Infinitesimal Homeostasis in PRN * 2.3 Infinitesimal Homeostasis in GRN * 3 Conclusion and Outlook * A Homeostasis in Input-Output Networks * A.1 Core Networks and Homeostasis Classes * A.2 Combinatorial Characterization of Homeostasis * A.3 Homeostasis Inducing and Homeostasis Patterns * B Infinitesimal Homeostasis in PRN and GRN * B.1 Simple Paths in the PRN * B.2 Homeostasis Subnetworks in GRN * B.3 Enumerating Homeostasis Subnetworks in GRN and PRN * C Homeostasis Patterns in PRN and GRN * C.1 Homeostasis Pattern Networks * C.2 Homeostasis Inducing in GRN and PRN ## 1 Introduction _Gene expression_ is the process by which the information encoded in a gene is turned into a biological function, which ultimately manifests itself as a phenotype effect. This is accomplished by a complex series of enzymatic chemical reactions within the cell leading to the synthesis of specific macro-molecules called the _gene product_. The process of gene expression is used by all known life - eukaryotes (including multicellular organisms), prokaryotes (bacteria and archaea), and even viruses - to generate the molecular machinery for life. There are, basically, two types of gene products: (i) for _protein-coding genes_ the gene product is a _protein_; (ii) _non-coding genes_, such as transfer RNA (tRNA) and small nuclear RNA (snRNA), the gene product is a functional _non-coding_ RNA (ncRNA). _Regulation of gene expression_, or simply _gene regulation_, is the range of mechanisms that are used by cells to increase or decrease the amount of specific gene products. Sophisticated schemes of gene expression are widely observed in biology, going from triggering developmental pathways, to responding to environmental stimuli. A _gene_ (or _genetic_) _regulatory network_ (GRN) is a collection of molecular regulators that interact with each other and with other substances in the cell to govern the gene expression levels of mRNA and proteins. The _molecular regulators_ can be DNA, RNA, protein or any combination of two, or more of these three that form a complex, such as a specific sequence of DNA and a transcription factor to activate that sequence. The interaction can be direct or indirect (through transcribed RNA or translated protein). When a protein acts as a regulator it is called a _transcription factor_, which is one of the main players in regulatory networks. By binding to the promoter region of a coding gene they turn them on, initiating the production of another protein, and so on. Transcription factors can be _excitatory_ (_activators_) or _inhibitory_ (_repressors_). Another fundamental concept in biology is that of _homeostasis_, which derives from the Greek language, and means 'to maintain a similar stasis' [5]. A prototypical biological example of homeostasis occurs in warm blooded mammals where the mammal's internal body temperature remains approximately constant on variation of the external temperature. The notion of homeostasis is often associated with regulating global physiological parameters like temperature, hormone levels, or concentrations of molecules in the bloodstream in complex multicellular organisms. However, it also can be applied to unicellular organisms, where the issue is how some internal cell state of interest (such as the concentration of some gene product) responds to changes in the intra-cellular and/or extra-cellular environment [34, 59, 53]. For instance, Antoneli et al. [1] study the occurrence of homeostasis in a feedforward loop motif from the GRN of _S. cerevisiae_ (see Figure 1). In a series of papers about homeostasis in systems of ODEs we have developed a comprehensive theory for its analysis and classification: (1) homeostasis can be formulated in terms of infinitesimal homeostasis and singularity theory [18, 19, 20], (2) infinitesimal homeostasis in biochemical networks where nodes represent one-dimensional concentrations of substrates can be studied in an abstract framework of 'input-output networks' [50, 65, 21], (3) infinitesimal homeostasis can be topologically characterized in terms of 'homeostasis types' on a general class of input-output networks [36, 37, 65], and (4) homeostasis types themselves can be classified in terms of 'homeostasis patterns' [9]. In this paper we build on these results to deal with infinitesimal homeostasis in gene regulatory networks (GRN). As we explain in Section 2 a GRN is not exactly a biochemical network as in [50]. There is a'mismatch' between the number of nodes in the network and the number of state variables of the underlying system of ODEs. In order to resolve this'mismatch', we generalize the approach of [1], used to analyze feedforward loops, to arbitrary GRNs. The idea there was to replace each 'gene' node of a GRN by a pair of 'protein' and'mRNA' nodes to obtain a _protein-mRNA network_ (PRN). Now, PRNs have a mathematical structure similar to that of biochemical networks and hence the theory of [9, 65] can be readily applied. More importantly, since the classification results have a purely combinatorial side, we can consider the GRN and its associated PRN simultaneously, and work out a correspondence between the classifications of homeostasis types and homeostasis patterns in both of them. Even though infinitesimal homeostasis only makes sense on the PRN (dynamical) level, its purely combinatorial aspects can be transferred to the GRN. The main result of this paper is a complete characterization of homeostasis types and homeostasis patterns on the PRN that have correspondent on the GRN. A byproduct of this characterization is the discovery of homeostasis types and homeostasis patterns on the PRN that do not have counterparts on the GRN. The novelty of our approach is the simultaneous use of two networks, the GRN and the PRN, in the analysis of gene expression homeostasis, and the lack of assumptions about the functional form of the differential equations. ### Mathematical Modeling of Gene Regulatory Networks The development of advanced experimental techniques in molecular biology is producing increasingly large amounts of experimental data on gene regulation. This, in turn, demands the development of mathematical modelling methods for the study and analysis of gene regulation. Mathematical models of GRNs describe both gene expression and regulation, and in some cases generate predictions that support experimental observations. Formally, a GRN is represented by a directed graph. Nodes represent the variables associated to genes (e.g., mRNA and/or protein concentration) and directed links represent couplings between genes (e.g., effect of one gene product on other genes). In any case, GRN models can be roughly divided into three classes (see [26]): 1. _Logical models._ This class of models aims to describe regulatory networks qualitatively. They allow users to obtain a basic understanding of the different functionalities of a given network under different conditions. Their qualitative nature makes them flexible and easy to fit to biological phenomena, although they can only answer qualitative questions. Among them, the most common approaches are those based on Boolean networks [27, 57, 60]. See also Barbuti et al. [2] for a comprehensive review. 2. _Continuous models._ This class of models allows us to understand and manipulate behaviours that depend on finer timing and exact molecular concentrations. For example, to simulate the effects of dietary restriction on yeast cells under different nutrient concentrations, users must resort to the finer resolution of continuous models. These models are best formulated in terms of nonlinear dynamical systems given by coupled systems of ordinary differential equations [15, 16, 17, 55, 62]. See also Polynakis et al. [47]. 3. _Stochastic models._ This class of models was introduced following the observation that the functionality of a regulatory network is often affected by noise [12]. As the majority of these models account for interactions between individual molecules, they are called single molecule level models [23, 28, 32, 40, 46]. See also Bocci et al. [4]. This paper focuses on the mathematical modelling of GRNs using coupled systems of ordinary differential equations. In the coupled ODE setting there is an important issue concerning the number of variables / equations associated to each node. We will return to this issue in Section 2. ### Gene Expression Homeostasis Homeostasis is often modeled using differential equations and, in this context, can be interpreted in two mathematically distinct ways. One boils down to the existence of a 'globally stable equilibrium'. Here changes in the environment are considered to be perturbations of _initial conditions_[30, 31, 61]. A stronger interpretation works with a _parametrized_ family of differential equations possessing a _stable equilibrium_. Now homeostasis means that a function of this equilibrium, the _input-output function_, changes by a relatively small amount when the _parameter_ varies by a much larger amount [44, 18, 59]. In this paper we adopt the second, stronger, interpretation and focus on the mathematical aspects of this concept. There is a large body of work about homeostasis from the standpoint of control theory. In this context, homeostasis is related to the more stringent notion of _robust perfect adaptation_. Now, the input-output function is exactly constant over the parameter range (see [13, 52, 29] for more details). We will not consider this stricter form of homeostasis here, but it is worth pointing out that our results do apply to this context, since robust perfect adaptation is a particular case of the notion of 'infinitesimal homeostasis' (see [36, 37]). Given a family of differential equations depending on a parameter \(\mathcal{I}\) and possessing a stable equilibrium \(X(\mathcal{I})\), we say that _homeostasis_ occurs in this system if the input-output function \(z(\mathcal{I})=\Phi(X(\mathcal{I}))\), where \(\Phi\) a smooth function, is approximately constant upon variation of \(\mathcal{I}\). Golubitsky and Stewart [18] observe that homeostasis on some neighborhood of a specific value \(\mathcal{I}_{0}\) follows from the occurrence of _infinitesimal homeostasis_ at \(\mathcal{I}_{0}\): \(\frac{dz}{d\mathcal{I}}(\mathcal{I}_{0})=0\). This observation is essentially the well-known fact that the value of a function changes most slowly near a stationary (or critical) point. Despite the name, 'infinitesimal homeostasis' often implies that the system is homeostatic over a relatively large interval of the parameter [20, Section 5.4]. The key quantity is the value of the _second derivative_ of \(z\) at \(\mathcal{I}_{0}\). As the second derivative becomes smaller, the interval of homeostasis becomes larger. In Antoneli et al. [1] the authors apply this formalism to find infinitesimal homeostasis in a small 3-node GRN called 'feedforward loop motif' (see Figure 1). Assuming that the regulation of the three genes is inhibitory (repression) and that only gene SPF1 is regulated by upstream transcription factors (the input parameter), they show that the protein concentration of gene GAP1 (the output node) robustly exhibits infinitesimal homeostasis with respect to variation on the regulation level of SPF1 over a wide range. Moreover, SPF1 and GZF3 (i.e., their protein concentrations) are not homeostatic for any value of the input parameter. Here, 'robustly' means that the occurrence of infinitesimal homeostasis (or not) on the protein concentrations of the three genes described above is persistent under variation of kinetic parameters of the defining differential equations (the rates of synthesis and degradation of the mRNA and protein concentrations). In order to obtain compatibility of the infinitesimal homeostasis formalism with the GRN structure they use the protein-mRNA network (PRN) representation. Moreover, in this particular setting, they were able to explicitly compute the homeostasis point by assuming a special functional form for the equations. In this paper, we consider arbitrary large and complex GRN, with the most general functional form for the dynamics. However, in this generality, it is no longer possible to explicitly compute homeostasis points. Structure of the Paper.In Section 2 we introduce the notion of GRN and its associated PRN. We recall the definition of input-output network, infinitesimal homeostasis, homeostasis types, homeostasis subnetworks and homeostasis patterns and show that this theory can be directly applied to PRNs. Next, we explain how the purely combinatorial part of the classification theory can be applied directly to the GRN and how this relates to the classification for the PRN. We illustrate the general theory with two paradigmatic examples: (a) feedforward loop and (b) feedback inhibition. The proofs of all the results in full generality are provided in the Appendix. In Appendix A, we recall the basic terminology of input-output networks and state the results regarding the combinatorial characterization of homeostasis types and homeostasis patterns. In Appendix B, we prove the results about infinitesimal homeostasis classification in the GRN and its associated PRN are related to each other. In Appendix C, we show how the structure of homeostasis patterns in the GRN and its associated PRN are related to each other. ## 2 Gene Regulatory Networks At the abstract level a GRN is a directed graph whose nodes are the genes and a directed link from a source gene to a target gene indicates that the gene product of the source gene acts as a molecular regulator of the target gene. _Autoregulation_ occurs when a link connects a gene to itself, that is, the gene product is a molecular regulator of the gene itself. At the dynamical level, a gene (a node in the GRN) represents the collection of processes that ultimately lead to the making of the gene product. A protein-coding gene should have at least two processes: (i) _transcription_, that is, the synthesis of a mRNA copy from a DNA strand and (ii) _translation_, that is, the synthesis of protein from a mRNA. The 'output' of the corresponding node in the GRN is the protein concentration at a given time. Assume the simplest scenario, namely, a GRN containing only protein-coding genes. Figure 1: Feedforward loop motif from the GRN of _S. cerevisiae_ (see Antoneli et al. [1]). Note that autoregulation implies that the protein concentration associated with the gene SFP1 affects directly the mRNA concentration associated with that gene. Then, each gene (node) represents two concentrations (the concentration of mRNA and the concentration of protein). That is, there is a mismatch between the number of nodes \(N\) and the number of state variables \(2N\). There are two ways to deal with this mismatch. 1. _Protein-protein formalism._ This approach is based on the fact that, in some cases, the changes to mRNA concentrations occur much faster than the changes to the concentrations of the associated proteins [47]. More specifically, the mRNA concentration quickly reaches a steady-state value before any protein is translated from it. Formally, this technique is called _quasi steady-state approximation_ (QSSA) [56]. Then, we can solve the steady-state mRNA equations and plug the result in the protein equations. This procedure effectively reduces the number of state variables by half, thus matching of the number of nodes in the GRN (see [48, 49, 26, 53]). 2. _Protein-mRNA formalism._ In this approach we keep the mRNA and protein concentrations for each gene and double the number of nodes of the network, leading to the notion of _protein-mRNA network_ (PRN) [35, 47]. Now the network has two 'types' of nodes (mRNA and protein) and two 'types' of arrows (mRNA \(\rightarrow\) protein and protein \(\rightarrow\) mRNA). As we will see below there is a correspondence between the two networks that allows us to transfer some properties back and forth. For instance, this is the approach adopted in Antoneli et al. [1] for the particular example of feedforward loop motif (see also [41, 23, 24]). There are mathematical and biological reasons to prefer the second possibility. From the mathematical point of view it is more convenient to work with the PRN [38]. More specifically, it allows us to use the general theory of network dynamics [20] to associate a natural class of ODEs to a PRN, which contains virtually all models discussed in the literature. Moreover, it makes it possible to apply the techniques developed in Wang et al. [65] and Duncan et al. [9] to classify the homeostasis types in PRN and GRN. Biologically, the use of protein-protein networks is more appropriate to model prokaryotic gene regulation, due to the fact that: (i) transcription and translation occur, almost simultaneously, within the cytoplasm of a cell due to the lack of a defined nucleus, (ii) the coding regions typically take up \(\sim 90\%\) of the genome, whereas the remaining \(\sim 10\%\) does not encode proteins, but most of it still has some biological function (e.g., genes for transfer RNA and ribosomal RNA). In this case, it is reasonable to assume that gene expression is regulated primarily at the transcriptional level. On the other hand, gene expression in eukaryotes is a much more complicated process, with several intermediate steps: transcription occurs in the nucleus, where mRNA is processed, modified and transported, and translation can occur in a variety of regions of the cell. In particular, several non-coding genes that are transcribed into functional non-coding RNA molecules, such as, microRNAs (miRNAs), short interfering RNAs (siRNAs) and long non-coding RNAs (lncRNAs), play an important role in eukaryotic gene regulation. As we will explain later, it is very easy to incorporate these regulatory elements into the framework of PRNs (see Remark 2.8). ### From GRN to PRN From now on we make the simplifying assumption that the GRN consists of protein-coding genes with transcription and translation. Since we are interested in gene expression homeostasis, we consider _input-output_ GRNs. They are supplied with an external parameter \(\mathcal{I}\) - e.g., environmental disturbance or transcription activity (a function of the concentration of transcription factors) - that affects the mRNA transcription of one gene, called the _input gene_\(\iota\) of the GRN. The protein concentration of a second gene, called the _output gene_\(o\) of the GRN, is the concentration where we expect to exhibit homeostasis. These two distinguished nodes are fixed throughout the analysis. We assume that only the input node is affected by the external parameter \(\mathcal{I}\). Given a GRN \(\mathcal{G}\) we construct an associated PRN \(\mathcal{R}\) as follows. Every node \(\rho\) in the GRN \(\mathcal{G}\) corresponds to two nodes in the associated PRN: \(\rho^{R}\) (the mRNA concentration of gene \(\rho\)) and \(\rho^{P}\) (the protein concentration of gene \(\rho\)). Since there is no intermediary process, the protein concentration \(\rho^{P}\) is affected only by the mRNA concentration \(\rho^{R}\). Hence there is a PRN arrow from \(\rho^{R}\rightarrow\rho^{P}\) and no other PRN arrow has head node \(\rho^{P}\). Next, each GRN arrow \(\sigma\rightarrow\rho\) leads to a PRN arrow from the protein concentration \(\sigma^{P}\) to the mRNA concentration \(\rho^{R}\) (that is, \(\sigma\) is a transcription factor of \(\rho\)). Note that each arrow in the GRN leads to a single arrow in the PRN. In particular, autoregulation in \(\rho\) leads to an arrow of \(\rho^{P}\rightarrow\rho^{R}\). Finally, if the GRN is an input-output network with input gene \(\iota\) and output gene \(o\), then the associated PRN is an input-output network with input node \(\iota^{R}\) and output node \(o^{P}\). It follows that a PRN is always a _bipartite digraph (directed graph)_[45], where the two distinguished subsets of nodes are the \(\rho^{R}\) nodes and the \(\rho^{P}\) nodes. For example, the abstract GRN corresponding to the feedforward loop motif shown in Figure 1 is the 3-node input-output network shown in Figure 2(a). Its associated PRN is the 6-node input-output network shown in Figure 2(b). Following [20], it is straightforward to associate a class of ODEs (vector fields) to a PRN. Figure 2: **Feedforward loop.** (a) The 3-node input-output GRN. Triangles designate genes and dashed arrows designate either gene coupling or auto-regulation. (b) The corresponding 6-node PRN. Circles designate mRNA concentrations and squares designate protein concentrations. Solid lines stand for \({}^{R}\longrightarrow{}^{P}\) coupling inside a single gene and dashed lines for \({}^{P}\dashrightarrow{}^{R}\) coupling between genes, that is, the couplings (arrows) inherited from the GRN. The class of ODEs that are compatible with the network structure are called _admissible_ (see Section 2.2). To simplify notation we use the name of each node to refer to the one-dimensional coordinate corresponding to that node. **Example 2.1**.: Consider the 3-node feedforward loop GRN shown in Figure 2a. Its associated PRN is the 6-node network shown in Figure 2b. To each node of the PRN corresponds a 1-dimensional state variable. Hence the total state of the system is given by a vector \((\iota^{R},\iota^{P},\rho^{R},\rho^{P},o^{R},o^{P})\in\mathbf{R}^{6}\). The general admissible system of ODEs is \[\begin{split}\dot{\iota}^{R}&=f_{\iota^{R}}(\iota^ {R},\iota^{P},\mathcal{I})\\ \dot{\iota}^{P}&=f_{\iota^{P}}(\iota^{R},\iota^{P}) \\ \dot{\rho}^{R}&=f_{\rho^{R}}(\iota^{P},\rho^{R}) \\ \dot{\rho}^{P}&=f_{\rho^{P}}(\rho^{R},\rho^{P}) \\ \dot{o}^{R}&=f_{o^{R}}(\iota^{P},\rho^{P},o^{R}) \\ \dot{o}^{P}&=f_{o^{P}}(o^{R},o^{P})\end{split} \tag{2.1}\] The input node represents the mRNA concentration \(\iota^{R}\) and the output node represents the protein concentration \(o^{P}\). The input parameter \(\mathcal{I}\) appears explicitly only in the equation of the input node. \(\Diamond\) In general, the special bipartite structure of the PRN imposes restrictions on the functional form of the admissible vector fields. For each gene \(\rho\) there is a pair of of PRN nodes \(\rho^{R}\), \(\rho^{P}\) that yields a pair of 1-dimensional state variables and corresponding differential equations \[\begin{split}\dot{\rho}^{R}&=f_{\rho^{R}}(\rho^{R},\rho^{P},\tau_{1}^{P},\ldots,\tau_{k}^{P})\\ \dot{\rho}^{P}&=f_{\rho^{P}}(\rho^{R},\rho^{P})\end{split} \tag{2.2}\] Here, \(f_{\rho^{R}}\) and \(f_{\rho^{P}}\) are smooth functions. The variables \(\tau_{1}^{P},\ldots,\tau_{k}^{P}\) are the transcription factors (TFs), that is, the corresponding protein concentrations of the genes \(\tau_{1},\ldots,\tau_{k}\) that regulate gene \(\rho\). They are determined by the GRN arrows \(\tau_{i}\to\rho\) and the corresponding PRN arrows \(\tau_{i}^{P}\to\rho^{R}\). The presence of the variable \(\rho^{P}\) in the function \(f_{\rho^{R}}\) occurs if and only if gene \(\rho\) has a self-coupling in the GRN (see Remark 2.3 below). If \(\rho\) is the input node then the function \(f_{\rho^{R}}\) depends explicitly on the input parameter \(\mathcal{I}\), as well. From now on we will assume only the general form (2.2) for each protein-coding gene, since our classification results depend only on the GRN coupling structure, not on the particular form of the equations (see Remark 2.2). **Remark 2.2** (**Activation and Repression)**.: Very often, GRNs in the literature are drawn with two types of arrows: (i) \(\tau\to\rho\) to indicate that gene \(\tau\) (more specifically, its protein) acts as an _activator_, or _excitatory_ transcription factor, of gene \(\rho\), and (ii) \(\tau\dashp\rho\) to indicate that gene \(\tau\) (more specifically, its protein) acts as a _repressor_, or _inhibitory_ transcription factor, of gene \(\rho\). In terms of the associated differential equations this information is encoded in the \(\tau\)-dependence of the function \(f_{\rho^{R}}\) in (2.2). Typically, the dependence of \(f_{\rho^{R}}\) on \(\tau\) is defined by the so called _gene input function_. Well-known examples of gene input functions are the classical _Michalis-Menten_ and _Hill_ functions [51], and their multi-variate versions [25]. Since we do not specify the functional form of the differential equations, we will not use distinct arrow types in the GRN to indicate activation/repression of genes. However, we do employ distinct arrow types (see [7]) in the PRN to distinguish mRNA to protein (\({}^{R}\longrightarrow{}^{P}\)) and protein to mRNA (\({}^{P}\dashrightarrow{}^{R}\)) couplings. \(\lozenge\) **Remark 2.3** (**Autoregulation)**.: Self-coupling, or autoregulation, is a peculiar feature of GRNs. It means that the gene product acts as a transcription factor of the gene itself. The dynamical interpretation of autoregulation is revealed by the associated PRN. It is a coupling from the protein node to the mRNA node of the _same_ gene (see Figure 2). Moreover, it is clear from (2.1) that the self-coupling representing autoregulation is not the same as'self-interaction'. In fact, all nodes of the PRN are _self-interacting_, in the sense that the right-hand side of each differential equation explicitly depends on the state variable on the left-hand side. The clarification of the dynamical interpretation of autoregulation is another advantage of the PRN formalism. \(\lozenge\) ### Infinitesimal Homeostasis in PRN Input-output networks occur naturally when studying homeostasis in biochemical networks [18, 50]. We recall the setup of an abstract input-output network introduced in [65]. An input-output network \(\mathcal{G}\) is a directed graph consisting of \(n+2\) nodes. There is an _input node_\(\iota\), an _output node_\(o\), and \(n\)_regulatory nodes_\(\rho=(\rho_{1},\ldots,\rho_{n})\). We assume that every node lies on a path from the input node \(\iota\) to the output node \(o\). That is, the network is a _core_ network. An _admissible system_ associated with the input-output network \(\mathcal{G}\) is a parameterized system of ODEs \[\dot{X}=F(X,\mathcal{I}) \tag{2.3}\] where \(X=(x_{\iota},x_{\rho},x_{o})\in\mathbf{R}^{n+2}\) are the node state variables, \(\mathcal{I}\in\mathbf{R}\) is the _external input parameter_, and \(F=(f_{\iota},f_{\rho},f_{o})\) is the associated vector field. Explicitly, (2.3) is the system \[\begin{split}\dot{x}_{\iota}&=f_{\iota}(x_{\iota},x _{\rho},x_{o},\mathcal{I})\\ \dot{x}_{\rho}&=f_{\rho}(x_{\iota},x_{\rho},x_{o})\\ \dot{x}_{o}&=f_{o}(x_{\iota},x_{\rho},x_{o})\end{split} \tag{2.4}\] The compatibility of \(F\) with the network \(\mathcal{G}\) is given by the following conditions: 1. \(f_{j}\) depends on node \(\ell\) only if there is an arrow in the network \(\mathcal{G}\) from \(\ell\to j\). 2. \(f_{\iota}\) is the only vector field component that depends explicitly on \(\mathcal{I}\) and \(f_{\iota,\mathcal{I}}\neq 0\) generically. We write \(f_{i,j}\) to denote the partial derivative of \(f_{i}\) with respect to \(j\) at \((X_{0},\mathcal{I}_{0})\). In order to define the notion of 'infinitesimal homeostasis' in the context of input-output networks, assume that \(X_{0}\) is a linearly stable equilibrium of (2.4) at \(\mathcal{I}=\mathcal{I}_{0}\). Stability of \(X_{0}\) implies that there is a unique stable equilibrium at \(X(\mathcal{I})=\big{(}x_{\iota}(\mathcal{I}),x_{\rho}(\mathcal{I}),x_{o}( \mathcal{I})\big{)}\) as \(\mathcal{I}\) varies on neighborhood of \(\mathcal{I}_{0}\). **Definition 2.4**.: The _input-output_ function of system (2.4), at the family of equilibria \(\big{(}X(\mathcal{I}),\mathcal{I}\big{)}\), is the function \(\mathcal{I}\to o(\mathcal{I})\), that is, the projection of \(X(\mathcal{I})\) onto the coordinate \(o\). We say that the input-output function \(o(\mathcal{I})\) exhibits _infinitesimal homeostasis_ at \(\mathcal{I}_{0}\), if \[o^{\prime}(\mathcal{I}_{0})=0 \tag{2.5}\] where \({}^{\prime}\) indicates differentiation with respect to \(\mathcal{I}\). \(\Diamond\) A straightforward application of Cramer's rule in [65] gives a formula for determining infinitesimal homeostasis. Let \(J\) be the \((n+2)\times(n+2)\) Jacobian matrix of (2.4) at the equilibrium \(X_{0}\) with \(\mathcal{I}=\mathcal{I}_{0}\), \[J=\begin{bmatrix}f_{\iota,\iota}&f_{\iota,\rho}&f_{\iota,o}\\ f_{\rho,\iota}&f_{\rho,\rho}&f_{\rho,o}\\ f_{o,\iota}&f_{o,\rho}&f_{o,o}\end{bmatrix} \tag{2.6}\] The \((n+1)\times(n+1)\)_homeostasis matrix_\(H\) is obtained from the Jacobian matrix \(J\) by eliminating the first row and the last column, that is, \[H=\left[\begin{array}{cc}f_{\rho,\iota}&f_{\rho,\rho}\\ f_{o,\iota}&f_{o,\rho}\end{array}\right] \tag{2.7}\] **Lemma 2.5** ([65, Lemma 1.5]).: _Suppose \(o(\mathcal{I})\) is the input-output function of an input-output network \(\mathcal{G}\). Then \(\mathcal{I}_{0}\) is a point of infinitesimal homeostasis if and only if_ \[\det(H)=0\] _at the equilibrium \(\big{(}X(\mathcal{I}_{0}),\mathcal{I}_{0}\big{)}\)._ **Example 2.1** (**Continued)**.: Consider the 6-node PRN shown in Figure 2b. The input-output function is given by \(\mathcal{I}\mapsto o^{P}(\mathcal{I})\). The homeostasis matrix for the corresponding system of ODEs (2.1) is the \(5\times 5\) matrix obtained from the \(6\times 6\) Jacobian matrix by deleting its first row and its last column, namely, \[H=\left[\begin{array}{cccc}f_{\iota^{P},\iota^{R}}&f_{\iota^{P},\iota^{P}}& 0&0&0\\ 0&f_{\rho^{R},\iota^{P}}&f_{\rho^{R},\rho^{R}}&0&0\\ 0&0&f_{\rho^{P},\rho^{R}}&f_{\rho^{P},\rho^{P}}&0\\ 0&f_{\sigma^{R},\iota^{P}}&0&f_{\sigma^{R},\rho^{P}}&f_{\sigma^{R},\rho^{R}}\\ 0&0&0&0&f_{\sigma^{P},\sigma^{R}}\end{array}\right]\] It follows that \[\det(H)=f_{\iota^{P},\iota^{R}}\left(f_{\rho^{R},\iota^{P}}\,f_{\rho^{P},\rho ^{R}}\,f_{o^{R},\rho^{P}}+f_{\rho^{R},\rho^{R}}\,f_{\rho^{P},\rho^{P}}\,f_{o^ {R},\iota^{P}}\right)f_{o^{P},o^{R}} \tag{2.8}\] In this example the decomposition of \(\det(H)\) into irreducible factors leads to three different 'homeostasis types', two degree 1 factors and one degree 3 factor. The 'homeostasis class' associated with the three irreducible factors in this example is called _structural homeostasis_. Structural homeostasis associated to a degree 1 factor is called _Haldane homeostasis_. \(\Diamond\) Lemma 2.5 reduces the computation of infinitesimal homeostasis to solving \(\det(H)=0\), where \(H\) is the homeostasis matrix in (2.7). Wang et al. [65] use Frobenius-Konig theory to show the existence of two \((n+1)\times(n+1)\) permutation matrices \(P\) and \(Q\) such that \[PHQ=\left[\begin{array}{cccc}B_{1}&*&\cdots&*\\ 0&B_{2}&\cdots&*\\ \vdots&&&\vdots\\ 0&0&\cdots&B_{m}\end{array}\right] \tag{2.9}\] where \(B_{1},\ldots,B_{m}\) are \(m\) unique irreducible square blocks. Hence \[\det(H)=\det(B_{1})\cdots\det(B_{m})\] and no further factorization of \(\det(H)\) is possible. The unique irreducible square blocks \(B_{\eta}\) in (2.9) are called _irreducible blocks_. The irreducible blocks of a homeostasis matrix \(H\) correspond to the possible _homeostasis types_ that can occur in a network with homeostasis matrix \(H\). Furthermore, they can be divided into exactly two _homeostasis classes_: _structural_ and _appendage_ (see Appendix A.1). We say that infinitesimal homeostasis is caused by (or is of type) \(B_{\eta}\), if \[\det(B_{\eta})=0\qquad\text{and}\qquad\det(B_{\xi})\neq 0\quad\text{for all }\xi\neq\eta \tag{2.10}\] Next, we associate a subnetwork \(\mathcal{K}_{\eta}\) of \(\mathcal{G}\) to each irreducible block \(B_{\eta}\), called _homeostasis subnetwork_. The homeostasis subnetworks \(\mathcal{K}_{\eta}\) are completely determined by the irreducible blocks \(B_{\eta}\), and vice-versa. Hence, they can be divided into two classes (structural or appendage) according to the homeostasis class of the corresponding irreducible block. Moreover, the homeostasis subnetworks can be fully characterized in terms of combinatorial properties, so that we can obtain all homeostasis types directly from the network \(\mathcal{G}\). **Example 2.1** (**Continued)**.: The three homeostasis subnetworks of the 6-node PRN shown in Figure 2b can be obtained from the input-output network as explained in Appendix A.2. The two one-dimensional structural blocks \([f_{\iota^{P},\iota^{R}}]\) and \([f_{o^{P},o^{R}}]\) correspond to Haldane subnetworks \(\iota^{R}\to\iota^{P}\) and \(o^{R}\to o^{P}\), respectively. The 3-dimensional structural block \[\begin{bmatrix}f_{\rho^{R},\iota^{P}}&f_{\rho^{R},\rho^{R}}&0\\ 0&f_{\rho^{P},\rho^{R}}&f_{\rho^{P},\rho^{P}}\\ f_{o^{R},\iota^{P}}&0&f_{o^{R},\rho^{P}}\end{bmatrix} \tag{2.11}\] corresponds to the 4-node subnetwork generated by the nodes \(\{\iota^{P},\rho^{R},\rho^{P},o^{R}\}\). \(\lozenge\) Next, we consider the notion of 'homeostasis pattern', namely, the set of nodes that are simultaneously homeostatic whenever the output node is homeostatic. **Definition 2.3**.: Let \(\mathcal{G}\) be an input-output network and suppose that infinitesimal homeostasis occurs in \(\mathcal{G}\) at \(\mathcal{I}_{0}\), that is, \(o^{\prime}(\mathcal{I}_{0})=0\). A _homeostasis pattern_ of \(\mathcal{G}\) is a set of nodes \(\sigma\) (including the output node \(o\)) for which the condition \(\sigma^{\prime}(\mathcal{I}_{0})=0\), holds generically. \(\lozenge\) Homeostasis patterns can be graphically represented by coloring the nodes of \(\mathcal{G}\) that are homeostatic. It turns out that each homeostasis pattern is fully determined by a homeostasis subnetwork and vice-versa. In other words, there is complete correspondence between the homeostasis types, the homeostasis subnetworks and the homeostasis patterns of \(\mathcal{G}\) (see Appendix A.3). **Example 2.1** (**Continued)**.: The three homeostasis subnetworks of the 6-node PRN shown in Figure 2(b) lead to three distinct homeostasis patterns shown in Table 1. Notice that the two Haldane homeostasis subnetworks correspond to the coupling between the mRNA and protein in the _same_ gene. This creates a homeostasis pattern where one gene has one homeostatic PRN-node while the other PRN-node is not. Whereas in the pattern associated to the 3-dimensional structural type both associated PRN-nodes of each gene are either simultaneously homeostatic or not. Furthermore, it can be shown that the 3-dimensional structural type is the cause of homeostasis in the example of [1]. This is most easily seen by comparing the homeostasis patterns: it is shown in [1, Fig. 3] that only the PRN-nodes associated to the output node of the GRN are homeostatic, which corresponds exactly to the homeostasis pattern \(\{o^{R},o^{P}\}\) in the PRN in Figure 2b. \(\Diamond\)\(\Diamond\) ### Infinitesimal Homeostasis in GRN As we have shown using the GRN of Figure 1 (or its abstract version in Figure 2a the PRN construction allows us to apply the theory of [65] to obtain all the possible homeostasis types and corresponding homeostasis patterns on the PRN. Now it remains to explain how the results obtained for the PRN can be 'lifted back' to the GRN. In other words, we first need to define what it means for a node in a GRN to be homeostatic. Then, we use the general theory to determine in a purely combinatorial fashion, the 'formal homeostasis subnetworks' of the GRN (see Appendix A.2). Lastly, we show how the homeostasis subnetworks of the GRN and the associated PRN relate to each other. Since only the homeostasis subnetworks of the PRN have a 'dynamical' interpretation of infinitesimal homeostasis, we use the relation obtained before reinterpreting infinitesimal homeostasis on the GRN level. \begin{table} \begin{tabular}{c c c} \hline \hline Homeostasis Type & Homeostasis Subnetwork & Homeostasis Pattern \\ \hline \([f_{\iota^{P},\iota^{R}}]\) & \(\iota^{R}\rightarrow\iota^{P}\) & \(\{\iota^{P},\rho^{R},\rho^{P},o^{R},o^{P}\}\) \\ \([f_{o^{P},o^{R}}]\) & \(o^{R}\to o^{P}\) & \(\{o^{P}\}\) \\ 3D matrix (2.11) & \(\left\langle\iota^{P},\rho^{R},\rho^{P},o^{R}\right\rangle\) & \(\{o^{R},o^{P}\}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Infinitesimal homeostasis and corresponding homeostasis patterns of the 6-node PRN shown in Figure 2b. In the second column the notation \(\left\langle\,\cdot\,\right\rangle\) stands for the ‘subnetwork generated’. In the last column we list the homeostatic nodes. In order to explain this relation we will consider another simple Example 2.3, in addition to Example 2.1. Figure 3a shows a 3-node GRN called _feedback inhibition_, which plays an important role in the GRN of _E. coli_ and Figure 3b is the associated PRN. Feedback inhibition appears twice in the regulatory cascade of carbohydrate catabolism of _E. coli_[39, Figs. 1 and 2(a)], with \((\iota,\tau,o)=(\mathrm{IHF},\mathrm{CRP},\mathrm{FIS})\) and \((\iota,\tau,o)=(\mathrm{ARC-A},\mathrm{HNS},\mathrm{GAD-X})\). **Example 2.3**.: Consider the 3-node feedback inhibition GRN, Figure 3a, and the associated 6-node PRN in Figure 3b. The total state is given by a vector \((\iota^{R},\iota^{P},\rho^{R},\rho^{P},o^{R},o^{P})\in\mathbf{R}^{6}\) and general admissible system of ODEs is \[\dot{\iota}^{R} =f_{\iota^{R}}(\iota^{R},\mathcal{I})\] \[\dot{\iota}^{P} =f_{\iota^{P}}(\iota^{R},\iota^{P})\] \[\dot{\tau}^{R} =f_{\tau^{R}}(\tau^{R},o^{P})\] \[\dot{\tau}^{P} =f_{\tau^{P}}(\tau^{R},\tau^{P})\] \[\dot{o}^{R} =f_{o^{R}}(\iota^{P},\tau^{P},o^{R})\] \[\dot{o}^{P} =f_{o^{P}}(o^{R},o^{P})\] The homeostasis matrix is \[H=\begin{bmatrix}f_{\iota^{P},\iota^{R}}&f_{\iota^{P},\iota^{P}}&0&0&0\\ 0&0&f_{\tau^{R},\tau^{R}}&0&0\\ 0&0&f_{\tau^{P},\tau^{R}}&f_{\tau^{P},\tau^{P}}&0\\ 0&f_{o^{R},\iota^{P}}&0&f_{o^{R},\tau^{P}}&f_{o^{R},o^{R}}\\ 0&0&0&0&f_{o^{P},o^{R}}\end{bmatrix}\] and the homeostasis determinant is \[\det(H)=f_{\tau^{R},\tau^{R}}f_{\tau^{P},\tau^{P}}f_{o^{R},\iota^{P}}f_{\iota^ {P},\iota^{R}}f_{o^{P},o^{R}} \tag{2.12}\] Figure 3: **Feedback inhibition.** (a) The 3-gene input-output GRN. Triangles designate genes and dashed arrows designate gene coupling. (b) The corresponding 6-node input-output PRN. Circles designate mRNA concentrations and squares designate protein concentrations. Solid lines \({}^{R}\longrightarrow{}^{P}\) stand for coupling inside a single gene and dashed lines \({}^{P}\dashrightarrow{}^{R}\) for coupling between genes, that is, the couplings inherited from the GRN. The homeostasis determinant is a completely reducible polynomial of degree five. Thus the PRN has five degree 1 homeostasis types. The associated homeostasis subnetworks and the homeostasis patterns are shown in Table 2. For comparison, let us determine the homeostasis subnetworks and homeostasis patterns obtained by working directly with the GRN. Recall that each homeostasis subnetwork corresponds to a 'homeostasis mechanism' that can cause homeostasis and they can be divided into two classes: (i) _structural class_, corresponds to 'generalized feedforward mechanisms' and (ii) _appendage class_ corresponds to 'generalized feedback mechanisms'. Let us start with Example 2.1, Figure 1(a). First, observe that the self-coupling of the input node \(\iota\) does not affect the construction of homeostasis subnetworks (we simply keep the arrow during the procedure). Then, since \(\iota\) and \(o\) are the only super-simple nodes and there are no appendage nodes, it follows that there is only one homeostasis subnetwork of structural class, see Table 3. Comparing with Table 1 we see that the associated PRN, Figure 1(b), has three homeostasis subnetworks. The two first homeostasis subnetworks of the PRN are Haldane subnetworks. Moreover, in both cases the two PRN-nodes of the Haldane subnetwork come from the same GRN-node. This observation suggests that these two Haldane subnetworks do not have a 'counterpart' in the GRN. Now we consider Example 2.3, Figure 2(a). Here, we have that the only super simple nodes are \(\iota\) and \(o\), but the regulatory node \(\tau\) is appendage. Then, there are two homeostasis \begin{table} \begin{tabular}{c c c} \hline \hline Homeostasis Type & Homeostasis Subnetwork & Homeostasis Pattern \\ \hline \([f_{\tau^{R},\tau^{R}}]\) & \(\tau^{R}\) & \(\{o^{R},o^{P}\}\) \\ \([f_{\tau^{P},\tau^{P}}]\) & \(\tau^{P}\) & \(\{\tau^{R},o^{R},o^{P}\}\) \\ \([f_{o^{R},\iota^{P}}]\) & \(\iota^{P}\to o^{R}\) & \(\{\tau^{R},\tau^{P},o^{R},o^{P}\}\) \\ \([f_{\iota^{P},\iota^{R}}]\) & \(\iota^{R}\rightarrow\iota^{P}\) & \(\{\iota^{P},\tau^{R},\tau^{P},o^{R},o^{P}\}\) \\ \([f_{o^{P},o^{R}}]\) & \(o^{R}\to o^{P}\) & \(\{\tau^{R},\tau^{P},o^{P}\}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Infinitesimal homeostasis and corresponding homeostasis patterns of the 6-node PRN shown in Figure 2(b). In the last column we list the homeostatic nodes. \begin{table} \begin{tabular}{c c c} \hline \hline Homeostasis Class & Homeostasis Subnetwork & Homeostasis Pattern \\ \hline Structural & \(\left<\iota,\rho,o\right>\) & \(\{o\}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Infinitesimal homeostasis and corresponding homeostasis patterns of the GRN shown in Figure 1(a). In the second column the notation \(\left<\,\cdot\,\right>\) stands for the ‘subnetwork generated’. In the last column we list the homeostatic nodes. subnetworks, one of structural class and the other of appendage class, see Table 4. Comparing with Table 2 we see that the associated PRN, Figure 2(b), has five homeostasis subnetworks (two appendage and three structural). Among the three structural subnetworks of the PRN, two of them (the last two cases in Table 2) are such that the two PRN-nodes come from the same GRN-node. The third case in Table 2) is different since the PRN-nodes come from distinct GRN-nodes. The two appendage subnetworks of the PRN are given by a single appendage node (i.e., a degree 1 irreducible factor). Appendage homeostasis associated to degree 1 irreducible factors is called _null-degradation homeostasis_. These examples suggest the following observations. 1. The simplest example of structural homeostasis is _Haldane homeostasis_. PRNs can have two types of Haldane homeostasis: (i) one corresponds to an arrow connecting an mRNA node to a protein node (e.g., the first two cases in Example 1 and the two last cases in Table 2 and (ii) the other corresponds to an arrow connecting a protein node to an mRNA node (e.g., the third case in Table 2). 2. The simplest example of appendage homeostasis is _null-degradation homeostasis_. In general, there are two different types of null-degradation in PRNs: (i) one occurs in a protein node (e.g., the second case in Table 2) and (ii) the other occurs in an mRNA node (e.g., the first case in Table 2). **Definition 2.4**.: Let \(\mathcal{G}\) be a GRN with associated PRN \(\mathcal{R}\). Let \(\tau\in\mathcal{G}\) be a node with \(\tau^{R}\in\mathcal{R}\) be the mRNA node and \(\tau^{P}\in\mathcal{R}\) be the protein node. 1. An appendage node \(\tau\in\mathcal{G}\) is a _single appendage node_ if \(\{\tau\}\) is an appendage subnetwork of \(\mathcal{G}\) with no self-coupling. 2. If \(\{\tau^{R}\}\) is an appendage subnetwork of \(\mathcal{R}\) then it is called _\(R\)-null-degradation_. 3. If \(\{\tau^{P}\}\) is an appendage subnetwork of \(\mathcal{R}\) then it is called _\(P\)-null-degradation_. 4. If \(\langle\tau^{R},\tau^{P}\rangle\) is a structural subnetwork of \(\mathcal{R}\) then it is called _\(\mathcal{R}\)-Haldane_. \(\lozenge\) Now we state the first main result of the paper, regarding the relation between the homeostasis subnetworks of a GRN and its associated PRN (see Appendix A.2 for the terminology and Appendix B.2 for the proof.) \begin{table} \begin{tabular}{c c c} \hline \hline Homeostasis Class & Homeostasis Subnetwork & Homeostasis Pattern \\ \hline Appendage & \(\tau\) & \(\{o\}\) \\ Structural & \(\iota\to o\) & \(\{\tau,o\}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Infinitesimal homeostasis and corresponding homeostasis patterns of the GRN shown in Figure 2(a). In the last column we list the homeostatic nodes. **Theorem 2.5**.: _The homeostasis subnetworks of a GRN \(\mathcal{G}\) and its associated PRN \(\mathcal{R}\) correspond uniquely to each other, except in the following cases:_ 1. _The_ \(\mathcal{R}\)_-Haldane subnetworks of_ \(\mathcal{R}\) _correspond uniquely to the super-simple nodes of_ \(\mathcal{G}\) _(super-simple nodes are not homeostasis subnetworks of_ \(\mathcal{G}\)_)._ 2. _For every single appendage node_ \(\tau\in\mathcal{G}\) _the appendage subnetwork_ \(\{\tau\}\) _of_ \(\mathcal{G}\) _yields 2 appendage subnetworks_ \(\{\tau^{R}\}\) _(_\(R\)_-null-degradation) and_ \(\{\tau^{P}\}\) _(_\(P\)_-null-degradation) of_ \(\mathcal{R}\)_._ Proof.: It follows from Theorem B.6 from Appendix B.2. Now we consider the following question: _Can we combinatorially determine the homeostasis patterns of a GRN and its associated PRN?_ In principle this can be done since the classification of homeostasis patterns is purely combinatorial. We start by introducing the notion of a homeostasis pattern in a GRN that is derived from the associated PRN. **Definition 2.6**.: Consider a GRN \(\mathcal{G}\) and its associated PRN \(\mathcal{R}\). Suppose that infinitesimal homeostasis occurs in the PRN at \(\mathcal{I}_{0}\). A node \(\rho\in\mathcal{G}\) is said to be _GRN-homeostatic_ if both associated PRN-nodes \(\rho^{R}\) and \(\rho^{P}\) are simultaneously homeostatic at \(\mathcal{I}_{0}\). A _GRN-generating homeostasis pattern_ is a homeostasis pattern \(\mathcal{P}\) on \(\mathcal{R}\) such that, for every PRN-node in \(\mathcal{P}\) corresponds to GRN-homeostatic node. In a GRN-generating homeostasis pattern all PRN-nodes appear in mRNA-protein pairs. That is, the set of GRN-homeostatic nodes match perfectly the PRN homeostasis pattern. Let us consider again our two running examples. The results for the feedforward loop GRN and its associated PRN, Figure 2, are shown in Table 5 and for the feedback inhibition GRN and the associated PRN, Figure 3, are shown in Table 6. From these examples we draw a couple of remarks: 1. \(\mathcal{R}\)-Haldane subnetworks do not produce GRN-generating homeostasis patterns. There is always a homeostatic protein node whose pairing mRNA node is not homeostatic. \begin{table} \begin{tabular}{c c c c c} \hline \hline PRN Subnet & GRN Subnet & PRN Pattern & GRN-H Nodes & GRN Pattern \\ \hline \(\iota^{R}\rightarrow\iota^{P}\) & \(\iota^{*}\) & \(\{\iota^{P},\rho^{R},\rho^{P},o^{R},o^{P}\}\) & \(\{\rho,o\}\) & \(-\) \\ \(o^{R}\to o^{P}\) & \(o^{*}\) & \(\{o^{P}\}\) & \(-\) & \(-\) \\ \(\left<\iota^{P},\rho^{R},\rho^{P},o^{R}\right>\) & \(\left<\iota,\rho,o\right>\) & \(\{o^{R},o^{P}\}\) & \(\{o\}\) & \(\{o\}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Infinitesimal homeostasis and homeostasis patterns of the feedforward loop network GRN and its associated PRN from Figure 2. The \({}^{*}\) denotes the GRN super-simple node corresponding to the \(\mathcal{R}\)-Haldane subnetwork. See Appendix B.3 for the computations. 2. \(P\)-null-degradation subnetworks do not produce GRN-generating homeostasis patterns. There is always a homeostatic mRNA node whose pairing protein node is not homeostatic. \(\lozenge\) These observations hold the essence of the second main result of the paper, regarding the relation between the homeostasis patterns of a GRN and its associated PRN. It is useful to introduce the following terminology. Let \(\mathcal{K}\) be a homeostasis subnetwork of the GRN \(\mathcal{G}\). Then we define a map \(\mathcal{K}\to\mathcal{K}^{R}\) from the set of homeostasis subnetworks of \(\mathcal{G}\) to the set of homeostasis subnetworks of the associated \(\mathcal{R}\) as follows. If \(\mathcal{K}\neq\{\tau\}\), where \(\tau\) is a single appendage node, the \(\mathcal{K}^{R}\) is the unique subnetwork given by Theorem 2.5. If \(\mathcal{K}=\{\tau\}\), where \(\tau\) is a single appendage node, then \(\mathcal{K}^{R}=\{\tau^{R}\}\). With this definition, the map \(\mathcal{K}\to\mathcal{K}^{R}\) is injective and the complement of its image in the set of homeostasis subnetworks of the PRN is exactly the set of \(\mathcal{R}\)-Haldane and \(P\)-null-degradation subnetworks. **Theorem 2.7**.: _Let \(\mathcal{G}\) be a GRN and \(\mathcal{R}\) its associated PRN. Then the homeostasis patterns on \(\mathcal{G}\) correspond exactly to the GRN-generating homeostasis patterns on \(\mathcal{R}\). The homeostasis patterns on \(\mathcal{R}\) associated to the \(\mathcal{R}\)-Haldane and the \(P\)-null-degradation subnetworks do not correspond to homeostasis patterns on \(\mathcal{G}\)._ Proof.: This follows from Theorem C.6 and Lemma C.7 from Appendix C.2. **Remark 2.8** (**Non-coding genes)**.: As mentioned before, in eukaryotic cells there are several regulatory mechanisms modulating transcription and translation. Almost all regulatory modulation is performed by non-coding genes, i.e., genes that are transcribed into RNA, but the RNA is not translated into protein. The PRN formalism can be extended to include non-coding genes thanks to the following observation: the gene product of a non-coding gene is an mRNA whose regulatory activity is performed by direct interaction with other mRNAs [33, 54]. Thus, unlike a protein-coding gene, a non-coding gene \(\nu\) yields only one scalar state \begin{table} \begin{tabular}{c c c c c} \hline \hline PRN Subnet & GRN Subnet & PRN Pattern & GRN-H Nodes & GRN Pattern \\ \hline \(\tau^{R}\) & \(\tau\) & \(\{o^{R},o^{P}\}\) & \(\{o\}\) & \(\{o\}\) \\ \(\tau^{P}\) & \(\tau\) & \(\{\tau^{R},o^{R},o^{P}\}\) & \(\{o\}\) & \(-\) \\ \(\iota^{P}\to o^{R}\) & \(\iota\to o\) & \(\{\tau^{R},\tau^{P},o^{R},o^{P}\}\) & \(\{\tau,o\}\) & \(\{\tau,o\}\) \\ \(\iota^{R}\to\iota^{P}\) & \(\iota^{*}\) & \(\{\iota^{P},\tau^{R},\tau^{P},o^{R},o^{P}\}\) & \(\{\tau,o\}\) & \(-\) \\ \(o^{R}\to o^{P}\) & \(o^{*}\) & \(\{\tau^{R},\tau^{P},o^{P}\}\) & \(\{\tau\}\) & \(-\) \\ \hline \hline \end{tabular} \end{table} Table 6: Infinitesimal homeostasis and homeostasis patterns of the feedforward loop network GRN and its associated PRN from Figure 3. The \({}^{*}\) denotes the GRN super-simple node corresponding to the \(\mathcal{R}\)-Haldane subnetwork. See Appendix B.3 for the computations. variable and one differential equation \[\dot{\nu}=f_{\nu}(\nu,\underbrace{\tau_{1}^{P},\ldots,\tau_{k}^{P}}_{\text{ TFs}}, \underbrace{\rho_{1}^{R},\ldots,\rho_{\ell}^{R}}_{\text{mRNAs}})\] Here, \(f_{\nu}\) is a smooth function. The variables \(\tau_{1}^{P},\ldots,\tau_{k}^{P}\) are the protein concentrations associated to the transcription factors (TFs) that regulate gene \(\nu\). The variables \(\rho_{1}^{R},\ldots,\rho_{\ell}^{R}\) are the mRNA concentrations associated to the protein-coding genes that interact with \(\nu\). Finally, for each \(\rho_{j}^{R}\) above, the corresponding mRNA equation must now depend on \(\nu\): \[\dot{\rho}_{j}^{R}=f_{\rho_{j}^{R}}(\rho_{j}^{R},\ldots,\nu)\] The consequence for the PRN diagram is that a non-coding gene: (i) gives rise to a single PRN-node \(\nu=\nu^{R}\), instead of two, (ii) receives arrows from protein nodes \(\tau_{i}^{P}\rightarrow\nu^{R}\), (iii) has a bidirectional connection with the mRNA nodes that it interacts with \(\rho_{j}^{R}\xrightarrow{\rightarrow}\nu\). With these new requirements the PRN is no longer a bipartite digraph, now it is a _tripartite digraph_. \(\Diamond\) ## 3 Conclusion and Outlook In this paper, we present a framework for the analysis and classification of homeostasis in gene regulatory networks. We accomplish this by combining a formalism for the study of gene regulatory networks (GRN), called _protein-mRNA networks_ (PRN), with the theories of Wang et al. [65] and Duncan et al. [9], for the classification of homeostasis types and homeostasis patterns in input-output networks, respectively. Given an arbitrary input-output GRN (consisting of protein-coding genes) \(\mathcal{G}\), we associate an input-output PRN \(\mathcal{R}\), which enables us to apply the results of [65, 9]. By comparing the results for the PRN with a suitable application of the combinatorial piece of the theory to the GRN, we obtain a refinement of the classification of homeostasis types and homeostasis patterns of \(\mathcal{R}\). The final result is a complete characterization of homeostasis types and homeostasis patterns on the PRN that have correspondent on the GRN. An interesting byproduct is the discovery of homeostasis types and homeostasis patterns on the PRN without GRN counterpart. The 'new' PRN homeostasis types are degree one homeostasis types, namely, they are related to one dimensional irreducible factors of the homeostasis determinant. They are: (i) \(\mathcal{R}\)_-Haldane_, that occurs when the linearized coupling between the mRNA and protein of the same gene changes from excitation to inhibition as the input parameter varies, and (ii) \(P\)_-null-degradation_, that occurs when the linearized self-interaction of a protein changes from degradation to production as the input parameter varies. Although the existence of \(\mathcal{R}\)-Haldane and \(P\)-null-degradation is mathematically established, their occurrence in biological models is unlikely. \(\mathcal{R}\)-Haldane homeostasis is related to the _synthesis rate_ of the protein from the mRNA template and \(P\)-null-degradation is related to the _degradation rate_ of the protein. Both these rates are _constant_ (the first is positive and the second is negative) in specific model equations for GRN modeling [35, 47]. The main novelties of our approach include: (i) the _simultaneous_ use of two networks, the GRN and the PRN, in the analysis of gene expression homeostasis, and (ii) the _lack of assumptions_ about the functional form of the differential equations. The protein-mRNA formalism is well-known in the literature [35, 47]. We take this formalism one step further, by formally introducing the protein-mRNA network (PRN) and using it in conjunction with the GRN to extract a complete view of gene expression homeostasis. See [24] for an approach similar to the construction of the PRN. Generally speaking, all model equations for gene expression found in the literature have an explicit functional form [1, 23, 35, 41, 42, 47]. Here, we assume only the general form 2.2, forced by the admissibility of vector fields with respect to the PRN. Hence, our classification results apply to virtually any model equation for gene expression. Even more importantly, this leaves open the possibility to use 'higher-order' terms to model more complicated interactions [3]. In the terminology of [20] our results are called _model independent_. This means that the classification results obtained here provide a complete list of possible behaviors, with respect to homeostasis, that is _independent_ of the model equations - the list depends only on the topology of the network. Which of those behaviors will be observed in a particular realization of the dynamics (e.g., a model equation) _depends_ on the specific form of the dynamics. There are several relevant ways to generalize and extend the theory of homeostasis in gene regulatory networks. _Multiple inputs._ The defining condition for occurrence of homeostasis (2.10) is generic for single-variable input-output functions. The notion of infinitesimal homeostasis can be naturally generalized to multi-variable input-output functions and the theory of homeostasis types can be extended to this setting [19, 36, 37]. In this case, we are lead to consider _higher codimension homeostasis_[10]. _Dynamics and bifurcations in PRN._ Recall that the starting point of the analysis of homeostasis is to assume that there is a family of asymptotically stable equilibria and define the input-output function from it. One can take a step back and ask about the existence and uniqueness of equilibria in a given GRN, that is, how the dynamics and bifurcations of a GRN is constrained by the network structure. In [38] the authors propose an approach based on PRN, where they analyze the general models, given by the admissible systems, as considered here, and then to specialize the results to specific models. _Additional biological mechanisms._ In order to keep the exposition as simple as possible, we have used a'minimal' definition of PRN, consisting only of protein-coding genes and accounting only for the transcription and translation processes. Nevertheless, we have hinted that is not difficult to extend the framework to include other biological mechanisms, for instance, non-coding genes (see Remark 2.8. Furthermore, there are several possible biological mechanisms that can be included in the modeling of gene expression. Here is a small sample of relevant biological mechanisms: (a) spatial localization of the transcription and translation processes [6, 14], (b) transcriptional time delays [43], (c) multi-site phosphorylation / dephosphorylation of transcription factors [22], (d) DNA-level transcriptional regulation [8]. Relation to other approaches.As we mentioned earlier there is another approach to the modeling of GRN, based on the QSSA [56]. It is called _simplified GRN models_ in [47], _protein regulatory networks_ in [64] and _protein-only models_ in [11]. Such models can be used only with protein-coding GRN, and provide a perfect match between the nodes of the GRN and the equations for protein concentrations. A substantial difference between mRNA-protein models and protein-only models is that their _generic dynamics_ are not the same [11]; for instance, the former can have oscillatory solutions while the latter can not [38]. Finally, as discussed before, the PRN formalism easily allows for extensions and generalizations, by 'unfolding' each gene node into two or more PRN-nodes representing the concentrations of different intermediate molecules. On the other hand, the simplified GRN formalism reverses this process, by lumping all the intermediate molecular products onto a single (protein) concentration. Acknowledgments.The research of FA was supported by Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP) grants 2019/12247-7 and 2019/21181-0. ## Appendix A Homeostasis in Input-Output Networks In this section we recall basic terminology and results on infinitesimal homeostasis in input-output networks from [65, 9]. ### Core Networks and Homeostasis Classes Wang et al. [65] show that the determination of infinitesimal homeostasis in an input-output network reduces to the study of an associated 'core subnetwork'. A _core network_ is an input-output network where every node is both upstream from the output node and downstream from the input node. Every input-output network \(\mathcal{G}\) has a _core subnetwork_\(\mathcal{G}_{c}\) whose nodes are the nodes in \(\mathcal{G}\) that are both upstream from the output node and downstream from the input node and whose arrows are the arrows in \(\mathcal{G}\) whose head and tail nodes are both nodes in \(\mathcal{G}_{c}\). Throughout this paper we assume that input-output networks are core networks. Suppose that \(B_{\eta}\) is an irreducible component in the decomposition (2.9), where \(B_{\eta}\) is a \(k\times k\) diagonal block, that is, \(B_{\eta}\) has degree \(k\). Since the entries of \(B_{\eta}\) are entries of \(H\), these entries have the form \(f_{\rho,\tau}\); that is, the entries are either \(0\) (if \(\tau\to\rho\) is not an arrow in \(\mathcal{G}\)), self-coupling (if \(\tau=\rho\)), or coupling (if \(\tau\to\rho\) is an arrow in \(\mathcal{G}\)). **Definition A.1**.: Let \(B_{\eta}\) be an irreducible component in the decomposition (2.9). 1. The homeostasis class of type \(B_{\eta}\) of degree \(k\) is _appendage_ if \(B_{\eta}\) has \(k\) self-couplings and _structural_ if \(B_{\eta}\) has \(k-1\) self-couplings. 2. The subnetwork \(\mathcal{K}_{\eta}\) of \(\mathcal{G}\) associated with the homeostasis block \(B_{\eta}\) is defined as follows. The nodes in \(\mathcal{K}_{\eta}\) are the union of nodes \(p\) and \(q\) where \(f(p,x_{q})\) is a nonzero entry in \(B_{\eta}\) and the arrows of \(\mathcal{K}_{\eta}\) are the union of arrows \(q\to p\) where \(p\neq q\). \(\lozenge\) The following theorem shows that all irreducible components belong to either appendage or structural class. When \(B_{\eta}\) is appendage, the subnetwork \(\mathcal{K}_{\eta}\) has \(k\) nodes and \(B_{\eta}\) can be put in a _standard Jacobian form_. Also, when \(B_{\eta}\) is structural, the subnetwork \(\mathcal{K}_{\eta}\) has \(k+1\) nodes and \(B_{\eta}\) can be put in a _standard homeostasis form_. **Theorem A.2** ([65]).: _Let \(H\) be an \((n+1)\times(n+1)\) homeostasis matrix and let \(B_{\eta}\) be a \(k\times k\) irreducible component in the decomposition (2.9) with \(k\geq 1\). Then \(B_{\eta}\) has either \(k-1\) or \(k\) self-coupling. Furthermore,_ 1. _If_ \(B_{\eta}\) _has_ \(k-1\) _self-coupling entries, then_ \(B_{\eta}\) _has the form:_ \[\left[\begin{array}{cccc}f_{\rho_{1},\rho_{1}}&\cdots&f_{\rho_{1},\rho_{k-1 }}&f_{\rho_{1},l}\\ \vdots&\ddots&\vdots&\vdots\\ f_{\rho_{k-1},\rho_{1}}&\cdots&f_{\rho_{k-1},\rho_{k-1}}&f_{\rho_{k-1},l}\\ f_{j,\rho_{1}}&\cdots&f_{j,\rho_{k-1}}&f_{j,l}\end{array}\right]\] 2. _If_ \(B_{\eta}\) _has_ \(k\) _self-coupling entries, then_ \(B_{\eta}\) _has the form:_ \[\left[\begin{array}{ccc}f_{\rho_{1},\rho_{1}}&\cdots&f_{\rho_{1},\rho_{k}}\\ \vdots&\ddots&\vdots\\ f_{\rho_{k},\rho_{1}}&\cdots&f_{\rho_{k},\rho_{k}}\end{array}\right]\] ### Combinatorial Characterization of Homeostasis In order to classify these homeostasis subnetworks, we recall some combinatorial properties on input-output networks. **Definition A.3**.: Let \(\mathcal{G}\) be a core input-output network. 1. A directed path connecting node \(\rho\) to node \(\tau\) is called a _simple path_ if it visits each node on the path at most once. Further, an _\(\iota o\)-simple path_ is a simple path from the input node \(\iota\) to the output node \(o\). 2. A node in \(\mathcal{G}\) is _simple_ if the node lies on an \(\iota o\)-simple path, and _appendage_ if the node is not simple. Further, a simple node is called a _super-simple_ node if it lies on every \(\iota o\)-simple path in \(\mathcal{G}\). 3. A simple path from node \(\rho\) to node \(\tau\) is an _appendage path_ if every node on this path, except perhaps for \(\rho\) and \(\tau\), is an appendage node. \(\Diamond\) **Definition A.4**.: Let \(\mathcal{G}\) be a core input-output network. 1. The _appendage subnetwork_\(\mathcal{A}_{\mathcal{G}}\) of \(\mathcal{G}\) is the subnetwork consisting of all appendage nodes and all arrows in \(\mathcal{G}\) connecting appendage nodes. 2. The _complementary subnetwork_ of an \(\iota o\)-simple path \(S\) is the subnetwork \(\mathcal{C}_{S}\) consisting of all nodes not on \(S\) and all arrows in \(\mathcal{G}\) connecting those nodes. 3. Nodes \(\rho_{i},\rho_{j}\) in \(\mathcal{A}_{\mathcal{G}}\) are _path equivalent_ if there exists paths in \(\mathcal{A}_{\mathcal{G}}\) from \(\rho_{i}\) to \(\rho_{j}\) and from \(\rho_{j}\) to \(\rho_{i}\). An _appendage path component_ (or an appendage strongly connected component) is a path equivalence class in \(\mathcal{A}_{\mathcal{G}}\). 4. Let \(\mathcal{A}\subset\mathcal{A}_{\mathcal{G}}\) be an appendage path component. We say that \(\mathcal{A}\) satisfies the _no cycle condition_ if for every \(\iota o\)-simple path \(S\), nodes in \(\mathcal{A}\) do not form cycles with \(\mathcal{C}_{S}\setminus\mathcal{A}\). \(\Diamond\) Nodes in the appendage subnetwork \(\mathcal{A}_{\mathcal{G}}\) can be written uniquely as the disjoint union \[\mathcal{A}_{\mathcal{G}}=(\mathcal{A}_{1}\;\dot{\cup}\;\cdots\;\dot{\cup} \mathcal{A}_{s})\;\;\dot{\cup}\;\;(\mathcal{B}_{1}\;\dot{\cup}\;\cdots\;\dot{ \cup}\mathcal{B}_{t})\] (A.1) where each \(\mathcal{A}_{i}\) is an appendage path component that satisfies the no cycle condition and each \(\mathcal{B}_{i}\) is an appendage path component that violates the no cycle condition. Moreover, each \(\mathcal{A}_{i}\) (resp. \(\mathcal{B}_{i}\)) can be viewed as a subnetwork of \(\mathcal{A}_{\mathcal{G}}\) by including the arrows in \(\mathcal{A}_{\mathcal{G}}\) that connect nodes in \(\mathcal{A}_{i}\) (resp. \(\mathcal{B}_{i}\)). We call a component \(\mathcal{A}_{i}\) a _no cycle appendage path component_, and a component \(\mathcal{B}_{i}\) a _cycle appendage path component_. **Definition A.5**.: Let \(\rho_{1},\rho_{2}\) be adjacent super-simple nodes of \(\mathcal{G}\). 1. We say a simple node \(\rho\) is _between_\(\rho_{1}\) and \(\rho_{2}\) if there exists an \(\iota\)\(o\)-simple path that includes \(\rho_{1}\) to \(\rho\) to \(\rho_{2}\) in that order. 2. The _super-simple subnetwork_\(\mathcal{L}^{\prime}(\rho_{1},\rho_{2})\) is the subnetwork of \(\mathcal{G}\) whose nodes are simple nodes between \(\rho_{1}\) and \(\rho_{2}\) and whose arrows are arrows of \(\mathcal{G}\) connecting nodes in \(\mathcal{L}(\rho_{1},\rho_{2})\). 3. The _structural subnetwork_\(\mathcal{L}(\rho_{1},\rho_{2})\) is the subnetwork of \(\mathcal{G}\) generated by \(\mathcal{L}^{\prime}(\rho_{1},\rho_{2})\cup\mathcal{B}\), where \(\mathcal{B}\) consists of all cycle appendage path components that connect to \(\mathcal{L}^{\prime}(\rho_{1},\rho_{2})\). \(\diamondsuit\) **Theorem A.6** ([65]).: _Let \(\mathcal{G}\) be a core input-output network._ 1. _Suppose_ \(\mathcal{A}_{\eta}\subset\mathcal{A}_{\mathcal{G}}\) _is a no cycle appendage path component, then_ \(\mathcal{A}_{\eta}\) _forms_ an appendage homeostasis subnetwork _of_ \(\mathcal{G}\) _and it is associated with an appendage homeostasis block._ 2. _Let_ \(\rho_{i},\rho_{i+1}\) _be adjacent super-simple nodes in_ \(\mathcal{G}\)_. Then_ \(\mathcal{L}(\rho_{i},\rho_{i+1})\) _forms_ a structural homeostasis subnetwork _of_ \(\mathcal{G}\) _and it is associated with a structural homeostasis block._ _Conversely, the set of homeostasis subnetworks of \(\mathcal{G}\) is exactly the collection of subnetworks described in_ (a) _or_ (b) _above._ ### Homeostasis Inducing and Homeostasis Patterns Suppose that infinitesimal homeostasis occurs in an input-output network, that is, \(o^{\prime}(\mathcal{I}_{0})=0\) for some input value \(\mathcal{I}_{0}\). In this case, we say that node \(o\) is _homeostatic at \(\mathcal{I}_{0}\)_. **Definition A.7**.: A _homeostasis pattern_ is the collection of all nodes that, in addition to the output node, are simultaneously forced to be homeostatic at \(\mathcal{I}_{0}\). \(\diamondsuit\) Since the output node \(o\) is homeostatic when \(\det(H)=0\) at \((X_{0},\mathcal{I}_{0})\), thus at least one irreducible factor of \(\det(H)\) vanishes. We call a homeostasis subnetwork \(\mathcal{K}_{\eta}\)_homeostasis inducing_ if \(h_{\mathcal{K}_{\eta}}\equiv\det(B_{\eta})=0\) at \((X_{0},\mathcal{I}_{0})\). We let \(\mathcal{K}\Rightarrow\nu\) denote if node \(\nu\in\mathcal{G}\) is generically homeostatic whenever \(\mathcal{K}\) is homeostasis inducing (e.g. every homeostasis subnetwork in \(\mathcal{G}\) induces the output node \(o\)). Given a subset of nodes \(\mathcal{N}\), if for every node \(\nu\in\mathcal{N}\), \(\mathcal{K}\Rightarrow\nu\), then we write \(\mathcal{K}\Rightarrow\mathcal{N}\). In [9], it proved that the induction relation is characterized by homeostasis subnetworks. Furthermore, the induction applies in at least one direction for distinct homeostasis subnetworks, and no subnetwork induces itself. Hence, there are four types of homeostasis inducing: structural / appendage homeostasis \(\Rightarrow\) structural / appendage subnetworks. Before stating the main theorem about the classification of homeostasis patterns, we need to introduce some terminology. **Definition A.8**.: Let \(\mathcal{G}\) be an input-output network. The _structural pattern network_\(\mathcal{P}_{\mathcal{S}}\) of \(\mathcal{G}\) is the feedforward network whose nodes are the super-simple nodes \(\rho_{j}\) and the _backbone_ nodes \(\widetilde{\mathcal{L}}_{j}\) given by \[\widetilde{\mathcal{L}}_{j}\ \cup\{\rho_{j},\rho_{j+1}\}=\mathcal{L}(\rho_{j}, \rho_{j+1})\] where \(\mathcal{L}(\rho_{i},\rho_{i+1})\) is a structural homeostasis subnetwork of \(\mathcal{G}\). The arrows of \(\mathcal{P}_{\mathcal{S}}\) are defined by the natural ordering of nodes in \(\mathcal{P}_{\mathcal{S}}\) as \[\iota\rightarrow\widetilde{\mathcal{L}}_{1}\rightarrow\rho_{2}\rightarrow \widetilde{\mathcal{L}}_{2}\rightarrow\cdots\rightarrow\widetilde{\mathcal{L }}_{q}\to o\] (A.2) If a structural homeostasis subnetwork only consists of two adjacent super-simple nodes (Haldane homeostasis type) and the arrow between them, then the corresponding backbone node is an empty set, which is still included in the structural pattern network \(\mathcal{P}_{\mathcal{S}}\). \(\Diamond\) **Definition A.9**.: Each node \(\widetilde{\mathcal{A}}\) in the _appendage pattern network_\(\mathcal{P}_{\mathcal{A}}\) is a component in the condensation of appendage homeostasis subnetworks of \(\mathcal{G}\), and it is called an _appendage component_. There is an arrow connecting appendage components \(\widetilde{\mathcal{A}}_{1}\) and \(\widetilde{\mathcal{A}}_{2}\) if there are nodes \(\tau_{1}\in\widetilde{\mathcal{A}}_{1}\) and \(\tau_{2}\in\widetilde{\mathcal{A}}_{2}\) such that \(\tau_{1}\rightarrow\tau_{2}\in\mathcal{G}\). \(\Diamond\) **Definition A.10**.: Given an appendage component \(\widetilde{\mathcal{A}}\) whose nodes have arrows from or to simple nodes in \(\mathcal{G}\), then 1. Suppose \(\widetilde{\mathcal{V}}\) is the most downstream node in \(\mathcal{P}_{\mathcal{S}}\), such that there is a simple node \(\sigma\in\mathcal{V}\) and an appendage path from a node \(\tau\in\widetilde{\mathcal{A}}\) to the node \(\sigma\). We choose uniquely an arrow from \(\widetilde{\mathcal{A}}\) to the node in the structural pattern network \(\widetilde{\mathcal{V}}\in\mathcal{P}_{\mathcal{S}}\). 2. Suppose \(\widetilde{\mathcal{V}}\) is the most upstream node in \(\mathcal{P}_{\mathcal{S}}\), such that there is a simple node \(\sigma\in\mathcal{V}\) and an appendage path from the node \(\sigma\) to a node \(\tau\in\widetilde{\mathcal{A}}\). We choose uniquely an arrow from the node in the structural pattern network \(\widetilde{\mathcal{V}}\in\mathcal{P}_{\mathcal{S}}\) to \(\widetilde{\mathcal{A}}\). \(\Diamond\) **Definition A.11**.: The _homeostasis pattern network_\(\mathcal{P}\) of \(\mathcal{G}\) is the network whose nodes are the union of the nodes of the structural pattern network \(\mathcal{P}_{\mathcal{S}}\) and the appendage pattern network \(\mathcal{P}_{\mathcal{A}}\). The arrows of \(\mathcal{P}\) are the arrows of \(\mathcal{P}_{\mathcal{S}}\), the arrows of \(\mathcal{P}_{\mathcal{A}}\), and the arrows between them. \(\Diamond\) Besides the super-simple nodes, there is a correspondence between the homeostasis subnetworks and their homeostasis pattern networks. Each structural subnetwork corresponds to a backbone node. Each appendage subnetwork corresponds to an appendage component. Let \(\mathcal{V}\subset\mathcal{G}\) be a homeostasis subnetwork and let \(\widetilde{\mathcal{V}}\in\mathcal{P}\) be the corresponding node in the homeostasis pattern network. For any node \(\nu\in\mathcal{G}\) or a subset of nodes \(\mathcal{N}\subset\mathcal{G}\), we let \(\widetilde{\mathcal{V}}\Rightarrow\nu\) (resp. \(\mathcal{N}\)) denote that \(\mathcal{V}\)_induces_\(\nu\) (resp. \(\mathcal{N}\)). We start with some results of homeostasis patterns in [9]. **Theorem A.12** ([9]).: _Suppose \(\widetilde{\mathcal{A}}\in\mathcal{P}_{\mathcal{A}}\) is an appendage component, and \(\widetilde{\mathcal{L}}\in\mathcal{P}_{\mathcal{S}}\) is a backbone node in the homeostasis pattern network \(\mathcal{P}\), then_ 1. (Structural Homeostasis \(\Rightarrow\) Structural Subnetwork)__\(\widetilde{\mathcal{L}}\) _induces every node of the structural pattern network_ \(\mathcal{P}_{S}\) _downstream from_ \(\widetilde{\mathcal{L}}\)_, but no other nodes of_ \(\mathcal{P}_{S}\)_._ 2. (Structural Homeostasis \(\Rightarrow\) Appendage Subnetwork) _Let_ \(\widetilde{\mathcal{V}}\rightarrow\widetilde{\mathcal{A}}\in\mathcal{P}\) _with_ \(\widetilde{\mathcal{V}}\in\mathcal{P}_{\mathcal{S}}\)_._ \(\widetilde{\mathcal{L}}\Rightarrow\widetilde{\mathcal{A}}\) _if and only if_ \(\widetilde{\mathcal{V}}\) _is strictly downstream from_ \(\widetilde{\mathcal{L}}\)_._ 3. (Appendage Homeostasis \(\Rightarrow\) Structural Subnetwork) _Let \(\widetilde{\mathcal{A}}\rightarrow\widetilde{\mathcal{V}}\in\mathcal{P}\) with \(\widetilde{\mathcal{V}}\in\mathcal{P}_{\mathcal{S}}\). \(\widetilde{\mathcal{A}}\) induces every super-simple node downstream from \(\widetilde{\mathcal{V}}\), but no other super-simple nodes. Further, \(\widetilde{\mathcal{A}}\Rightarrow\widetilde{\mathcal{L}}\) if and only if \(\widetilde{\mathcal{V}}\) is strictly upstream from \(\widetilde{\mathcal{L}}\)._ 4. (Appendage Homeostasis \(\Rightarrow\) Appendage Subnetwork) _Let \(\widetilde{\mathcal{A}}_{1}\) and \(\widetilde{\mathcal{A}}_{2}\) be distinct appendage components. Let \(\widetilde{\mathcal{A}}_{1}\rightarrow\widetilde{\mathcal{V}}_{1},\widetilde {\mathcal{V}}_{2}\rightarrow\widetilde{\mathcal{A}}_{2}\in\mathcal{P}\) with \(\widetilde{\mathcal{V}}_{1},\widetilde{\mathcal{V}}_{2}\in\mathcal{P}_{ \mathcal{S}}\). \(\widetilde{\mathcal{A}}_{1}\Rightarrow\widetilde{\mathcal{A}}_{2}\) if and only if \(\widetilde{\mathcal{A}}_{1}\) is upstream from \(\widetilde{\mathcal{A}}_{2}\) and every path from \(\widetilde{\mathcal{A}}_{1}\) to \(\widetilde{\mathcal{A}}_{2}\) contains a super-simple node which is downstream from \(\widetilde{\mathcal{V}}_{1}\) and upstream from \(\widetilde{\mathcal{V}}_{2}\)._ ## Appendix B Infinitesimal Homeostasis in PRN and GRN We now apply the infinitesimal homeostasis results of Wang et al. [65] to the PRN \(\mathcal{R}\). We show that these results can be determined directly from graph theory on the GRN \(\mathcal{G}\) itself. We start with the observation that, even though a GRN has no associated ODE, it is formally an abstract input-output network. Hence, we can apply all the combinatorial constructions explained in subsection A.2 to a GRN \(\mathcal{G}\) and its associated PRN \(\mathcal{R}\) (possible self-couplings of GRNs have no effect when carrying out these procedures). The main goal of this section is to show how these combinatorial constructions relate to each other. ### Simple Paths in the PRN We start by describing how the simple paths in the GRN relate to the simple paths in the PRN. In turn, this allows us describe how to relate simple nodes, super simple nodes and appendage nodes among the two networks **Lemma B.1**.: _Let \(\mathcal{G}\) be an input-output GRN and \(\mathcal{R}\) be the associated input-output PRN._ 1. _A path in_ \(\mathcal{G}\)__ \[\sigma_{1}\rightarrow\cdots\rightarrow\sigma_{m}\] _is a simple if and only if the corresponding path_ \[\sigma_{1}^{R}\rightarrow\sigma_{1}^{P}\rightarrow\cdots\rightarrow\sigma_{m }^{R}\rightarrow\sigma_{m}^{P}\] _is a simple path in_ \(\mathcal{R}\)__ 2. _A path in_ \(\mathcal{G}\)__ \[\iota\rightarrow\sigma_{1}\rightarrow\cdots\rightarrow\sigma_{m}\to o\] _is an_ \(\iota o\)_-simple path if and only if the corresponding path_ \[\iota^{R}\rightarrow\iota^{P}\rightarrow\sigma_{1}^{R}\rightarrow\sigma_{1}^{P }\rightarrow\cdots\rightarrow\sigma_{m}^{R}\rightarrow\sigma_{m}^{P} \to o^{R}\to o^{P}\] _is an_ \(\iota^{R}o^{P}\)_-simple path in_ \(\mathcal{R}\)_._ Proof.: (a) Recall that in an input-output network, an \(\iota o\)-simple path is a simple path from the input node to the output node. Suppose there is an \(\iota o\)-simple path in \(\mathcal{G}\) as follows \[\iota\rightarrow\sigma_{1}\rightarrow\cdots\rightarrow\sigma_{m}\to o\] (B.1) Note that self-coupling is never an arrow in a \(\iota o\)-simple path in \(\mathcal{G}\). Thus, the \(\iota o\)-simple path (B.1) lifts uniquely to the following \(\iota^{R}o^{P}\)-simple path in \(\mathcal{R}\): \[\iota^{R}\rightarrow\iota^{P}\rightarrow\sigma_{1}^{R}\rightarrow\sigma_{1 }^{P}\rightarrow\cdots\rightarrow\sigma_{m}^{R}\rightarrow\sigma_{m}^{P} \to o^{R}\to o^{P}\] (B.2) On the other hand, since every \(\iota^{R}o^{P}\)-simple path consists of 2-node blocks \(j^{R}\to j^{P}\) for several GRN-nodes \(j\), an \(\iota^{R}o^{P}\)-simple path in the PRN is always a lift of an \(\iota o\)-simple path in the GRN. (b) Similar as in item (a), it is clear that every simple path in \(\mathcal{G}\) lifts uniquely to a simple path in \(\mathcal{R}\). Since every simple path also consists of 2-node blocks \(j^{R}\to j^{P}\) for several GRN-nodes \(j\), every simple path in \(\mathcal{R}\) is always a lift of a simple path in \(\mathcal{G}\). **Remark B.2**.: No self-coupling arrow lies on a simple path in the GRN. Similarly, the arrow \(\sigma^{P}\rightarrow\sigma^{R}\) never lies on a simple path in the PRN, otherwise such a path would contain \(\sigma^{R}\rightarrow\sigma^{P}\rightarrow\sigma^{R}\) and hence would not be a simple path. **Lemma B.3**.: _Let \(\mathcal{G}\) be an input-output GRN and \(\mathcal{R}\) be the associated input-output PRN._ 1. _Node_ \(\tau\) _is a simple node in_ \(\mathcal{G}\) _if and only if the nodes_ \(\tau^{R},\tau^{P}\) _are simple nodes in_ \(\mathcal{R}\)_._ 2. _Node_ \(\tau\) _is a super-simple node in_ \(\mathcal{G}\) _if and only if the nodes_ \(\tau^{R},\tau^{P}\) _are super-simple nodes in_ \(\mathcal{R}\)_._ 3. _Node_ \(\tau\) _is an appendage node in_ \(\mathcal{G}\) _if and only if the nodes_ \(\tau^{R},\tau^{P}\) _are appendage nodes in_ \(\mathcal{R}\)_._ Proof.: (a) Suppose that the node \(\tau\) is simple in \(\mathcal{G}\), then there is an \(\iota o\)-simple path in \(\mathcal{G}\) containing \(\tau\), such that \[\iota\rightarrow\sigma_{1}\rightarrow\cdots\rightarrow\tau\rightarrow\cdots\to o\] (B.3) By Lemma B.1 (b), there exists an \(\iota^{R}o^{P}\)-simple path lifted from (B.3) in the PRN \(\mathcal{R}\) as \[\iota^{R}\rightarrow\iota^{P}\rightarrow\sigma_{1}^{R}\rightarrow\sigma_{1 }^{P}\rightarrow\cdots\rightarrow\tau^{R}\rightarrow\tau^{P}\rightarrow \cdots\to o^{R}\to o^{P}\] (B.4) Thus, we obtain that both nodes \(\tau^{R},\tau^{P}\) are simple nodes in \(\mathcal{R}\). On the other hand, assume both nodes \(\tau^{R},\tau^{P}\) are simple nodes in \(\mathcal{R}\), thus there exists an \(\iota^{R}o^{P}\)-simple path containing \(\tau^{R},\tau^{P}\) in \(\mathcal{R}\). Using Lemma B.1, this \(\iota^{R}o^{P}\)-simple path is a lift of an \(\iota o\)-simple path in the GRN containing \(\tau\). This implies the node \(\tau\) is simple in \(\mathcal{G}\). (b) The node \(\tau\) is super-simple in \(\mathcal{G}\) if and only if every \(\iota o\)-simple path in \(\mathcal{G}\) contains \(\tau\). From Lemma B.1 (b), every \(\iota^{R}o^{P}\)-simple paths in \(\mathcal{R}\) is a lift of an \(\iota o\)-simple path in \(\mathcal{G}\). Thus, \(\tau\) is super-simple in \(\mathcal{G}\) is equivalent to every \(\iota^{R}o^{P}\)-simple paths must contain both \(\tau^{R}\) and \(\tau^{P}\), which represents both nodes \(\tau^{R},\tau^{P}\) are super-simple nodes in \(\mathcal{R}\). (c) Since we know all nodes are either appendage or simple in \(\mathcal{G}\) and \(\mathcal{R}\). Suppose \(\tau\) is an appendage node. From part (a) we see that neither \(\tau^{R},\tau^{P}\) are simple nodes in \(\mathcal{R}\). **Lemma B.4**.: _The set \(\{\iota=\rho_{1},\rho_{2},\ldots,\rho_{q},\rho_{q+1}=o\}\) of super-simple nodes in \(\mathcal{G}\) is well ordered by the order of their appearance on any \(\iota\)o-simple path. We denote the ordering of the super-simple nodes by_ \[\rho_{1}\prec\cdots\prec\rho_{q}\prec\rho_{q+1}\] (B.5) _Then the ordering of the super-simple nodes \(\{\rho_{1}^{R},\rho_{1}^{P},\ldots,\rho_{q+1}^{R},\rho_{q+1}^{P}\}\) in \(\mathcal{R}\) is_ \[\rho_{1}^{R}\prec\rho_{1}^{P}\prec\cdots\prec\rho_{q+1}^{R}\prec\rho_{q+1}^{P}\] (B.6) _Moreover, \(\mathcal{L}(\rho_{i}^{R},\rho_{i}^{P})=\{\rho_{i}^{R}\to\rho_{i}^{P}\}\) for \(i=1,2,\ldots,q+1\)._ Proof.: From Lemma B.3, we get \(\rho_{1}^{R},\rho_{1}^{P},\ldots,\rho_{q+1}^{R},\rho_{q+1}^{P}\) are the super-simple nodes in \(\mathcal{R}\). Since the order of super-simple nodes in \(\mathcal{G}\) is given in (B.5), all \(\iota\)o-simple paths follow \[\rho_{1}\rightsquigarrow\rho_{2}\rightsquigarrow\cdots\rightsquigarrow\rho_{q} \rightsquigarrow\rho_{q+1}\] (B.7) where \(\rho_{j}\rightsquigarrow\rho_{j+1}\) indicates a simple path from \(\rho_{j}\) to \(\rho_{j+1}\). Note that every \(\iota^{R}o^{P}\)-simple path in \(\mathcal{R}\) is always a lift of an \(\iota\)o-simple path in \(\mathcal{G}\). Each \(\iota^{R}o^{P}\)-simple path must satisfy \[\rho_{1}^{R}\rightsquigarrow\rho_{1}^{P}\rightsquigarrow\cdots\rightsquigarrow \rho_{q+1}^{R}\rightsquigarrow\rho_{q+1}^{P}\] (B.8) and we prove (B.6). For any \(i=1,2,\ldots,q+1\), the mRNA node \(\rho_{i}^{R}\) only goes to the protein node \(\rho_{i}^{P}\), and it is also the only node which has an arrow to \(\rho_{i}^{P}\) in \(\mathcal{R}\). Therefore, as adjacent super-simple nodes \(\rho_{i}^{R},\rho_{i}^{P}\), we get that \(\rho_{i}^{R}\to\rho_{i}^{P}\) is the only simple path, so \(\mathcal{L}(\rho_{i}^{R},\rho_{i}^{P})=\{\rho_{i}^{R}\to\rho_{i}^{P}\}\). **Lemma B.5**.: _Let \(\sigma_{1},\sigma_{2}\) be two distinct appendage nodes in \(\mathcal{G}\). Suppose there is an appendage path between \(\sigma_{1}\) and \(\sigma_{2}\). Then \(\sigma_{1}^{P},\sigma_{2}^{R}\) are appendage nodes, and there exists an appendage path connecting \(\sigma_{1}^{P}\) and \(\sigma_{2}^{R}\) in \(\mathcal{R}\)._ Proof.: Since \(\sigma_{1},\sigma_{2}\) are appendage nodes in \(\mathcal{G}\), we obtain that \(\sigma_{1}^{P},\sigma_{2}^{R}\) are appendage nodes in \(\mathcal{R}\) from Lemma B.3. Assume the appendage path connecting \(\sigma_{1}\) and \(\sigma_{2}\) is \[\sigma_{1}\to\tau_{1}\to\cdots\to\tau_{m}\to\sigma_{2}\] where \(\{\tau_{i}\}_{i=1}^{m}\) are appendage nodes in \(\mathcal{G}\). Then there exists a corresponding path in \(\mathcal{R}\), such that \[\sigma_{1}^{P}\to\tau_{1}^{R}\to\tau_{1}^{P}\to\cdots\to\tau_{m}^{R}\to\tau_{m }^{P}\to\sigma_{2}^{R}\] Again from Lemma B.3, \(\{\tau_{i}^{R},\tau_{i}^{P}\}_{i=1}^{m}\) are all appendage nodes in \(\mathcal{R}\). Therefore, we find the appendage path connecting two nodes \(\sigma_{1}^{P}\) and \(\sigma_{2}^{R}\). ### Homeostasis Subnetworks in GRN In this section we prove the first main result of the paper, which gives the relation between the homeostasis subnetworks of the GRN and the associated PRN. **Theorem B.6**.: _Let \(\mathcal{G}\) be an input-output GRN with associated input-output PRN \(\mathcal{R}\)._ 1. _Every_ \(\mathcal{G}\)_-structural subnetwork_ \(\mathcal{L}(\rho_{i},\rho_{i+1})\)_, where_ \(\rho_{i},\rho_{i+1}\) _are consecutive super-simple nodes, corresponds to a_ \(\mathcal{R}\)_-structural subnetwork_ \(\mathcal{L}(\rho_{i}^{P},\rho_{i+1}^{R})\)_._ 2. _Every super-simple node_ \(\rho\in\mathcal{G}\) _corresponds to a_ \(\mathcal{R}\)_-Haldane subnetwork_ \(\mathcal{L}(\rho^{R},\rho^{P})=\{\rho^{R}\to\rho^{P}\}\)_, or_ \(\mathcal{L}(\rho^{R},\rho^{P})=\{\rho^{R}\leftrightarrows\rho^{P}\}\) _if_ \(\rho\) _has a self-coupling._ 3. _Every non-single appendage node_ \(\mathcal{G}\)_-subnetwork_ \(\mathcal{A}\) _corresponds to a_ \(\mathcal{R}\)_-appendage subnetwork_ \(\mathcal{A}^{\mathcal{R}}\)_. In particular, a non-single appendage node_ \(\mathcal{G}\)_-subnetwork_ \(\{\tau\!\! (b) Assume \(\tau\in\mathcal{L}(\rho_{i},\rho_{i+1})\) is an appendage node in \(\mathcal{G}\), then there exists a cycle consisting of at least one non-super-simple simple and appendage nodes including \(\tau\), that is, \[\tau\rightarrow\sigma_{1}\rightarrow\cdots\rightarrow\sigma_{m}\rightarrow\tau\] (B.9) where \(\sigma_{j}\in\mathcal{L}(\rho_{i},\rho_{i+1})\) for \(j=1,\ldots,m\) and at least one node in cycle is non-super-simple simple. Now, we can assume (w.l.o.g.) that \(\sigma_{1}\in\mathcal{L}(\rho_{i},\rho_{i+1})\), and obtain the following cycle in \(\mathcal{R}\): \[\tau^{R}\rightarrow\tau^{P}\rightarrow\sigma_{1}^{R}\rightarrow\cdots \rightarrow\sigma_{m}^{P}\rightarrow\tau^{R}\] (B.10) It is clear that \(\sigma_{1}^{R}\neq\rho_{i+1}^{R}\) and \(\sigma_{1}^{P}\neq\rho_{i}^{P}\). From part \((a)\), we derive \(\{\sigma_{1}^{R},\sigma_{1}^{P}\}\subset\mathcal{L}(\rho_{i}^{P},\rho_{i+1}^{ R})\) in \(\mathcal{R}\). This works for all other non-super-simple simple nodes in (B.9) as well. Hence, (B.10) is a cycle consisting of at least two non-super-simple simple and appendage nodes including \(\tau^{R},\tau^{P}\). Therefore, \(\{\tau^{R},\tau^{P}\}\subset\mathcal{L}(\rho_{i}^{P},\rho_{i+1}^{R})\) in \(\mathcal{R}\). Now, suppose that both appendage nodes \(\{\tau^{R},\tau^{P}\}\subset\mathcal{L}(\rho_{i}^{P},\rho_{i+1}^{R})\) in \(\mathcal{R}\). There must exist a cycle in \(\mathcal{L}(\rho_{i}^{P},\rho_{i+1}^{R})\) containing \(\tau^{R},\tau^{P}\) but \(\rho_{i}^{P},\rho_{i+1}^{R}\). By Lemma B.3 and item \((a)\), this must be a lift of a cycle in \(\mathcal{L}(\rho_{i},\rho_{i+1})\) containing \(\tau\) but \(\rho_{i},\rho_{i+1}\). (c) Suppose that \(\tau\) is a single appendage node in \(\mathcal{G}\), then \(\mathcal{A}_{j}=\{\tau\}\) and there is no cycle containing \(\tau\) in \(\mathcal{A}_{\mathcal{G}}\). By item (b), since the node \(\tau\) is not self-coupling, there is no cycle containing \(\tau^{R}\) and \(\tau^{P}\) in \(\mathcal{A}_{\mathcal{R}}\). Therefore \(\tau^{R}\) and \(\tau^{P}\) form two separate appendage subnetworks in \(\mathcal{R}\). Next, suppose that both appendage nodes \(\{\tau^{R}\}\) and \(\{\tau^{P}\}\) form two appendage subnetworks in \(\mathcal{R}\). Thus, there is no cycle containing \(\tau^{R}\) and \(\tau^{P}\) in \(\mathcal{A}_{\mathcal{R}}\). From item (b), it follows that node \(\tau\) isn't self-coupling, and all cycles in \(\mathcal{A}_{\mathcal{G}}\) exclude \(\tau\). (d) We first consider an appendage subnetwork \(\mathcal{A}_{j}=\{\tau\}\) consisting of a single appendage node with self-coupling. It follows from item (c) that there is no cycle containing \(\tau^{R}\) and \(\tau^{P}\) in \(\mathcal{A}_{\mathcal{R}}\), except the cycle \(\tau^{R}\rightarrow\tau^{P}\rightarrow\tau^{R}\) from self-coupling on \(\tau\). Then \(\{\tau^{R},\tau^{P}\}=\mathcal{A}_{j}^{\mathcal{R}}\) form an appendage subnetwork in \(\mathcal{R}\). Next, suppose that the appendage node \(\tau\subset\mathcal{A}_{j}\) in \(\mathcal{G}\), there exists a cycle consisting of appendage nodes including \(\tau\) in \(\mathcal{A}_{j}\subseteq\mathcal{A}_{\mathcal{G}}\) as follows \[\tau\rightarrow\tau_{1}\rightarrow\cdots\rightarrow\tau_{n}\rightarrow\tau\] (B.11) where \(\tau_{i}\in\mathcal{A}_{j}\) for \(i=1,\ldots,n\). From items (b) and (c), this lifts uniquely to a corresponding cycle in \(\mathcal{A}_{\mathcal{R}}\): \[\tau^{R}\rightarrow\tau^{P}\rightarrow\tau_{1}^{R}\rightarrow\cdots \rightarrow\tau_{n}^{P}\rightarrow\tau^{R}\] (B.12) thus \(\{\tau^{R},\tau^{P}\}\subset\mathcal{A}_{j}^{\mathcal{R}}\) in \(\mathcal{R}\). The converse direction of item (d) follows from items (b) and (c). Proof of Theorem b.6.: Let \(\mathcal{G}\) be an input-output GRN with \(n+2\) nodes and with \(q+1\) super-simple nodes \(\iota=\rho_{1}\prec\cdots\prec\rho_{q}\prec\rho_{q+1}=o\). Then \(\mathcal{G}\) has \(q\) structural homeostasis subnetworks (self-couplings can be kept during these procedures) \[\mathcal{L}(\rho_{1},\rho_{2}),\;\mathcal{L}(\rho_{2},\rho_{3}),\;\ldots,\; \mathcal{L}(\rho_{q-1},\rho_{q}),\;\mathcal{L}(\rho_{q},,\rho_{q+1})\] (B.13) where \(\mathcal{L}(\rho_{i},\rho_{i+1})=\mathcal{L}^{\prime}(\rho_{i},\rho_{i+1})\,\cup \,\mathcal{B}_{i,i+1}\), with \(\mathcal{B}_{i,i+1}\) consisting of all appendage path components in \(\mathcal{A}_{\mathcal{G}}\) that violate the no cycle condition with respect to simple nodes in \(\mathcal{L}^{\prime}(\rho_{i},\rho_{i+1})\). Moreover, \(\mathcal{G}\) has \(r+s\) appendage homeostasis subnetworks \[\mathcal{A}^{\prime}_{1},\ldots,\mathcal{A}^{\prime}_{r},\ \mathcal{A}_{1}, \ldots,\mathcal{A}_{s}\] (B.14) where \(\mathcal{A}^{\prime}_{i}=\{\tau_{i}\}\) and \(\tau_{i}\) is a single appendage node for \(1\leq i\leq r\). First, we consider appendage homeostasis subnetworks. In \(\mathcal{G}\), the appendage homeostasis subnetworks \(\mathcal{A}^{\prime}_{i}\) consists of a single appendage node \(\tau_{i}\) for \(1\leq i\leq r\). From Lemma B.7(c), nodes \(\tau_{i}^{R}\) and \(\tau_{i}^{P}\) form two appendage homeostasis subnetworks in \(\mathcal{R}\), which correspond to two irreducible blocks of the form \[\text{R-null-degradation: }\left[f_{\tau_{i}^{R},\tau_{i}^{R}}\right]\ \text{ and }\ \text{P-null-degradation: }\left[f_{\tau_{i}^{P},\tau_{i}^{P}}\right]\] For the rest of appendage homeostasis subnetworks in \(\mathcal{G}\), we assume that \[\mathcal{A}_{j}:=\{\sigma_{j_{1}},\ldots,\sigma_{j_{k}}\},\text{ for }1\leq j\leq s\] Applying Lemma B.7(d), we obtain the corresponding appendage homeostasis subnetworks \(\mathcal{A}^{\mathcal{R}}_{j}\) in \(\mathcal{R}\) as follows (including appendage nodes with self-coupling) \[\mathcal{A}^{\mathcal{R}}_{j}=\{\sigma^{R}_{j_{1}},\sigma^{P}_{j_{1}},\ldots, \sigma^{R}_{j_{k}},\sigma^{P}_{j_{k}}\}\] Now we deal with structural homeostasis subnetworks. Given super-simple nodes \(\iota=\rho_{1}\prec\cdots\prec\rho_{q}\prec\rho_{q+1}=o\) in \(\mathcal{G}\), from Lemma B.4 we have that \(\rho_{1}^{R},\rho_{1}^{P},\ldots,\rho_{q+1}^{R},\rho_{q+1}^{P}\) are all super-simple nodes in \(\mathcal{R}\), and \(\mathcal{L}(\rho_{i}^{R},\rho_{i}^{P})=\langle\rho_{i}^{R},\rho_{i}^{P}\rangle\) for \(1\leq i\leq q+1\). Clearly, \(\mathcal{L}(\rho_{i}^{R},\rho_{i}^{P})\) has the input node \(\rho_{i}^{R}\) and the output node \(\rho_{i}^{P}\), thus it corresponds to the following irreducible components in \(B\): \[\text{$\mathcal{R}$-Haldane: }\left[f_{\rho_{i}^{P},\rho_{i}^{R}}\right]\] For other structural homeostasis subnetworks in \(\mathcal{G}\), we assume that \[\mathcal{L}(\rho_{j},\rho_{j+1}):=\{\rho_{j},\rho_{j+1},\sigma_{j_{1}},\ldots, \sigma_{j_{k}}\},\text{ for }1\leq j\leq q\] Using Lemma B.7, we obtain the corresponding structural homeostasis subnetworks in \(\mathcal{R}\) \[\mathcal{L}(\rho_{i}^{P},\rho_{i+1}^{R})=\{\rho_{i}^{P},\rho_{i+1}^{R},\sigma _{i_{1}}^{R},\sigma_{i_{1}}^{P},\ldots,\sigma_{i_{l}}^{R},\sigma_{i_{l}}^{P}\}\] In summary, from the doubling of nodes, it follows that the associated PRN \(\mathcal{R}\) has \(2q\) structural homeostasis subnetworks \[\mathcal{L}(\rho_{1}^{R},\rho_{1}^{P}),\ \mathcal{L}(\rho_{1}^{P},\rho_{2}^{R}), \ \ldots,\ \mathcal{L}(\rho_{q}^{P},\rho_{q+1}^{R}),\ \mathcal{L}(\rho_{q+1}^{R},\rho_{q+1}^{P})\] (B.15) where \(\mathcal{L}(\rho_{i}^{R},\rho_{i}^{P})=\{\rho_{i}^{R}\to\rho_{i}^{P}\}\), or \(\mathcal{L}(\rho_{i}^{R},\rho_{i}^{P})=\{\rho_{i}^{R}\rightleftarrows\rho_{i}^ {P}\}\) if the super-simple node \(\rho\) has a self-coupling, for \(1\leq i\leq q+1\) are \(\mathcal{R}\)-Haldane subnetworks. Moreover, \(\mathcal{R}\) has \(2r+s\) appendage homeostasis subnetworks \[\mathcal{A}^{R}_{1},\mathcal{A}^{P}_{1},\ldots,\mathcal{A}^{R}_{r},\mathcal{A} ^{P}_{r},\ \mathcal{A}^{\mathcal{R}}_{1},\ldots,\mathcal{A}^{\mathcal{R}}_{s}\] (B.16) where \(\mathcal{A}^{R}_{i}=\{\tau_{i}^{R}\}\), \(\mathcal{A}^{P}_{i}=\{\tau_{i}^{P}\}\) for \(1\leq i\leq r\). This gives a complete correspondence between the homeostasis subnetworks of a GRN and its associated PRN. ### Enumerating Homeostasis Subnetworks in GRN and PRN The following algorithm is used to find the different homeostasis subnetworks of a GRN \(\mathcal{G}\), with an input node \(\iota\) and an output node \(o\), and the homeostasis subnetworks of the associated PRN \(\mathcal{R}\). Step 0:Reduce the input-output network to a core GRN network \(\mathcal{G}\) and then let \(\mathcal{R}\) be the RPN of the core GRN. This process is the same as first forming the RPN of the GRN and then reducing to the core network. Step 1:Identify the \(\iota o\)-simple paths in the core GRN network \(\mathcal{G}\) and the simple nodes \(\sigma\), the super-simple nodes \(\rho\), and the appendage nodes \(\tau\) of \(\mathcal{G}\). The self-couplings of simple \(\mathcal{G}\)-nodes can be removed. It follows from Lemma B.1 that \(\{\sigma^{R},\sigma^{P}\}\) are simple nodes, \(\{\rho^{R},\rho^{P}\}\) are super-simple nodes, and \(\{\tau^{R},\tau^{P}\}\) are appendage nodes of \(\mathcal{R}\). Step 2:Determine the appendage homeostasis subnetworks of \(\mathcal{G}\). Specifically, the appendage subnetwork \(\mathcal{A}_{\mathcal{G}}\) of \(\mathcal{G}\) can be written uniquely as the disjoint union \[\mathcal{A}_{\mathcal{G}}=(\mathcal{A}^{\prime}_{1}\;\dot{\cup}\;\cdots\; \dot{\cup}\;\mathcal{A}^{\prime}_{r})\;\;\dot{\cup}\;\;(\mathcal{A}_{1}\; \dot{\cup}\;\cdots\;\dot{\cup}\;\mathcal{A}_{s})\;\;\dot{\cup}\;\;(\mathcal{ B}_{1}\;\dot{\cup}\;\cdots\;\dot{\cup}\;\mathcal{B}_{t})\] (B.17) where each \(\mathcal{A}^{\prime}_{i}\) consists of a single appendage node, each \(\mathcal{A}_{i}\) is a no cycle appendage path component, and each \(\mathcal{B}_{i}\) is an appendage path component that violates the no cycle condition. The appendage homeostasis subnetworks of \(\mathcal{G}\) are \[\mathcal{A}^{\prime}_{1},\ldots,\mathcal{A}^{\prime}_{r},\mathcal{A}_{1}, \ldots,\mathcal{A}_{s}\] (B.18) By Lemma B.7, the corresponding appendage subnetwork \(\mathcal{A}_{\mathcal{R}}\) of \(\mathcal{R}\) can be written as \[\mathcal{A}_{\mathcal{R}}=(\mathcal{A}^{R}_{1}\;\dot{\cup}\;\mathcal{A}^{P}_{ 1}\;\dot{\cup}\;\cdots\;\dot{\cup}\;\mathcal{A}^{R}_{r}\;\dot{\cup}\;\mathcal{ A}^{P}_{r})\;\;\dot{\cup}\;\;(\mathcal{A}^{\mathcal{R}}_{1}\;\dot{\cup}\; \cdots\;\dot{\cup}\;\mathcal{A}^{\mathcal{R}}_{s})\;\;\dot{\cup}\;\;(\mathcal{ B}^{\mathcal{R}}_{1}\;\dot{\cup}\;\cdots\;\dot{\cup}\;\mathcal{B}^{\mathcal{R}}_{t})\] (B.19) where \[\begin{split}&\{\tau^{R},\tau^{P}\}\subset\mathcal{A}^{\mathcal{R }}_{j}\;(\text{or}\;\mathcal{B}^{\mathcal{R}}_{j})\;\;\text{if}\;\;\tau\in \mathcal{A}_{j}\;(\text{or}\;\mathcal{B}_{j})\\ &\mathcal{A}^{R}_{j}=\{\tau^{R}\},\;\mathcal{A}^{P}_{j}=\{\tau^{P} \}\;\;\text{if}\;\;\mathcal{A}^{\prime}_{j}=\{\tau\}\end{split}\] (B.20) The appendage homeostasis subnetworks of \(\mathcal{R}\) are \[\mathcal{A}^{R}_{1},\mathcal{A}^{P}_{1},\ldots,\mathcal{A}^{R}_{r},\mathcal{ A}^{P}_{r},\mathcal{A}^{\mathcal{R}}_{1},\ldots,\mathcal{A}^{\mathcal{R}}_{s}\] (B.21) In particular, if \(\tau\) is an appendage node with self-coupling which gives an appendage subnetwork \(\{\tau\dot{\sim}\}\) of \(\mathcal{G}\) then the corresponding appendage network of \(\mathcal{R}\) is \(\{\tau^{R}\rightleftarrows\tau^{P}\}\). Step 3:Determine the structural homeostasis subnetworks of \(\mathcal{G}\). Let \[\iota=\rho_{1}\prec\cdots\prec\rho_{q}\prec\rho_{q+1}=o\] (B.22) be the ordered set of super-simple nodes of \(\mathcal{G}\). The super-simple subnetworks of \(\mathcal{G}\) are \[\mathcal{L}^{\prime}(\rho_{1},\rho_{2}),\ \mathcal{L}^{\prime}(\rho_{2}, \rho_{3})\,\ldots,\ \mathcal{L}^{\prime}(\rho_{q-1},\rho_{q}),\ \mathcal{L}^{\prime}(\rho_{q},\rho_{q+1})\] (B.23) Then, the structural subnetworks of \(\mathcal{G}\) are \[\mathcal{L}(\rho_{1},\rho_{2}),\ \mathcal{L}(\rho_{2},\rho_{3})\,\ldots,\ \mathcal{L}(\rho_{q-1},\rho_{q}),\ \mathcal{L}(\rho_{q},,\rho_{q+1})\] (B.24) where \(\mathcal{L}(\rho_{i},\rho_{i+1})=\mathcal{L}^{\prime}(\rho_{i},\rho_{i+1})\cup \mathcal{B}\), with \(\mathcal{B}\) consists of all appendage path components that violate the no cycle condition with simple nodes in \(\mathcal{L}^{\prime}(\rho_{i},\rho_{i+1})\). By Lemma B.4, the ordered set of super-simple nodes of \(\mathcal{R}\) are \[\iota=\rho_{1}^{R}\prec\rho_{1}^{P}\prec\cdots\prec\rho_{q+1}^{R}\prec\rho_{q +1}^{P}=o\] By Lemma B.7 the super-simple subnetworks of \(\mathcal{R}\) are \[\mathcal{L}^{\prime}(\rho_{1}^{R},\rho_{1}^{P}),\mathcal{L}^{\prime}(\rho_{1 }^{P},\rho_{2}^{R}),\ \ldots,\ \mathcal{L}^{\prime}(\rho_{q}^{P},\rho_{q+1}^{R}),\ \mathcal{L}^{\prime}(\rho_{q+1}^{R},\rho_{q+1}^{P})\] (B.25) where \[\mathcal{L}^{\prime}(\rho_{i}^{R},\rho_{i}^{P})=\langle\rho_{i}^{R},\rho_{i}^ {P}\rangle\ \ \text{and}\ \ \{\tau^{R},\tau^{P}\}\subset\mathcal{L}^{\prime}(\rho_{i}^{P},\rho_{i+1}^{R}) \ \ \text{if}\ \ \tau\in\mathcal{L}^{\prime}(\rho_{i},\rho_{i+1})\] (B.26) Then, the structural subnetworks of \(\mathcal{R}\) are \[\mathcal{L}(\rho_{1}^{R},\rho_{1}^{P}),\mathcal{L}(\rho_{1}^{P},\rho_{2}^{R} ),\ \ldots,\ \mathcal{L}(\rho_{q}^{P},\rho_{q+1}^{R}),\ \mathcal{L}(\rho_{q+1}^{R},\rho_{q+1}^{P})\] (B.27) where \(\mathcal{L}(\rho_{i}^{R},\rho_{i}^{P})=\{\rho_{i}^{R}\to\rho_{i}^{P}\}\), or \(\mathcal{L}(\rho_{i}^{R},\rho_{i}^{P})=\{\rho_{i}^{R}\rightleftarrows\rho_{i} ^{P}\}\), and \(\mathcal{L}(\rho_{i}^{P},\rho_{i+1}^{R})=\mathcal{L}^{\prime}(\rho_{i}^{R}, \rho_{i}^{P})\cup\mathcal{B}^{\mathcal{R}}\), with \(\mathcal{B}^{\mathcal{R}}\) consisting of all appendage path components that violate the no cycle condition with respect to simple nodes in \(\mathcal{L}^{\prime}(\rho_{i}^{R},\rho_{i}^{P})\). ## Appendix C Homeostasis Patterns in PRN and GRN In this section, we obtain the homeostasis in the PRN and show how they relate to the homeostasis patterns in the GRN. ### Homeostasis Pattern Networks We start by showing the relation between the _homeostasis pattern network_\(\mathcal{P}(\mathcal{G})\) of the GRN and the _homeostasis pattern network_\(\mathcal{P}(\mathcal{R})\) of the associated PRN. We start by relating the nodes in both networks. First, we establish the relation among the nodes in structural pattern network \(\mathcal{P}_{\mathcal{S}}\) and the appendage components \(\mathcal{P}_{\mathcal{A}}\) in both homeostasis pattern networks \(\mathcal{P}(\mathcal{G})\) and \(\mathcal{P}(\mathcal{R})\). **Lemma C.1**.: _Let \(\mathcal{G}\) be an input-output GRN and \(\mathcal{R}\) be the associated input-output PRN._ (a): _The nodes of the structural pattern network_ \(\mathcal{P}_{\mathcal{S}}(\mathcal{G})\) _are_ \[\iota=\rho_{1},\ \widetilde{\mathcal{L}}_{1},\ \rho_{2},\ \widetilde{\mathcal{L}}_{2}, \ \ldots,\ \widetilde{\mathcal{L}}_{q},\ \rho_{q+1}=o\] (C.1) _where_ \(\widetilde{\mathcal{L}}_{j}\cup\{\rho_{j},\rho_{j+1}\}=\mathcal{L}(\rho_{j}, \rho_{j+1})\) _for_ \(1\leq j\leq q\)_._ _The appendage components of the appendage pattern network_ \(\mathcal{P}_{\mathcal{A}}(\mathcal{G})\) _are_ \[\widetilde{\mathcal{A}}^{\prime}_{1},\ldots,\widetilde{\mathcal{A}}^{\prime}_ {r},\ \widetilde{\mathcal{A}}_{1},\ldots,\widetilde{\mathcal{A}}_{s}\] (C.2) _where_ \(\widetilde{\mathcal{A}}^{\prime}_{i}=\widetilde{\mathcal{A}}^{\prime}_{i}\) _for_ \(1\leq i\leq r\)_, and_ \(\widetilde{\mathcal{A}}_{j}=\widetilde{\mathcal{A}}_{j}\) _for_ \(1\leq j\leq s\)_._ (b): _The nodes of the structural pattern network_ \(\mathcal{P}_{\mathcal{S}}(\mathcal{R})\) _are_ \[\iota=\rho_{1}^{R},\ \widetilde{\mathcal{L}}(\rho_{1}^{R},\rho_{1}^{P}),\ \rho_{1}^{P},\ \widetilde{\mathcal{L}}(\rho_{1}^{P},\rho_{2}^{R}),\ \rho_{2}^{R},\ \ldots,\ \widetilde{ \mathcal{L}}(\rho_{q+1}^{R},\rho_{q+1}^{P}),\ \rho_{q+1}^{P}=o\] (C.3) _where the backbone nodes_ \(\widetilde{\mathcal{L}}\) _are obtained by_ \[\begin{split}&\widetilde{\mathcal{L}}(\rho_{i}^{R},\rho_{i}^{P})= \emptyset,\ \text{for}\ i=1,\ldots,q+1\\ &\widetilde{\mathcal{L}}(\rho_{j}^{P},\rho_{j+1}^{R})\cup\{\rho_ {j}^{P},\rho_{j+1}^{R}\}=\mathcal{L}(\rho_{j}^{P},\rho_{j+1}^{R}),\ \text{for}\ j=1,\ldots,q\end{split}\] (C.4) _The appendage components of the appendage pattern network_ \(\mathcal{P}_{\mathcal{A}}(\mathcal{R})\) _are_ \[\widetilde{\mathcal{A}}_{1}^{R},\ldots,\widetilde{\mathcal{A}}_{r}^{P},\ \widetilde{\mathcal{A}}_{1}^{\mathcal{R}},\ldots,\widetilde{\mathcal{A}}_{s}^ {\mathcal{R}}\] (C.5) _where_ \(\widetilde{\mathcal{A}}_{i}^{R}=\mathcal{A}_{i}^{R},\widetilde{\mathcal{A}}_{i }^{P}=\mathcal{A}_{i}^{P}\) _for_ \(1\leq i\leq r\)_, and_ \(\widetilde{\mathcal{A}}_{j}^{\mathcal{R}}=\mathcal{A}_{j}^{\mathcal{R}}\) _for_ \(1\leq j\leq s\)_._ Proof.: (a) It follows straightly from the Definition A.8 and A.9. (b) The super-simple nodes in \(\mathcal{R}\) are \[\rho_{1}^{R},\rho_{1}^{P},\ldots,\rho_{q+1}^{R},\rho_{q+1}^{P}\] and there are two types of structural homeostasis subnetworks \[\begin{split}&\mathcal{L}(\rho_{i}^{R},\rho_{i}^{P})=\{\rho_{i}^{R}, \rho_{i}^{P}\},\ \text{for}\ i=1,\ldots,q+1\\ &\mathcal{L}(\rho_{j}^{P},\rho_{j+1}^{R})\supseteq\{\rho_{j}^{P},\rho_{j+1}^{R}\},\ \text{for}\ j=1,\ldots,q\end{split}\] (C.6) Note that every structural homeostasis subnetwork \(\mathcal{L}(\rho_{i}^{R},\rho_{i}^{P})\) only consists of two adjacent super-simple nodes, thus the corresponding backbone node \(\widetilde{\mathcal{L}}(\rho_{i}^{R},\rho_{i}^{P})\) is an empty set. Next, from Definitions A.8 and A.9, we obtain the rest of the backbone nodes of \(\mathcal{P}_{\mathcal{S}}(\mathcal{R})\) and the appendage components of \(\mathcal{P}_{\mathcal{A}}(\mathcal{R})\) in the PRN \(\mathcal{R}\). Next, we clarify the relation between the arrows in both homeostasis pattern networks. **Lemma C.2**.: _Let \(\mathcal{G}\) be an input-output GRN and \(\mathcal{R}\) be the associated input-output PRN._ 1. _The arrows of the structural pattern network_ \(\mathcal{P}_{\mathcal{S}}(\mathcal{R})\) _are_ \[\rho_{1}^{R}\to\widetilde{\mathcal{L}}(\rho_{1}^{R},\rho_{1}^{P})\to\rho_{1}^{ P}\to\widetilde{\mathcal{L}}(\rho_{1}^{P},\rho_{2}^{R})\to\rho_{2}^{R}\to \dots\to\widetilde{\mathcal{L}}(\rho_{q+1}^{R},\rho_{q+1}^{P})\to\rho_{q+1}^{P}\] (C.7) 2. _Consider two distinct appendage components_ \(\widetilde{\mathcal{A}}_{1},\widetilde{\mathcal{A}}_{2}\) _in_ \(\mathcal{P}_{\mathcal{A}}(\mathcal{G})\)_. Suppose_ \(\widetilde{\mathcal{A}}_{1}\to\widetilde{\mathcal{A}}_{2}\in\mathcal{P}_{ \mathcal{A}}(\mathcal{G})\)_, then the corresponding arrows in_ \(\mathcal{P}_{\mathcal{A}}(\mathcal{R})\) _are:_ 1. _If_ \(\widetilde{\mathcal{A}}_{1}\in\{\widetilde{\mathcal{A}}_{i}^{\prime}\}_{i=1}^ {r}\) _and_ \(\widetilde{\mathcal{A}}_{2}\in\{\widetilde{\mathcal{A}}_{j}\}_{j=1}^{s}\)_, then_ \[\widetilde{\mathcal{A}}_{1}^{R}\to\widetilde{\mathcal{A}}_{1}^{P}\to\widetilde {\mathcal{A}}_{2}^{R}\] (C.8) 3. _If_ \(\widetilde{\mathcal{A}}_{2}\in\{\widetilde{\mathcal{A}}_{i}^{\prime}\}_{i=1}^ {r}\) _and_ \(\widetilde{\mathcal{A}}_{1}\in\{\widetilde{\mathcal{A}}_{j}\}_{j=1}^{s}\)_, then_ \[\widetilde{\mathcal{A}}_{1}^{R}\to\widetilde{\mathcal{A}}_{2}^{R}\to\widetilde {\mathcal{A}}_{2}^{P}\] (C.9) 4. _If_ \(\widetilde{\mathcal{A}}_{1},\widetilde{\mathcal{A}}_{2}\in\{\widetilde{ \mathcal{A}}_{i}^{\prime}\}_{i=1}^{r}\)_, then_ \[\widetilde{\mathcal{A}}_{1}^{R}\to\widetilde{\mathcal{A}}_{1}^{P}\to\widetilde {\mathcal{A}}_{2}^{R}\to\widetilde{\mathcal{A}}_{2}^{P}\] (C.10) 5. _If_ \(\widetilde{\mathcal{A}}_{1},\widetilde{\mathcal{A}}_{2}\in\{\widetilde{ \mathcal{A}}_{j}\}_{j=1}^{s}\)_, then_ \[\widehat{\mathcal{A}}_{1}^{\mathcal{R}}\to\widetilde{\mathcal{A}}_{2}^{\mathcal{ R}}\] (C.11) Proof.: (a) From Definition A.8 the arrows of \(\mathcal{P}_{\mathcal{S}}(\mathcal{G})\) satisfy \[\iota\to\widetilde{\mathcal{L}}_{1}\to\rho_{2}\to\widetilde{\mathcal{L}}_{2} \to\dots\to\widetilde{\mathcal{L}}_{q}\to o\] (C.12) where \(\iota=\rho_{1}\prec\rho_{2}\prec\dots\prec\rho_{q+1}=o\) are the super-simple nodes in \(\mathcal{G}\). Applying Lemma B.4, we get the super-simple nodes in \(\mathcal{R}\) under the following order, \[\rho_{1}^{R}\prec\rho_{1}^{P}\prec\dots\prec\rho_{q+1}^{R}\prec\rho_{q+1}^{P}\] (C.13) Again from Definition A.8, we get (C.7). (b) For (C.8), it follows from \(\widetilde{\mathcal{A}}_{1}\in\{\widetilde{\mathcal{A}}_{i}^{\prime}\}_{i=1}^ {r}\) and \(\widetilde{\mathcal{A}}_{2}\in\{\widetilde{\mathcal{A}}_{j}\}_{j=1}^{s}\) that we can assume \(\widetilde{\mathcal{A}}_{1}=\{\tau_{1}\}\) with \(\tau_{1}\) a single appendage node in \(\mathcal{G}\). Then we get the corresponding appendage components in \(\mathcal{P}_{\mathcal{A}}(\mathcal{R})\) as \[\widetilde{\mathcal{A}}_{1}^{R}=\{\tau_{1}^{R}\},\;\widetilde{\mathcal{A}}_{1 }^{P}=\{\tau_{1}^{P}\},\;\widetilde{\mathcal{A}}_{2}^{\mathcal{R}}\] From \(\widetilde{\mathcal{A}}_{1}\to\widetilde{\mathcal{A}}_{2}\) in \(\mathcal{P}_{\mathcal{A}}(\mathcal{G})\), there exists a node \(\sigma\in\widetilde{\mathcal{A}}_{2}\), such that \(\tau_{1}\to\sigma\). Using Lemma B.7, we have \(\{\sigma^{R},\sigma^{P}\}\in\widetilde{\mathcal{A}}_{2}^{\mathcal{R}}\). Under the gene coupling in the PRN, we get arrows \(\tau_{1}^{R}\to\tau_{1}^{P}\) and \(\tau_{1}^{P}\to\sigma^{R}\), which implies that \[\widetilde{\mathcal{A}}_{1}^{R}\to\widetilde{\mathcal{A}}_{1}^{P}\to\widetilde {\mathcal{A}}_{2}^{\mathcal{R}}\] For (C.9), since \(\widetilde{\mathcal{A}}_{2}\in\{\widetilde{\mathcal{A}}^{\prime}_{i}\}_{i=1}^{r}\) and \(\widetilde{\mathcal{A}}_{1}\in\{\widetilde{\mathcal{A}}_{j}\}_{j=1}^{s}\), we assume \(\widetilde{\mathcal{A}}_{2}=\{\tau_{2}\}\) with \(\tau_{2}\) is a single appendage node in \(\mathcal{G}\). Then we get the corresponding appendage components in \(\mathcal{P}_{\mathcal{A}}(\mathcal{R})\) as \[\widetilde{\mathcal{A}}_{1}^{\mathcal{R}},\ \widetilde{\mathcal{A}}_{2}^{R}=\{ \tau_{2}^{R}\},\ \widetilde{\mathcal{A}}_{2}^{P}=\{\tau_{2}^{P}\}\] Again from \(\widetilde{\mathcal{A}}_{1}\to\widetilde{\mathcal{A}}_{2}\) in \(\mathcal{P}_{\mathcal{A}}(\mathcal{G})\), there exists a node \(\sigma\in\widetilde{\mathcal{A}}_{1}\), such that \(\sigma\to\tau_{2}\). Using Lemma B.7, we have \(\{\sigma^{R},\sigma^{P}\}\in\widetilde{\mathcal{A}}_{1}^{\mathcal{R}}\). Under the gene coupling in the PRN, we get arrows \(\tau_{2}^{R}\to\tau_{2}^{P}\) and \(\sigma^{R}\to\sigma^{P}\to\tau_{2}^{R}\), and obtain \[\widetilde{\mathcal{A}}_{1}^{\mathcal{R}}\to\widetilde{\mathcal{A}}_{2}^{R}\to \widetilde{\mathcal{A}}_{2}^{P}\] We omit the proofs of (C.10) and (C.11), since they follow directly from the above. **Lemma C.3**.: _Let \(\mathcal{G}\) be an input-output GRN and \(\mathcal{R}\) be the associated input-output PRN. Consider \(\widetilde{\mathcal{A}}\in\mathcal{P}_{\mathcal{A}}(\mathcal{G})\) and \(\mathcal{V}\in\mathcal{P}_{\mathcal{S}}(\mathcal{G})\). Suppose \(\widetilde{\mathcal{A}}\to\mathcal{V}\in\mathcal{P}(\mathcal{G})\), then the corresponding arrows in \(\mathcal{P}(\mathcal{R})\) are_ 1. _If_ \(\widetilde{\mathcal{A}}\in\{\widetilde{\mathcal{A}}^{\prime}_{i}\}_{i=1}^{r}\) _and_ \(\mathcal{V}=\rho_{i}\)_, then_ \[\widetilde{\mathcal{A}}^{R}\to\widetilde{\mathcal{A}}^{P}\to\rho_{i}^{R}\] (C.14) 2. _If_ \(\widetilde{\mathcal{A}}\in\{\widetilde{\mathcal{A}}_{j}\}_{j=1}^{s}\) _and_ \(\mathcal{V}=\widetilde{\mathcal{L}}_{i}\)_, then_ \[\widetilde{\mathcal{A}}^{\mathcal{R}}\to\widetilde{\mathcal{L}}(\rho_{i}^{P}, \rho_{i+1}^{R})\] (C.15) 3. _If_ \(\widetilde{\mathcal{A}}\in\{\widetilde{\mathcal{A}}_{j}\}_{j=1}^{s}\) _and_ \(\mathcal{V}=\rho_{i}\)_, then_ \[\widetilde{\mathcal{A}}^{\mathcal{R}}\to\rho_{i}^{R}\] (C.16) 4. _If_ \(\widetilde{\mathcal{A}}\in\{\widetilde{\mathcal{A}}^{\prime}_{i}\}_{i=1}^{r}\) _and_ \(\mathcal{V}=\widetilde{\mathcal{L}}_{i}\)_, then_ \[\widetilde{\mathcal{A}}^{R}\to\widetilde{\mathcal{A}}^{P}\to\widetilde{\mathcal{ L}}(\rho_{i}^{P},\rho_{i+1}^{R})\] (C.17) Proof.: (a) From \(\widetilde{\mathcal{A}}\in\{\widetilde{\mathcal{A}}^{\prime}_{i}\}_{i=1}^{r}\) and \(\mathcal{V}=\{\rho_{i}\}\), we may assume that \(\widetilde{\mathcal{A}}=\{\tau_{1}\}\) with \(\tau_{1}\) is a single appendage node in \(\mathcal{G}\). Then we get the corresponding appendage components in \(\mathcal{P}_{\mathcal{A}}(\mathcal{R})\) as \[\widetilde{\mathcal{A}}^{R}=\{\tau_{1}^{R}\},\ \widetilde{\mathcal{A}}^{P}=\{ \tau_{1}^{P}\}\] and the nodes in \(\mathcal{P}_{\mathcal{S}}(\mathcal{R})\) as \[\rho_{i}^{R},\ \widetilde{\mathcal{L}}(\rho_{i}^{R},\rho_{i}^{P}),\ \rho_{i}^{P}\] From the arrow \(\tau_{1}\to\rho_{i}\), \(\rho_{i}\) is the most upstream simple node in \(\mathcal{G}\) that allows an appendage path from \(\tau_{1}\). Using appendage paths correspondence in Corollary B.5, we obtain that \(\rho_{i}^{R}\) is the most upstream simple node in \(\mathcal{R}\) which allows an appendage path from \(\widetilde{\mathcal{A}}^{P}\). Together with the gene coupling in the PRN, we obtain arrows \[\widetilde{\mathcal{A}}^{R}\to\widetilde{\mathcal{A}}^{P}\to\rho_{i}^{R}\] (b) From \(\widetilde{\mathcal{A}}\in\{\widetilde{\mathcal{A}}_{j}\}_{j=1}^{s}\) and \(\widetilde{\mathcal{L}}=\widetilde{\mathcal{L}}_{i}\), we get the corresponding appendage component \(\widetilde{\mathcal{A}}^{\mathcal{R}}\) in \(\mathcal{P}_{\mathcal{A}}(\mathcal{R})\) and the backbone node \(\widetilde{\mathcal{L}}(\rho_{i}^{P},\rho_{i+1}^{R})\) in \(\mathcal{P}_{\mathcal{S}}(\mathcal{R})\). Moreover, from the arrow \(\widetilde{\mathcal{A}}\to\widetilde{\mathcal{L}}_{i}\) in \(\mathcal{P}(\mathcal{G})\), there exists two gene nodes \(\tau\in\widetilde{\mathcal{A}}\) and \(\sigma\in\widetilde{\mathcal{L}}_{i}\), such that \[\tau\to\sigma\] where \(\sigma\) is the most upstream simple node which allows an appendage path from nodes in appendage component \(\widetilde{\mathcal{A}}\). Using Lemma B.7 and the gene coupling in PRN, we have \[\{\tau^{R},\tau^{P}\}\in\widetilde{\mathcal{A}}^{\mathcal{R}},\;\{\sigma^{R}, \sigma^{P}\}\in\widetilde{\mathcal{L}}(\rho_{i}^{P},\rho_{i+1}^{R}),\;\text{ and }\tau^{P}\to\sigma^{R}\] Again from Corollary B.5, we obtain that \(\sigma^{R}\) is the most upstream simple node in \(\mathcal{R}\) which allows an appendage path from \(\widetilde{\mathcal{A}}^{\mathcal{R}}\). Thus, we conclude that \[\widetilde{\mathcal{A}}^{\mathcal{R}}\to\widetilde{\mathcal{L}}(\rho_{i}^{P}, \rho_{i+1}^{R})\] The remaining items follow directly from the first two items, so we omit the proof. **Lemma C.4**.: _Let \(\mathcal{G}\) be an input-output GRN and \(\mathcal{R}\) be the associated input-output PRN. Consider \(\mathcal{V}\in\mathcal{P}_{\mathcal{S}}(\mathcal{G})\) and \(\widetilde{\mathcal{A}}\in\mathcal{P}_{\mathcal{A}}(\mathcal{G})\). Suppose \(\mathcal{V}\to\widetilde{\mathcal{A}}\in\mathcal{P}(\mathcal{G})\), then the corresponding arrows in \(\mathcal{P}(\mathcal{R})\) are_ 1. _If_ \(\mathcal{V}=\rho_{i}\) _and_ \(\widetilde{\mathcal{A}}\in\{\widetilde{\mathcal{A}}_{i}\}_{i=1}^{r}\)_, then_ \[\rho_{i}^{P}\to\widetilde{\mathcal{A}}^{R}\to\widetilde{\mathcal{A}}^{P}\] (C.18) 2. _If_ \(\mathcal{V}=\widetilde{\mathcal{L}}_{i}\) _and_ \(\widetilde{\mathcal{A}}\in\{\widetilde{\mathcal{A}}_{j}\}_{j=1}^{s}\)_, then_ \[\widetilde{\mathcal{L}}(\rho_{i}^{P},\rho_{i+1}^{R})\to\widetilde{\mathcal{A}} ^{\mathcal{R}}\] (C.19) 3. _If_ \(\mathcal{V}=\rho_{i}\) _and_ \(\widetilde{\mathcal{A}}\in\{\widetilde{\mathcal{A}}_{j}\}_{j=1}^{s}\)_, then_ \[\rho_{i}^{P}\to\widetilde{\mathcal{A}}^{\mathcal{R}}\] (C.20) 4. _If_ \(\mathcal{V}=\widetilde{\mathcal{L}}_{i}\) _and_ \(\widetilde{\mathcal{A}}\in\{\widetilde{\mathcal{A}}_{i}\}_{i=1}^{r}\)_, then_ \[\widetilde{\mathcal{L}}(\rho_{i}^{P},\rho_{i+1}^{R})\to\widetilde{\mathcal{A}} ^{R}\to\widetilde{\mathcal{A}}^{P}\] (C.21) Proof.: (a) Similarly as in Lemma C.3, we assume \(\widetilde{\mathcal{A}}=\{\tau_{1}\}\) with \(\tau_{1}\) is a single appendage node in \(\mathcal{G}\), and get the corresponding appendage components in \(\mathcal{P}_{\mathcal{A}}(\mathcal{R})\) as \[\widetilde{\mathcal{A}}^{R}=\{\tau_{1}^{R}\},\;\widetilde{\mathcal{A}}_{1}^{P} =\{\tau_{1}^{P}\}\] and the nodes in \(\mathcal{P}_{\mathcal{S}}(\mathcal{R})\) as \[\rho_{i}^{R},\;\widetilde{\mathcal{L}}(\rho_{i}^{R},\rho_{i}^{P}),\;\rho_{i}^{P}\] From the arrow \(\rho_{i}\to\tau_{1}\), \(\rho_{i}\) is the most downstream simple node in \(\mathcal{G}\) that allows an appendage path to \(\tau_{1}\). Using appendage paths correspondence in Corollary B.5, we obtain that \(\rho_{i}^{P}\) is the most downstream simple node in \(\mathcal{R}\) that allows an appendage path to \(\widetilde{\mathcal{A}}^{R}\). Together with the gene coupling in the PRN, we obtain arrows \[\rho_{i}^{P}\to\widetilde{\mathcal{A}}^{R}\to\widetilde{\mathcal{A}}^{P}\] (b) Follows from Lemma C.3, we get the corresponding appendage component \(\widetilde{\mathcal{A}}^{\mathcal{R}}\) in \(\mathcal{P}_{\mathcal{A}}(\mathcal{R})\) and the backbone node \(\widetilde{\mathcal{L}}(\rho_{i}^{P},\rho_{i+1}^{R})\) in \(\mathcal{P}_{\mathcal{S}}(\mathcal{R})\). Moreover, from the arrow \(\widetilde{\mathcal{L}}_{i}\to\widetilde{\mathcal{A}}\) in \(\mathcal{P}(\mathcal{G})\), there exists two gene nodes \(\sigma\in\widetilde{\mathcal{L}}_{i}\) and \(\tau\in\widetilde{\mathcal{A}}\), such that \[\sigma\to\tau\] where \(\sigma\) is the most downstream simple node in \(\mathcal{G}\) that allows an appendage path to nodes in appendage component \(\widetilde{\mathcal{A}}\). Using Lemma B.7 and the gene coupling in PRN, we have \[\{\sigma^{R},\sigma^{P}\}\in\widetilde{\mathcal{L}}(\rho_{i}^{P},\rho_{i+1}^{ R}),\ \{\tau^{R},\tau^{P}\}\in\widetilde{\mathcal{A}}^{\mathcal{R}},\text{ and }\sigma^{P}\to\tau^{R}\] Again from Corollary B.5, we obtain that \(\sigma^{P}\) is the most downstream simple node in \(\mathcal{R}\) that allows an appendage path to \(\widetilde{\mathcal{A}}^{\mathcal{R}}\). Thus, we conclude that \[\widetilde{\mathcal{L}}(\rho_{i}^{P},\rho_{i+1}^{R})\to\widetilde{\mathcal{A} }^{\mathcal{R}}\] Items (c) and (d) follow directly from the above. ### Homeostasis Inducing in GRN and PRN Before stating the second main result of the paper we need more terminology. **Definition C.5**.: Let \(\mathcal{G}\) be a GRN and \(\mathcal{R}\) the associated PRN. Let \(\mathcal{K}\) be a homeostasis subnetwork or a super-simple node of \(\mathcal{G}\). 1. We say that \(\mathcal{K}\) is _homeostasis inducing_ if \(h_{\mathcal{K}^{\mathcal{R}}}\equiv\det(B^{\mathcal{R}})=0\) at \(\mathcal{I}_{0}\), where \(\mathcal{K}^{\mathcal{R}}\) and \(B^{\mathcal{R}}\) are the corresponding homeostasis subnetwork and homeostasis block in the PRN. If \(\mathcal{K}=\{\tau\}\), for a single appendage node \(\tau\), then \(\mathcal{K}^{\mathcal{R}}=\{\tau^{R}\}\) and \(B^{\mathcal{R}}=[f_{\tau^{R},\tau^{R}}]\). 2. If \(\mathcal{K}=\{\rho\}\) is a super-simple node of \(\mathcal{G}\), let \(\rho\Rightarrow\nu\in\mathcal{G}\) denote that nodes \(\nu^{R},\nu^{P}\in\mathcal{R}\) are generically homeostatic whenever \(\rho\) is homeostasis inducing. Given a subset of nodes \(\mathcal{N}\subset\mathcal{G}\), \(\rho\Rightarrow\mathcal{N}\) if for every node \(\nu\in\mathcal{N}\), \(\rho\Rightarrow\nu\). 3. If \(\mathcal{K}\) is a homeostasis subnetwork of \(\mathcal{G}\), let \(\widetilde{\mathcal{K}}\) be the corresponding node in \(\mathcal{P}(\mathcal{G})\). Then, let \(\widetilde{\mathcal{K}}\Rightarrow\nu\in\mathcal{G}\) denote that nodes \(\nu^{R},\nu^{P}\in\mathcal{R}\) are generically homeostatic whenever \(\mathcal{K}\) is homeostasis inducing. Given a subset of nodes \(\mathcal{N}\subset\mathcal{G}\), \(\widetilde{\mathcal{K}}\Rightarrow\mathcal{N}\) if for every node \(\nu\in\mathcal{N}\), \(\widetilde{\mathcal{K}}\Rightarrow\nu\). \(\lozenge\) Now we are ready to use the homeostasis pattern network \(\mathcal{P}(\mathcal{G})\) to characterize the homeostasis patterns in both the GRN \(\mathcal{G}\) and the associated PRN \(\mathcal{R}\). **Theorem C.6**.: _Let \(\mathcal{G}\) be a GRN and \(\mathcal{P}(\mathcal{G})\) its homeostasis pattern network. Suppose that \(\widetilde{\mathcal{A}}\in\mathcal{P}_{\mathcal{A}}(\mathcal{G})\) is an appendage component, and \(\mathcal{V}_{s}\in\mathcal{P}_{\mathcal{S}}(\mathcal{G})\) is a backbone node. Then_ 1. (Structural Homeostasis \(\Rightarrow\) Structural Subnetwork) \(\mathcal{V}_{s}\) _induces precisely every node of_ \(\mathcal{P}_{\mathcal{S}}(\mathcal{G})\) _strictly downstream from_ \(\mathcal{V}_{s}\)_. Moreover, if_ \(\mathcal{V}_{s}=\{\rho\}\) _is a super-simple node, then_ \(\rho^{P}\in\mathcal{R}\) _is also homeostatic._ 2. (Structural Homeostasis \(\Rightarrow\) Appendage Subnetwork) _Let_ \(\mathcal{V}\rightarrow\widetilde{\mathcal{A}}\in\mathcal{P}(\mathcal{G})\) _with_ \(\mathcal{V}\in\mathcal{P}_{\mathcal{S}}(\mathcal{G})\)_._ \(\mathcal{V}_{s}\Rightarrow\widetilde{\mathcal{A}}\) _if and only if_ \(\mathcal{V}\) _is strictly downstream from_ \(\mathcal{V}_{s}\)_. If_ \(\mathcal{V}_{s}=\{\rho\}\) _is a super-simple node, then_ \(\rho\Rightarrow\widetilde{\mathcal{A}}\) _if and only if_ \(\mathcal{V}\) _is downstream from_ \(\mathcal{V}_{s}\)_._ 3. (Appendage Homeostasis \(\Rightarrow\) Structural Subnetworks) _Let_ \(\widetilde{\mathcal{A}}\rightarrow\mathcal{V}\in\mathcal{P}(\mathcal{G})\) _with_ \(\mathcal{V}\in\mathcal{P}_{\mathcal{S}}(\mathcal{G})\)_._ \(\widetilde{\mathcal{A}}\Rightarrow\mathcal{V}_{s}\) _if and only if_ \(\mathcal{V}\) _is strictly upstream from_ \(\mathcal{V}_{s}\)_. If_ \(\mathcal{V}_{s}=\{\rho\}\) _is a super-simple node, then_ \(\widetilde{\mathcal{A}}\Rightarrow\rho\) _if and only if_ \(\mathcal{V}\) _is upstream from_ \(\rho\)_._ 4. (Appendage Homeostasis \(\Rightarrow\) Appendage Subnetworks) _Let_ \(\widetilde{\mathcal{A}}_{1},\widetilde{\mathcal{A}}_{2}\in\mathcal{P}_{ \mathcal{A}}(\mathcal{G})\) _be distinct appendage components. Let_ \(\widetilde{\mathcal{A}}_{1}\rightarrow\mathcal{V}_{1},\mathcal{V}_{2} \rightarrow\widetilde{\mathcal{A}}_{2}\in\mathcal{P}(\mathcal{G})\) _with_ \(\mathcal{V}_{1},\mathcal{V}_{2}\in\mathcal{P}_{\mathcal{S}}(\mathcal{G})\)_._ \(\widetilde{\mathcal{A}}_{1}\Rightarrow\widetilde{\mathcal{A}}_{2}\) _if and only if_ \(\widetilde{\mathcal{A}}_{1}\) _is upstream from_ \(\widetilde{\mathcal{A}}_{2}\) _and every path from_ \(\widetilde{\mathcal{A}}_{1}\) _to_ \(\widetilde{\mathcal{A}}_{2}\) _in_ \(\mathcal{P}(\mathcal{G})\) _contains a super-simple node which is downstream from_ \(\mathcal{V}_{1}\) _and upstream from_ \(\mathcal{V}_{2}\)_. Moreover, if_ \(\widetilde{\mathcal{A}}_{1}=\{\tau\}\) _is a single appendage node, then P-null-degradation induces_ \(\tau^{R}\in\mathcal{R}\)_, but R-null-degradation does not induce_ \(\tau^{P}\in\mathcal{R}\)_._ Proof.: Here we apply Lemmas C.1 - C.4 to obtain the nodes and arrows in \(\mathcal{P}(\mathcal{R})\), including the arrows in \(\mathcal{P}_{\mathcal{S}}(\mathcal{R})\) or \(\mathcal{P}_{\mathcal{A}}(\mathcal{R})\), and the arrows between \(\mathcal{P}_{\mathcal{S}}(\mathcal{R})\) and \(\mathcal{P}_{\mathcal{A}}(\mathcal{R})\). Then we use the results from Theorem A.12. (a) Under assumptions on the GRN \(\mathcal{G}\), Lemmas C.1 and C.2 show that the nodes of the structural pattern network \(\mathcal{P}_{\mathcal{S}}(\mathcal{R})\) satisfy \[\rho_{1}^{R}\rightarrow\widetilde{\mathcal{L}}(\rho_{1}^{R},\rho_{1}^{P}) \rightarrow\rho_{1}^{P}\rightarrow\widetilde{\mathcal{L}}(\rho_{1}^{P},\rho_ {2}^{R})\rightarrow\rho_{2}^{R}\rightarrow\cdots\rightarrow\widetilde{\mathcal{ L}}(\rho_{q+1}^{R},\rho_{q+1}^{P})\rightarrow\rho_{q+1}^{P}\] Suppose \(\mathcal{V}_{s}\) is homeostasis inducing. Using Theorem A.12\((a)\), we derive that \(\mathcal{V}_{s}\) induces precisely every node of the structural pattern network strictly downstream from \(\mathcal{V}_{s}\). Moreover, if \(\mathcal{V}_{s}=\{\rho\}\) is a super-simple node it follows that homeostasis is induced by \(\mathcal{R}\)-Haldane \(\big{[}f_{\rho^{P},\rho^{R}}\big{]}\), thus \(\rho^{P}\in\mathcal{R}\) is also homeostatic. (b) Given \(\mathcal{V}\rightarrow\widetilde{\mathcal{A}}\in\mathcal{P}(\mathcal{G})\), Lemma C.4 shows the corresponding arrows from \(\mathcal{P}_{\mathcal{S}}(\mathcal{R})\) to \(\mathcal{P}_{\mathcal{A}}(\mathcal{R})\). Thus, if \(\mathcal{V}=\{\rho\}\) is a super-simple node, then \[\rho_{i}^{P}\rightarrow\widetilde{\mathcal{A}}^{R}\rightarrow\widetilde{ \mathcal{A}}^{P}\ \ \text{or}\ \ \rho_{i}^{P}\rightarrow\widetilde{\mathcal{A}}^{\mathcal{R}}\] If \(\mathcal{V}=\widetilde{\mathcal{L}}_{i}\) is a backbone node, then \[\widetilde{\mathcal{L}}(\rho_{i}^{P},\rho_{i+1}^{R})\rightarrow\widetilde{ \mathcal{A}}^{R}\rightarrow\widetilde{\mathcal{A}}^{P}\ \ \text{or}\ \ \widetilde{\mathcal{L}}(\rho_{i}^{P},\rho_{i+1}^{R}) \rightarrow\widetilde{\mathcal{A}}^{\mathcal{R}}\] Using Lemma C.2 and Theorem A.12\((b)\), we get \(\mathcal{V}_{s}\Rightarrow\widetilde{\mathcal{A}}\) if and only if \(\mathcal{V}\) is strictly downstream from \(\mathcal{V}_{s}\). Moreover, if \(\mathcal{V}_{s}=\{\rho\}\) is a super-simple node, \(\mathcal{V}\) needs only to be downstream from \(\rho\). (c) From Lemma C.3, given \(\widetilde{\mathcal{A}}\rightarrow\mathcal{V}\in\mathcal{P}(\mathcal{G})\), we obtain the corresponding arrows from \(\mathcal{P}_{\mathcal{A}}(\mathcal{R})\) to \(\mathcal{P}_{\mathcal{S}}(\mathcal{R})\). Hence, if \(\mathcal{V}=\{\rho\}\) is a super-simple node, then \[\widetilde{\mathcal{A}}^{R}\rightarrow\widetilde{\mathcal{A}}^{P}\to \rho_{i}^{R}\ \ \text{or}\ \ \widetilde{\mathcal{A}}^{\mathcal{R}}\to\rho_{i}^{R}\] If \(\mathcal{V}=\widetilde{\mathcal{L}}_{i}\) is a backbone node, then \[\widetilde{\mathcal{A}}^{R}\rightarrow\widetilde{\mathcal{A}}^{P}\to \widetilde{\mathcal{L}}(\rho_{i}^{P},\rho_{i+1}^{R})\ \ \text{or}\ \ \widetilde{\mathcal{A}}^{\mathcal{R}}\to \widetilde{\mathcal{L}}(\rho_{i}^{P},\rho_{i+1}^{R})\] Together with Lemma C.2 and Theorem A.12\((c)\), we derive \(\widetilde{\mathcal{A}}\Rightarrow\mathcal{V}_{s}\) if and only if \(\mathcal{V}\) is strictly upstream from \(\mathcal{V}_{s}\). Moreover, if \(\mathcal{V}_{s}=\{\rho\}\) is a super-simple node, we need only that \(\mathcal{V}\) is upstream from \(\rho\). (d) First, suppose \(\widetilde{\mathcal{A}}_{1},\widetilde{\mathcal{A}}_{2}\in\mathcal{P}_{ \mathcal{A}}(\mathcal{G})\) are distinct appendage components. From Lemmas C.2 - C.4, \(\widetilde{\mathcal{A}}_{1}\) being upstream from \(\widetilde{\mathcal{A}}_{2}\) in \(\mathcal{P}(\mathcal{G})\) is equivalent to the corresponding node of \(\widetilde{\mathcal{A}}_{1}\) being upstream from the corresponding node of \(\widetilde{\mathcal{A}}_{2}\) in \(\mathcal{P}(\mathcal{R})\). In addition, Lemma B.1 shows that every path from \(\widetilde{\mathcal{A}}_{1}\) to \(\widetilde{\mathcal{A}}_{2}\) in \(\mathcal{P}(\mathcal{G})\) containing a super-simple node is equivalent to every path from the corresponding node of \(\widetilde{\mathcal{A}}_{1}\) to the corresponding node of \(\widetilde{\mathcal{A}}_{2}\) in \(\mathcal{P}(\mathcal{R})\) containing a super-simple node in PRN \(\mathcal{R}\). Using Theorem A.12\((d)\), we conclude this part. Second, suppose \(\widetilde{\mathcal{A}}_{1}=\{\tau\}\) is a single appendage node and homeostasis inducing, that is, the homeostasis is induced by either R-null-degradation \(\left[f_{\tau^{R},\tau^{R}}\right]\) or P-null-degradation \(\left[f_{\tau^{P},\tau^{P}}\right]\). Since \(\tau^{R}\rightarrow\tau^{P}\in\mathcal{P}(\mathcal{R})\) is an appendage path, from Theorem A.12\((d)\) we get that R-null-degradation doesn't induce \(\tau^{P}\in\mathcal{R}\). On the other side, since \(\tau\) is a single appendage node, \(\tau^{P}\rightarrow\tau^{R}\notin\mathcal{P}(\mathcal{R})\). Theorem B.6 and Lemmas C.3 - C.4 imply that there exists at least one path from \(\tau^{P}\) to \(\tau^{R}\) and every such path must contain a super-simple node in \(\mathcal{P}(\mathcal{R})\), therefore P-null-degradation induces \(\tau^{R}\in\mathcal{R}\). The following Lemma shows the connection between the occurrence of homeostasis on the mRNA node and on the protein node associated to the same gene node in GRN. **Lemma C.7**.: _Let \(\mathcal{G}\) be an input-output GRN and \(\mathcal{R}\) be the associated input-output PRN. Suppose that infinitesimal homeostasis occurs in the PRN at \(\mathcal{I}_{0}\), induced by a homeostasis subnetwork \(\mathcal{K}\) of \(\mathcal{R}\). Then_ 1. _If_ \(\nu\) _is neither a super-simple nor single appendage node in_ \(\mathcal{G}\)_, with_ \(\nu^{R},\nu^{P}\notin\mathcal{K}\)_, then_ \(\nu^{R}\) _is homeostatic if and only if_ \(\nu^{P}\) _is homeostatic in_ \(\mathcal{R}\)_._ 2. _If_ \(\rho\) _is a super-simple node in_ \(\mathcal{G}\)_, with_ \(\rho^{R},\rho^{P}\notin\mathcal{K}\)_, then generically_ \(\rho^{R}\) _is homeostatic if and only if_ \(\rho^{P}\) _is homeostatic in_ \(\mathcal{R}\)_._ 3. _If_ \(\tau\) _is a single appendage node in_ \(\mathcal{G}\)_, with_ \(\tau^{R},\tau^{P}\notin\mathcal{K}\)_, then generically_ \(\tau^{R}\) _is homeostatic if and only if_ \(\tau^{P}\) _is homeostatic in_ \(\mathcal{R}\) Proof.: (a) Using Lemma C.1 and the fact that the GRN-node \(\nu\) is neither a super-simple nor single appendage node in \(\mathcal{G}\), we have that \(\nu^{R},\nu^{P}\in\mathcal{R}\) belong to the same homeostasis subnetwork of \(\mathcal{R}\), distinct from \(\mathcal{K}\). Thus, we conclude the result from Theorem C.6. (b) By Lemma C.1 and Lemma C.2 and the fact that \(\rho\) is a super-simple node in \(\mathcal{G}\), we have \[\rho^{R}\to\widetilde{\mathcal{L}}(\rho^{R},\rho^{P})\to\rho^{P}\] (C.22) where \(\rho^{R},\widetilde{\mathcal{L}}(\rho^{R},\rho^{P}),\rho^{P}\in\mathcal{P}_{ \mathcal{S}}(\mathcal{R})\) and \(\widetilde{\mathcal{L}}(\rho^{R},\rho^{P})=\emptyset\). Assume that \(\rho^{R}\) is homeostatic in the PRN. Then, we have the steady-state equation \(f_{\rho^{P}}(\rho^{R},\rho^{P})=0\) and implicit differentiation gives \[\frac{d}{d\mathcal{I}}f_{\rho^{P}}\big{|}_{\mathcal{I}=\mathcal{I}_{0}}=f_{ \rho^{P},\rho^{R}}(\rho^{R})^{\prime}+f_{\rho^{P},\rho^{P}}(\rho^{P})^{\prime}=0\] (C.23) Generically, we can assume \(f_{\rho^{P},\rho^{P}}\neq 0\) (i.e. no null degradation, since \(\rho\) is not appendage), and get \[(\rho^{P})^{\prime}=0\] This shows that \(\rho^{P}\) is homeostatic in the PRN. On the other hand, assume that \(\rho^{P}\) is homeostatic in the PRN. Again, (C.23) holds. Generically, we can assume \(f_{\rho^{P},\rho^{R}}\neq 0\) (i.e. no Haldane homeostasis, since \(\rho^{R},\rho^{P}\notin\mathcal{K}\)), and obtain \[(\rho^{R})^{\prime}=0\] This shows that \(\rho^{R}\) is homeostatic in the PRN. We skip the proof of item (c), since it is analogous to the proof of item (b). **Remarks C.8**.: In Lemma C.7, items (b) and (c) characterize homeostatic nodes that are not contained in the homeostasis subnetwork \(\mathcal{K}\) inducing homeostasis. Now, it is easy to see what happens when they do belong to \(\mathcal{K}\). 1. Suppose that \(\rho\) is a super-simple node in \(\mathcal{G}\). On one hand, if \(\rho^{R}\in\mathcal{K}\) then \(\mathcal{R}\)-Haldane homeostasis occurs at \(\mathcal{I}_{0}\) (i.e. \(f_{\rho^{P},\rho^{R}}=0\)). In this case, \(\rho^{R}\) is not homeostatic but \(\rho^{P}\) is homeostatic. On the other hand, if \(\rho^{P}\in\mathcal{K}\) then \(\rho^{R}\) and \(\rho^{P}\) fail to be simultaneously homeostatic. In both cases, \(\rho\) is not GRN-homeostatic. 2. Suppose that \(\tau\) is a single appendage node in \(\mathcal{G}\). On one hand, if \(\tau^{P}\in\mathcal{K}\) then \(P\)-null-degradation occurs at \(\mathcal{I}_{0}\) (i.e. \(f_{\tau^{P},\tau^{P}}=0\)). In this case, \(\tau^{P}\) is not homeostatic but \(\tau^{R}\) is homeostatic. On the other hand, if \(\tau^{R}\in\mathcal{K}\) (\(R\)-null-degradation) then \(\tau^{R}\) and \(\tau^{P}\) fail to be simultaneously homeostatic. In both cases, \(\tau\) is not GRN-homeostatic. \(\diamondsuit\)
2309.11075
Large-scale Kinetic Simulations of Colliding Plasmas within a Hohlraum of Indirect Drive Inertial Confinement Fusions
The National Ignition Facility has recently achieved successful burning plasma and ignition using the inertial confinement fusion (ICF) approach. However, there are still many fundamental physics phenomena that are not well understood, including the kinetic processes in the hohlraum. Shan et al. [Phys. Rev. Lett, 120, 195001, 2018] utilized the energy spectra of neutrons to investigate the kinetic colliding plasma in a hohlraum of indirect drive ICF. However, due to the typical large spatial-temporal scales, this experiment could not be well simulated by using available codes at that time. Utilizing our advanced high-order implicit PIC code, LAPINS, we were able to successfully reproduce the experiment on a large scale of both spatial and temporal dimensions, in which the original computational scale was increased by approximately 7 to 8 orders of magnitude. When gold plasmas expand into deuterium plasmas, a kinetic shock is generated and propagates within deuterium plasmas. Simulations allow us to observe the entire progression of a strong shock wave, including its initial formation and steady propagation. Although both electrons and gold ions are collisional (on a small scale compared to the shock wave), deuterium ions seem to be collisionless. This is because a quasi-monoenergetic spectrum of deuterium ions can be generated by reflecting ions from the shock front, which then leads to the production of neutrons with unusual broadening due to beam-target nuclear reactions. This work displays an unprecedented kinetic analysis of an existing experiment, shedding light on the mechanisms behind shock wave formation. It also serves as a reference for benchmark simulations of upcoming new simulation codes and may be relevant for future research on mixtures and entropy increments at plasma interfaces.
Tianyi Liang, Dong Wu, Xiaochuan Ning, Lianqiang Shan, Zongqiang Yuan, Hongbo Cai, Zhengmao Sheng, Xiantu He
2023-09-20T05:40:15Z
http://arxiv.org/abs/2309.11075v1
Large-scale Kinetic Simulations of Colliding Plasmas within a Hohlraum of Indirect Drive Inertial Confinement Fusions ###### Abstract The National Ignition Facility has recently achieved successful burning plasma and ignition using the inertial confinement fusion (ICF) approach. However, there are still many fundamental physics phenomena that are not well understood, including the kinetic processes in the hohlraum. Shan et al. [Phys. Rev. Lett, 120, 195001, 2018] utilized the energy spectra of neutrons to investigate the kinetic colliding plasma in a hohlraum of indirect drive ICF. However, due to the typical large spatial-temporal scales, this experiment could not be well simulated by using available codes at that time. Utilizing our advanced high-order implicit PIC code, LAPINS, we were able to successfully reproduce the experiment on a large scale of both spatial and temporal dimensions, in which the original computational scale was increased by approximately 7 to 8 orders of magnitude. When gold plasmas expand into deuterium plasmas, a kinetic shock is generated and propagates within deuterium plasmas. Simulations allow us to observe the entire progression of a strong shock wave, including its initial formation and steady propagation. Although both electrons and gold ions are collisional (on a small scale compared to the shock wave), deuterium ions seem to be collisionless. This is because a quasi-monoenergetic spectrum of deuterium ions can be generated by reflecting ions from the shock front, which then leads to the production of neutrons with unusual broadening due to beam-target nuclear reactions. This work displays an unprecedented kinetic analysis of an existing experiment, shedding light on the mechanisms behind shock wave formation. It also serves as a reference for benchmark simulations of upcoming new simulation codes and may be relevant for future research on mixtures and entropy increments at plasma interfaces. ## I Introduction The recent experiments carried out at the National Ignition Facility (NIF) [1; 2; 3; 4; 5; 6] have successfully validated the feasibility of controllable inertial confinement fusion (ICF). This achievement is considered a noteworthy milestone in the endeavor to attain inexhaustible and environmentally friendly sources of energy. This remarkable achievement can be attributed to the collaborative efforts involving various technological advancements and physical insights. Some notable contributions include the utilization of high-density-carbon (HDC) capsules in low-gas-fill hohlraums [7] and the implementation of the "BigFoot" (BF) [8] scheme. Moreover, a comprehensive set of scaling laws [9] and evaluation metrics [10] based on the analysis of experimental data have been consolidated, serving as crucial tools in comprehending the underlying physical mechanisms of ICF and providing guidance for the development of pertinent experimental designs. However, from a theoretical standpoint, there remain numerous foundational physics concepts that are not yet comprehensively understood, particularly those that are closely tied to kinetic and non-equilibrium processes. In vacuum or near-vacuum hohlraums [11], as depicted in Fig. 1 (a), the expansion of high-Z plasma from the inner wall and its collision with the blow-off from the capsule or the low-density-fill gas [12; 13], can lead to the dominance of kinetic effects. This is due to the ion-ion mean free paths being larger than the size of the interaction regions. The existence of these kinetic effects has been substantiated through experiments and simulations [14; 15; 16], and their influence on the implosion process of ICF has been partially elucidated [17]. To attain high-gain laser fusion, it is imperative to thoroughly comprehend and regulate any notable physics phenomena that may arise during ICF processes. Therefore, the study of kinetic effects and their impact on energy depositions, implosion symmetries, and other inherent plasma properties holds significant importance. Recently, in an experiment conducted by Shan et al. [14], a kinetic colliding plasma was observed within a hohlraum of indirect drive ICF. This observation was made by measuring the energy spectra of neutrons. The width of the spectra in this experiment is unusually large. To explain this, they conducted collisionless particle-in-cell (PIC) simulations to investigate the interactions between the gold plasma and the low-density-fill gas. It was found that the kinetic shock wave reflects upstream ions and results in beam-target fusion. This phenomenon could, in turn, explain the broadening of the neutron energy spectrum. However, the PIC simulation is somewhat ideal due to the absence of collisional effects and nuclear reactions. Moreover, the scale of the PIC simulation was only several micrometers, much smaller than the scale of real physics, which is hundreds of micrometers in the experiment. As the conventional PIC method is constrained by computational resources, making it unsuitable for large-scale simulations, scholars have put forward hybrid approaches that combine the advantages of both fluid and kinetic PIC methods [18, 19, 20]. While in the context of the fluid component, the authors of this study still rely on empirical coefficients, such as the flux limiter and electron-ion coupling coefficients, for their treatment. Additionally, the authors make use of various approximations in different forms. The utilization of the hybrid approach assumes that the electrons must adhere to the condition of fluid approximation, wherein the mean free paths of the electrons should be smaller than the spatial resolution of the simulation. But in hohlraums or other laboratory astrophysics experiments, where the electron density ranges \(10^{18}\,\mathrm{cm}^{-3}\) to \(10^{20}\,\mathrm{cm}^{-3}\), the behavior of electrons cannot be accurately described using fluid dynamics. As a result, hybrid approaches may not be applicable in these scenarios. To handle a wide range of time scales, space scales, and densities, we developed a high-order implicit multidimensional PIC method. With an appropriate arrangement of space and time, the proposed method can significantly minimize numerical errors. Additionally, the utilization of a higher-order interpolation method can effectively reduce numerical noises. This reduction in numerical noise is particularly useful for simulating large-scale kinetic processes using the PIC method. In this paper, the experiment conducted by Shan et al. [14] was simulated and analyzed using the high-order implicit PIC code LAPINS. The simulation performed was analyzed in a large-scale set-up that closely resembles the real experiment. In our simulations, not only the density distributions and ion reflections associated with the shock wave were figured out, but also the neutron spectra that arise from the nuclear reactions between the reflected ions and the capsule were self-consistently generated. Considering the collision effects, the expansion of gold plasma manifests as a central rarefaction wave, instead of an isothermal one simulated in the reference [14]. When the gold plasma expands into the deuterium plasma, it initiates the generation of a kinetic shock wave, which subsequently propagates within the deuterium plasma. The shock wave under consideration is situated in the intermediate region between collisionless and collisional regimes. The primary mechanisms accountable for dissipation in this particular context involve both the reflections of upstreaming deuterium ions and collisional processes. Deuterium ions exhibiting quasi-monoenergetic spectra can be produced through the collisionless electrostatic shock wave mechanism, resulting in the generation of neutrons with significantly broadened energy distribution due to beam-target nuclear reactions. The paper is organized as follows. A brief introduction to the high-order implicit PIC method and the pairwise nuclear reaction algorithm of the LAPINS code is described in Section II. In Section III, the simulation results are presented and analyzed in detail. Finally, the discussion and conclusion are displayed in Section IV. Figure 1: Subgraph (a) depicts the schematic diagram of the colliding plasmas within a hohlraum. It illustrates various phenomena such as collisional interaction, shock formation, collisionless ion reflection, and nuclear reactions. Subgraph (b) represents the simulation setup. Simulation methods The LAPINS code has undergone significant advancements over the years, including the incorporation of collision effects [21], the consideration of quantum degeneracy [22], and the adoption of a hybrid-kinetic approach [23]. In this section, a brief introduction to the module of nuclear reactions [24], and the high-order implicit PIC method [25] utilized in our simulation. ### Implicit PIC model In the LAPINS code, a high-order implicit multidimensional PIC method has been devised to effectively address the complexities of astrophysics and dense plasmas. The spatial-temporal arrangement is established by employing Yee's algorithm in conjunction with a leapfrog algorithm to simulate the propagation of electromagnetic fields and the advancement of particles. Specifically, the charge density is positioned at the Yee cell centers, while the electric fields and current density are staggered upwards to the cell faces. Additionally, the magnetic fields are located at the cell edges. This arrangement considers the discretization of the Faraday and Ampere equations. It can be demonstrated that the equation \(\nabla\times B=0\) is consistently fulfilled when it is initially valid. Our field solver algorithm efficiently tackles the numerical instabilities commonly encountered in explicit PIC methods when using relaxed time steps and grid resolution. The algorithm used in this study effectively addresses the problem of numerical cooling, a common issue in standard implicit PIC methods, by employing a pseudo-electric-field approach. The violation of Gauss's law in the cell at time \(t^{n+1}\) is denoted as \(F^{n+1}=\nabla\cdot E^{n+1}-\rho^{n+1}\). It is used to calculate the additional pseudo electric field \(E_{\mathrm{psd}}^{n+1}=d\Delta t\nabla F^{n+1}\), with the introduction of a customized dimensionless number \(d\). Consequently, in each time increment, the overall electric field \(E^{n+1}\) exerted on particles needs to be updated by adding the pseudo electric field, which can be expressed as \(E^{n+1}=E^{n+1}+E_{\mathrm{psd}}^{n+1}\). The particle pusher algorithm is a combination of the standard Boris particle pusher and the Newton-Krylov iteration method. This algorithm has the potential to greatly enhance precision accuracy, surpassing the standard Boris particle pusher by several orders of magnitude. Additionally, it offers a substantial reduction in iteration time compared to the pure Newton-Krylov method. For further information, readers are encouraged to consult our recent publication [25]. ### Model of nuclear reaction We have successfully developed a model for pairwise nuclear reactions involving weighted particles at relativistic energies [24]. It is worth mentioning that the particle-pairing routine used for fusion reactions is identical to the one employed for binary collisions, known as the Takizuka-Abe algorithm. In every spatial cell, pairs of particles engaging in nuclear reactions are randomly selected. Energy and momentum exchanges are calculated for every pair of particles and performed in the center-of-momentum (CM) frame. Our model is additionally applicable in the context of the relativistic regime. In the CM frame, the probability of the reaction, denoted as \(P_{\mathrm{ab}}\), is determined by the calculation of \[P_{\mathrm{ab}}=n_{\mathrm{min}}\sigma_{\mathrm{vrel,CM}}\gamma_{\mathrm{CM}} \Delta\mathrm{t}, \tag{1}\] where \(n_{\mathrm{min}}\) is the minimum density between particle species \(a\) and \(b\), \(\sigma_{\mathrm{ab}}\) denotes the cross-section of nuclear reaction, and \(\Delta t\) is the simulation time step, which is multiplied by a factor of \(\gamma_{\mathrm{CM}}\) when taken into account in the CM frame. The nuclear reaction cross sections used in the program are obtained from the International Atomic Energy Agency (IAEA) and are stored within the code as Legendre polynomial coefficients. To calculate these values, the relative velocity between the two particles in the CM frame is required and provided by \[\mathrm{v}_{\mathrm{rel,CM}}=\left|\frac{\mathbf{v}_{\mathrm{a,CM}}-\mathbf{v }_{\mathrm{b,CM}}}{1-\mathbf{v}_{\mathrm{a,CM}}\cdot\mathbf{v}_{\mathrm{b,CM }}}\right|. \tag{2}\] Therefore, the yield of nuclear reaction for each pair of macro-particles is determined as \[Y_{\mathrm{ab}}=w_{\mathrm{min}}P_{\mathrm{ab}}, \tag{3}\] where \(w_{\mathrm{min}}\) represents the minimum weight between macro-particles \(a\) and \(b\). To enhance the calculation accuracy, we have introduced a variable parameter \(F_{\mathrm{multi}}\) to increase the probability of the reaction while decreasing the weight of the products, denoted as \(w_{\mathrm{min}}\to w_{\mathrm{min}}/F_{\mathrm{multi}}\) and \(P_{\mathrm{ab}}\to P_{\mathrm{ab}}F_{\mathrm{multi}}\), while keeping \(Y_{\mathrm{ab}}\) unaltered. For details of the nuclear fusion scheme, one can refer to our recent paper and other relevant publications [24]. ## III Simulation results To conduct a more in-depth analysis of the experimental data [14], the LAPINS code is employed, utilizing parameters that closely resemble those of the experimental setup. This simulation involves the collision interaction between the ablated gold plasma and deuterium gas, as depicted in Fig. 1 (b). The simulation box has a length of \(L=3\) mm, which is resolved by 3000 cells. Each cell contains 1000 macro-particles for electrons and 400 macro-particles for various ions. Simulation times in this study are on the order of nanoseconds, suggesting that the scale of the simulation is approximately 7 to 8 orders of magnitude larger than that of traditional PIC codes, which typically operate on a temporal scale of picoseconds and a spatial scale of micrometers. The gold plasma is located on the left side of the simulation box. The ionization degree of gold ions is fixed at \(Z=50\), while the initial ion temperature is \(T_{\rm Au}=100\:\rm eV\). The initial electron temperature of the gold plasma is \(T_{e1}=3000\:\rm eV\), while the electron density is \(n_{e1}=Zn_{\rm Au}=1.0\times 10^{21}\:\rm cm^{-3}\). Full-ionized deuterium plasma is situated on the right side of the simulation box, with a density of \(n_{e0}=n_{\rm D}=2.0\times 10^{19}\:\rm cm^{-3}\) and temperature of \(T_{\rm D}=T_{\rm e0}=100\:\rm eV\). Electrostatic shock structure is generated from the expansion of gold plasma into deuterium plasma. The gold plasma is commonly recognized as the downstream region located behind the shock, whereas the deuterium plasma is considered the upstream region situated ahead of the shock. An absorbing layer is designated to function as the capsule characterized by a high density of \(2.0\times 10^{21}\:\rm cm^{-3}\) and a low temperature of \(1\:\rm eV\). The speed of sound in deuterium plasma is \(c_{s}=\sqrt{kT_{\rm e0}/m_{\rm D}}=126.4\:\rm km/s\). It is important to note that there is no initial drift velocity for any species in the plasmas. In the initial stages of the expansion, the hotter electrons move faster than the gold ions. This differential motion generates an electrostatic field characterized by charge separation, commonly referred to as the sheath electrostatic field. This electric field plays an important role in both the expansion of the gold plasma and the reflection of the deuterium plasma. Plasma expansion is a phenomenon wherein the internal energy of plasma is converted into kinetic energy. When considering the collisional effect, the expansion behaves as a centered rarefaction wave [26] instead of an isothermal one, as the deuterium plasma acts as a piston. During the process of expansion, the velocity of the gold plasma exhibits a linear increase until it reaches its maximum value. It is illustrated in Fig. 2 (d)-(f), which depict the phase space distributions of gold ions and deuterium ions at various time intervals. The collision frequency between deuterium and gold ions, \(\nu_{\rm D,Au}=4/3\pi^{1/2}(Z_{\rm Au}Z_{\rm D}/4\pi\varepsilon_{0})^{2}n_{\rm D }m_{\rm D}^{-1/2}T_{\rm D}^{-3/2}\ln\Lambda\), is estimated to be in the range of \(10^{12}\sim 10^{13}/\rm s\), and the mean free path is approximately \(l_{\rm D,Au}\approx 0.15\:\mu\rm m\). In the context of the penetration region, the presence of intense collisions between deuterium ions and gold ions hinders the entry of deuterium ions into the downstream region. Therefore, during the initial formation of the shock wave, the shock strength denoted as \(\delta=n_{\rm D,max}/n_{\rm D}\) and referred to as the density compression ratio, rapidly reaches a value of 6. Subsequently, after approximately Figure 2: The subgraphs (a)-(c) depicted in the left part illustrate the temporal evolution of the density profiles of electrons, deuterium ions (D\({}^{+}\)), and gold ions (Au\({}^{50+}\)), respectively. The black dashed line depicted the trajectory of the shock front, characterized by a velocity of \(V_{sh}=662\:\rm km/s\), while the dotted line represents the trajectory with double the velocity, \(2V_{sh}\). The subgraphs (d)-(f) on the right side depict the phase space distributions of gold ions and deuterium ions at different times. The white-black colormap and colorful colormap are used to represent the phase density of gold ions and deuterium ions, respectively. 1 ns of simulation time, it gradually decreases to 3.5 and remains stable. The decrease can be attributed to the expansion of the dense deuterium plasma in the shock region to the upstream deuterium plasma. In the stable stages, the shock strength \(\delta\) remains below the upper limit determined by the Rankine-Hugoniot equations, indicating that \(\delta\) is less than 4. One of the notable kinetic properties exhibited by electrostatic shock is the reflection of upstream ions. The electric potential difference, \(\Delta\phi\), can reflect any ions with kinetic energies lower than it, as expressed by \(m_{i}v_{\mathrm{i}}^{2}/2<e\Delta\phi\)[27; 28]. In our simulations, it is evident that the reflection of deuterium ions can be categorized into two distinct phases. Firstly, there is a pronounced acceleration of the sheath electric field at the initial stage, which contributes to the high-energy portion of the reflected ions. Secondly, there is a consistent reflection of the electrostatic shock wave, with a speed that can reach up to twice the speed of the shock wave (\(2V_{\mathrm{sh}}\)) in the laboratory frame. The velocity of the shock, denoted as \(V_{\mathrm{sh}}\) was determined based on the temporal evolution in the density distribution of electrons and ions depicted in Fig. 2 (a)-(c). The measured velocity of the shock wave is \(V_{\mathrm{sh}}=662\,\mathrm{km/s}\) and the Mach number is \(M=V_{\mathrm{sh}}/c_{\mathrm{s}}=5.25\). In the phase space depicted in Fig. 2 (d)-(f), the reflected deuterium ions are accelerated to velocities exceeding \(2000\,\,\mathrm{km/s}\), with a steady velocity of approximately \(1300\,\,\mathrm{km/s}\), which is nearly twice \(V_{\mathrm{sh}}\). In the theoretical framework of the KdV-B equation, the phenomenon of steady reflection can be understood as a form of dissipation. The collisional effect does not provide sufficient dissipation due to the low density of the deuterium plasma. As a result, reflection compensates for dissipation, which plays an important role in forming the shock as well as dispersion and nonlinearity. When dissipation, dispersion, and nonlinearity are balanced, the shock wave can propagate steadily, otherwise, it will collapse or disappear [28]. Diagnosing the evolution of the total energy spectra of deuterium ions over time, as depicted in Fig. 3 (a), it is evident that the energetic ions generated through reflection exhibit substantial deviations from the initial narrow Maxwell distribution. A time-integral diagnostic plane is established at the location of \(z=1200\,\,\mu\mathrm{m}\) to record the energy spectra of reflected ions, which can arrive at the absorbing layer, as shown in Fig. 3 (b). Deuterium ions exhibiting quasi-monoenergetic spectra are observed, reaching a maximum energy of approximately \(50\,\,\mathrm{keV}\), with a full width at half maximum (FWHM) \(\Delta E_{\mathrm{D}}=30\,\mathrm{keV}\). The spectra undergo broadening when the shock wave passes through the diagnostic plane. When these reflected ions are deposited inside the capsule, significant low-mode asymmetry may occur in the implosion process [29], as well as the beam-target nuclear reactions. To simulate the deposition of reflected ions inside the capsule, an absorbing layer located at \(2000\,\,\mu\mathrm{m}\) is set to capture the energetic ions accelerated by the shock wave. As the probability of the nuclear reaction \(\mathrm{D(D,n)^{3}He}\) is very small for reflected ions with a speed of several hundred kilometers per second and a temperature of \(1\,\,\mathrm{eV}\) in the plasma layer, we have set the parameter \(F_{\mathrm{multi}}\) as \(10000\) using more than 1 million deuterium macro-particles. The statistical and fitting data of the neutron spectra are shown in Fig. 4 (b). The peak energy is \(2.45\,\,\mathrm{MeV}\) with a FWHM of \(\Delta E_{n}=0.3\,\,\mathrm{MeV}\). The relationship between FWHM and \(T_{\mathrm{D}}\) in thermonuclear conditions can be described as \(\Delta E_{n}=2\sqrt{\ln 2\langle E_{n}\rangle T_{\mathrm{D}}}=82.5T_{ \mathrm{D}}^{1/2}\)[30], where \(T_{\mathrm{D}}\) is in keV and \(\langle E_{n}\rangle=2.45\,\,\mathrm{MeV}\). If we assume that this unusual broadening is due to thermonuclear fusion, the estimated temperature (\(T_{\mathrm{D}}\)) is approximately \(13\,\,\mathrm{keV}\), which is significantly higher than the temperature achievable for the high-density absorbing layer (\(1\,\mathrm{eV}\)). By utilizing double-differential cross-sections, \(\mathrm{d^{2}\sigma/\mathrm{dEd}\Omega}\), the energies of emitted neutrons can be observed as a function of scattering angle \(\theta\) in the CM frame, denoted as, \[E_{n}=\frac{E_{\mathrm{D}}m_{n}}{4m_{\mathrm{D}}}\left[\frac{2m_ {\mathrm{D}}-m_{n}}{m_{n}}\left(2\frac{Q}{E_{\mathrm{D}}}+1\right)\right.\] \[\left.+2\sqrt{\frac{2m_{\mathrm{D}}-m_{n}}{m_{n}}\left(2\frac{Q} {E_{\mathrm{D}}}+1\right)\cos\theta+1}\right], \tag{4}\] where \(E_{\mathrm{D}}\) is the incident energy of deuterium ions, and \(Q=3.27\,\,\mathrm{MeV}\) is the released energy of \(\mathrm{D(D,n)^{3}He}\) nuclear reaction. The numerical results obtained from Eq. 4 for various incident energies of deuterium ions are shown in Fig. 4 (c). When using the value of our simulations, e.g. \(E_{\mathrm{D}}=50\,\,\mathrm{keV}\), we find that \(\Delta E_{n,\theta}=E_{n}(\theta=0)-E_{n}(\theta=\pi)=497\,\,\mathrm{keV}\), which is well consistent with the spectra, as shown in Fig. 4 (b). The spectra exhibit natural broadening due to beam-target reactions when the neutrons are counted by integrating the full solid angle. This approach was used in our simulation. As illustrated in Fig. 4 (a), during the experiment, the Figure 3: The temporal evolution of the energy spectra of deuterium ions. The total spatial energy spectra are presented in subgraph (a), while the time-integrated energy spectra obtained at the location of \(z=1200\,\,\mu\mathrm{m}\) are displayed in subgraph (b). neutron spectrometer was arranged in a narrow solid angle configuration to capture neutrons with varying scattering angles \(\theta\) emitted from the entire hohlraum. This experimental setup is mathematically equivalent to our one-dimensional simulations, where all neutrons are accounted for by integrating over the full solid angle. When comparing the FWHM of the neutron spectra in our simulations with that of the experiment data (represented by light blue circles in Fig. 4 (b)), a high level of consistency is observed. ## IV Discussion and Conclusion In a hohlraum experiment, the determination of the state of the gold plasma poses a significant challenge, as it directly influences the properties of the shock wave and the behavior of the reflected ions. To elucidate the impacts, we manipulated the electron temperature \(T_{e1}\) of the gold plasma. The phase space distributions are shown in Fig. 5 (a)-(c). As \(T_{e1}\) decreases, the ions accelerated by the sheath electrostatic field have lower energy and constitute a smaller portion of the reflected ions. The velocity of the shock wave, \(V_{\mathrm{sh}}\) also decreases. According to the collisionless laminar shock wave kinetic theory [31], a shock wave can arise from the collision between two plasma slabs with different temperatures and densities. Ion reflection occurs when the electrostatic potential surpasses the kinetic energy of upstream ions. There exists a critical Mach numbers \(M_{\mathrm{cr}}\) for reflecting ions, that can be calculated numerically by [32], \[M_{\mathrm{cr}}^{2}=\frac{1}{1+\Gamma}\left[\frac{\sqrt{2}M_{ \mathrm{cr}}}{\sqrt{\pi}}+e^{\frac{M_{\mathrm{cr}}^{2}}{2}}\mathrm{Erfc}\frac {M_{\mathrm{cr}}}{\sqrt{2}}-1\right.+\Gamma\Theta\times\] \[\left.\left(\frac{\sqrt{2}M_{\mathrm{cr}}}{\sqrt{\pi\Theta}}+e^{ \frac{M_{\mathrm{cr}}^{2}}{2\Theta}}\mathrm{Erfc}\frac{M_{\mathrm{cr}}}{ \sqrt{2\Theta}}+\frac{4M_{\mathrm{cr}}^{3}}{3\sqrt{2\pi\Theta^{3}}}-1\right) \right], \tag{5}\] where \(\Gamma=n_{e1}/n_{e0}\), and \(\Theta=T_{e1}/T_{e0}\), are the ratios of upstream and downstream electron densities and temperatures. In the simulation, the \(\Gamma=51\) is fixed. When \(M^{2}\gg 1\) and \(M^{2}\gg\sqrt{\Theta}\), there is an upper limit to the Mach number [31], \[M_{\mathrm{max}}\simeq\frac{3(\Gamma+1)}{\Gamma}\sqrt{\frac{\pi\Theta}{8}}. \tag{6}\] The Mach numbers estimated in our simulations via different \(\Theta\) are presented in Fig. 5 (d), which is well located between two lines. The Mach number can be mathematically represented as a function of \(\Theta\) and \(\Gamma\). Consequently, the characteristics of the shock wave are primarily determined by the distinct properties of electrons in the upstream and downstream regions. The corresponding neutron spectra are shown in Fig. 4 (b). As the energy of the reflected ions decreases, the neutron energy spectra become narrower, aligning with the theoretical prediction of the beam-target reaction. The FWHM of the neutron spectra obtained from the experiments can serve as an indicator for estimating the electron temperature of the gold plasmas. To summarize, Shan et al. [14] conducted a study on a kinetic colliding plasma within a hohlraum of indirect drive ICF by measuring the energy spectra of neutrons. However, due to the typical large spatial-temporal scales involved, this experiment could not be accurately simulated using available codes at that time. The experiment was successfully replicated at large spatial and temporal scales using our high-order implicit PIC code, LAPINS. The computational scale of our simulations is approximately 7 to 8 orders of magnitude greater than that of traditional PIC codes. When gold plasmas expand into deuterium plasmas, a kinetic shock is generated and propagates within deuterium plasmas. Through the utilization of simulations, we can observe the complete progression of a strong shock wave, encompassing its initial formation and subsequent steady propagation. Although both electrons and gold ions are collisional, deuterium ions appear to be collisionless. The quasi-monoenergetic spectra of deuterium ions can be generated by reflecting ions from the shock front. This process leads to the production of neutrons with unusual broadening due to beam-target nuclear reactions. This study provides an Figure 4: Subgraph (a) is a top-view schematic diagram of the setup of a neutron spectrometer in a hohlraum experiment. Subgraph (b) represents the comparison of the neutron spectra obtained in the simulation with that of the experimental data. The solid lines of different colors are the simulated data over different electron temperatures \(T_{e1}\), fitted with Gaussian profiles. The experimental data presented by Shan et al. [14] is depicted as light blue circles. Subgraph (c) represents the numerical results obtained from Eq. 4 for various incident energies of deuterium ions, denoted as \(E_{\mathrm{D}}\). unprecedented kinetic analysis of an existing experiment, which contributes to our understanding of the mechanisms underlying the formation of shock waves. It can be relevant for future research on mixtures and entropy increments at plasma interfaces. ###### Acknowledgements. This work is supported by National Natural Science Foundation of China (Grants No. 12075204 and No. 11875235), Science and Technology on Plasma Physics Laboratory Foundation of Chinese Academy of Engineering Physics, the Strategic Priority Research Program of Chinese Academy of Sciences (XDA250050500), and Shanghai Municipal Science and Technology Key Project (No. 22JC1401500). Dong Wu thanks the sponsorship from Yangyang Development Fund.
2309.15213
Constraining the LIGO/Virgo AGN channel with black hole spins
Merging black holes (BH) are expected to produce remnants with large dimensionless spin parameters ($a_{\rm spin} \sim 0.7$). However, gravitational wave (GW) observations with LIGO/Virgo suggest that merging BH are consistent with modestly positive but not high spin ($a_{\rm spin} \sim 0.2$), causing tension with models suggesting that high mass mergers are produced by hierarchical merger channels. Some BH also show evidence for strong in-plane spin components. Here we point out that \emph{spin down} of BH due to eccentric prograde post-merger orbits within the gas of an active galactic nucleus (AGN) disk can yield BH with masses in the upper mass gap, but only modestly positive $a_{\rm spin}$, and thus observations of BH with low spin \emph{do not} rule out hierarchical models. We also point out that the fraction of BBH mergers with significant in-plane spin components is a strong test of interactions between disk binary black holes (BBH) and nuclear spheroid orbiters. Spin magnitude and spin tilt constraints from LIGO/Virgo observations of BBH are an excellent test of dynamics of black holes in AGN disks, disk properties and the nuclear clusters interacting with AGN.
B. McKernan, K. E. S. Ford
2023-09-26T19:17:14Z
http://arxiv.org/abs/2309.15213v1
# Constraining the LIGO/Virgo AGN channel with black hole spins ###### Abstract Merging black holes (BH) are expected to produce remnants with large dimensionless spin parameters (\(a_{\rm spin}\sim 0.7\)). However, gravitational wave (GW) observations with LIGO/Virgo suggest that merging BH are consistent with modestly positive but not high spin (\(a_{\rm spin}\sim 0.2\)), causing tension with models suggesting that high mass mergers are produced by hierarchical merger channels. Some BH also show evidence for strong in-plane spin components. Here we point out that _spin down_ of BH due to eccentric prograde post-merger orbits within the gas of an active galactic nucleus (AGN) disk can yield BH with masses in the upper mass gap, but only modestly positive \(a_{\rm spin}\), and thus observations of BH with low spin _do not_ rule out hierarchical models. We also point out that the fraction of BBH mergers with significant in-plane spin components is a strong test of interactions between disk binary black holes (BBH) and nuclear spheroid orbiters. Spin magnitude and spin tilt constraints from LIGO/Virgo observations of BBH are an excellent test of dynamics of black holes in AGN disks, disk properties and the nuclear clusters interacting with AGN. keywords: accretion disks-accretion-galaxies: active -gravitational waves-black hole physics ## 1 Introduction Binary black hole (BBH) mergers observed in gravitational waves (GW) with LIGO-Virgo can originate from multiple channels, including from the death of massive binary stars (e.g. Belczynski et al., 2010; de Mink & Mandel, 2016), or from BH that pair-up dynamically after formation (e.g. Antonini, 2014; Rodriguez et al., 2016; Fragione et al., 2019). A promising dynamics channel is BBH mergers in AGN disks (e.g. McKernan et al., 2014; Bartos et al., 2017; Stone et al., 2017, see also Arca Sedda et al. (2023) for a recent review). Broad expectations for this channel include: efficient IMBH (\(>100M_{\odot}\)) formation (e.g. McKernan et al., 2012; Bellovary et al., 2016; Yang et al., 2019; Secunda et al., 2019; Tagawa et al., 2020), occasional asymmetric mass mergers (McKernan et al., 2020; Tagawa et al., 2020) and possibly residual orbital eccentricity in the LIGO band (Samsing et al., 2022). Black hole (BH) masses can be used to discriminate between merger channels. For example, since massive stars are not believed to directly produce BH \(\sim 50-120M_{\odot}\), GW detections of progenitor BH in this "upper mass gap are suggestive of BH that are themselves merger products (e.g. Gerosa & Berti, 2019; Tagawa et al., 2021; Ford & McKernan, 2022; Gayathri et al., 2023). Likewise an observed pile-up of BH at \(\sim 40M_{\odot}\)(Abbott et al., 2021) might be associated with the lower end of the expected upper mass gap. The fact that the global peak of the mass distribution for BH involved in BBH mergers is at low mass (around \(10M_{\odot}\)) is another fascinating clue. Since neutron stars (\(\sim 1.4M_{\odot}\)) natal kicks are observed \(\leq 800\rm km/s\), relatively low mass BH (e.g. \(\sim 10M_{\odot}\)) are likely formed with modest non-zero kicks \(\leq 110\rm km/s(\,M_{\rm NS}/1.4M_{\odot})(\,M_{\rm BH}/10M_{\odot})^{-1}\)(Coleman & Burrows, 2022). A deep potential well could help retain lighter kicked BH in order to promote their subsequent merger, driving up the merger rate at generally low BH mass. Black hole spins can also provide important clues to BBH channel origins (e.g. Zevin et al., 2020; Kimball et al., 2020; Galaudage et al., 2021) as well as specifically testing AGN channel models (Vajpej et al., 2022). The effective spin (\(\chi_{\rm eff}\)) distribution observed by LIGO/Virgo is biased to positive values (Abbott et al., 2021), which is not expected for spherically symmetric dynamical models (i.e. mergers in clusters). Observed spins are also not uniformly aligned with each other and the BBH orbital angular momentum, which naively might be expected in some stellar origin scenarios. Rather, the spin distribution observed has some dynamical characteristics, including some weight at \(\chi_{\rm eff}<0\) and some events with evidence for strong in-plane spin components, but with an overall symmetry-breaking (positive) bias. Some BBH have also been observed to possess strong in-plane spin components (Varma et al., 2022). Also observed is a fascinating anti-correlation between \(\chi_{\rm eff}\) and mass ratio (\(q\)) in BBH mergers (Callister et al., 2021). Such a anti-correlation is hard to generate in both field and dynamical channels. However, AGN can yield such an effect if more massive BH spend longer (and spin-up) in AGN disks and if there is a bias against retrograde BBH (e.g. McKernan et al., 2022; Wang et al., 2021). Santini et al. (2023) also find such an effect emerges straightforwardly assuming prograde BBH mergers happen at an AGN disk migration trap. Here we focus on BH spin as a test of models of the AGN channel. In particular, we discuss why hierarchical mergers in AGN can (eventually) result in BH with low spin and why observations of BH with low spin _do not_ rule out a hierarchical origin. We also briefly discuss the implications of strong in-plane spin components for the AGN channel. Finally, we discuss the implications of the observed spin distributions for models of AGN disks and the dynamics and accretion history of the embedded populations within them. ## 2 Black hole spin It is still unclear what spins BH are born with. Observations of BH within our own Galaxy indicate moderate to high (\(a\sim 0.3-0.9\)) BH spins (Reynolds, 2019), but these BH accrete from X-ray binary companions, and so do not provide an unbiased diagnostic of BH spin at birth (e.g. Fishbach & Kalogera, 2022). It has been proposed that BH are born with very low spin \(a\sim 0.01\)(Fuller & Ma, 2019). However BH in LIGO observations have BH spin magnitudes typically an order of magnitude larger than this (Abbott et al., 2021). Depending on the initial BH spin, accretion onto the BH after birth can alter the spin magnitude and/or torque the BH spin alignment. Figure 1 illustrates the effect of accretion direction on spin magnitude (length of spin vector) and orientation (direction of spin vector), assuming locally disk-like accretion geometry. Prograde accretion (top panel) increases an initially positive spin magnitude (\(a_{\rm spin}\)) and torques spin orientation towards the orbital angular momentum of the accretion flow. Retrograde accretion (bottom panel) decreases \(a_{\rm spin}\to 0\) and then \(a_{\rm spin}<0\) and drives orientation towards anti-alignment. The timescale of this process depends on the rate of accretion. Bogdanovic et al. (2007) find that BH can be torqued into alignment with a large-scale gas flow once \(\sim 1-10\%\) of the BH mass has been accreted. Since the Eddington mass doubling rate is \(\sim 40\)Myr, a period of \(\sim[1,{\rm few}]\)Myr of accretion at the Eddington rate should be sufficient to torque BH spins into alignment with a massive accretion flow. Thus, a population of embedded objects in AGN should be biased towards positive spin alignments, depending on how long the AGN disk persists. There is much more certainty about the spins of merging BH. Numerical relativity results show that much of the BBH orbital angular momentum at merger goes into the spin of the resulting merged BH. Thus, the merged spin (\(a_{\rm merged}\)) can be written as (e.g. Tichy & Marronetti, 2008) \[a_{\rm merged} \approx 0.686(5.04\nu-4.16\nu^{2}) \tag{1}\] \[+ 0.4\left(\frac{a_{1}}{(0.632+1/q_{\rm bin})^{2}}+\frac{a_{2}}{(0. 632+q_{\rm bin})^{2}}\right) \tag{2}\] where \(q_{\rm bin}=M_{2}/M_{1}\) is the binary mass ratio, \(\nu=q_{\rm bin}/(1+q_{\rm bin})^{2}\) is the symmetric mass ratio and \(a_{1},a_{2}\) are the binary component spin parameters. For most cases, with modest spins, moderate mass ratios (\(q_{\rm bin}\sim 1\)), \(a_{\rm merged}\sim 0.7\). A population of merging BH that include the products of prior mergers would therefore be expected to have spins \(a\sim 0.7\). However, as we shall point out below, the role of gas accretion and torquing changes this basic conclusion, at least in the AGN channel. ## 3 Damping timescales Gas in AGN disks acts to damp prograde orbital eccentricity (e.g. McKernan et al., 2012) but pumps orbital eccentricities of retrograde orbits (Secunda et al., 2021). For prograde orbits in protoplanetary disks with small orbital eccentricity (\(e<2h\)), \(e\) decays exponentially over time \(\tau_{e}\approx h^{2}\tau_{\rm mig}\), where \(\tau_{\rm mig}\) is the migration timescale, \(h=H/r\) is the disk aspect ratio with \(H\) the disk scale height and \(r\) the radius of the orbiter in the disk (Papaloizou & Larwood, 2000). At larger eccentricities, decay goes as \(\dot{e}\propto e^{-2}\)(Bitsch & Kley, 2010). Rescaling the disk damping timescale \(t_{\rm damp}\)(Tanaka & Ward, 2004) to an embedded BH in an AGN disk we find \[t_{\rm damp}=\frac{M_{\rm SMBH}^{2}h^{4}}{\dot{m}_{\rm BH}\Sigma a^{2}\Omega} \tag{3}\] where \(M_{\rm SMBH}\) is the supermassive black hole (SMBH) mass, \(m_{\rm BH}\) is the embedded black hole mass, \(\Sigma\) is the disk surface density, \(a\) is now the orbital semi-major axis and \(\Omega\) the Keplerian orbital frequency. The strong dependence on the aspect ratio of the disk (\(h^{4}\)) in \(t_{\rm damp}\) implies we should expect prohibitively long orbital damping times either in the outer cooler disk or in a puffed up, hot inner disk. Since damped circularized orbits will preferentially form BBH in dynamical encounters (Secunda et al., 2021; Rowan et al., 2022; Li et al., 2023; DeLaurentis et al., 2023), this suggests that BBH formation is more likely during encounters in the thinnest, densest regions of AGN disks. We can usefully parameterize \(t_{\rm damp}\) as \[t_{\rm damp}\sim 0.1{\rm Myr}\left(\frac{q}{10^{-7}}\right)^{-1}\left(\frac{h}{0.03} \right)^{4}\left(\frac{\Sigma}{10^{6}{\rm kgm}^{-2}}\right)^{-1}\left(\frac{a} {10^{4}r_{g}}\right)^{-1/2} \tag{4}\] where \(q=m_{\rm BH}/M_{\rm SMBH}\) is the mass ratio of the embedded BH to the SMBH, \(\Sigma\sim 10^{5}{\rm kgm}^{-2}\) is a surface density consistent with moderately dense outer regions (\(a\sim 10^{4}r_{g}\)) of a Sirko & Goodman (2003) model AGN disk, where \(r_{g}=GM_{\rm SMBH}/c^{2}\) is the gravitational radius. Note that more massive BH (larger \(q\)) have orbits damped faster. This is important, since it implies that more massive BH in the AGN channel should on average spend more time spinning up and torquing into alignment with the disk gas than less massive BH. Such an effect could also help explain the bias towards positive \(\chi_{\rm eff}\) observed in the more massive component of BBH (Callister et al., 2021). From eqn. (4), in the cool outer part of this disk model (\(>3\times 10^{4}r_{g}\)), \(t_{\rm damp}\) is long \(\geq 1{\rm Myr}(q/10^{-7})^{-1}\). However, at the thinnest part of this disk model \(t_{\rm damp}\) is very short \[t_{\rm damp}\sim{\rm kyr}\left(\frac{q}{10^{-7}}\right)^{-1}\left(\frac{h}{0.0 1}\right)^{4}\left(\frac{\Sigma}{10^{7}{\rm kgm}^{-2}}\right)^{-1}\left(\frac {a}{10^{3}r_{g}}\right)^{-1/2}. \tag{5}\] But in the radiation-pressure dominated innermost disk, the disk puffs up again and \[t_{\rm damp}\sim 0.6{\rm Myr}\left(\frac{q}{10^{-7}}\right)^{-1}\left(\frac{h}{0.05} \right)^{4}\left(\frac{\Sigma}{10^{6}{\rm kgm}^{-2}}\right)^{-1}\left(\frac{a} {10^{2}r_{g}}\right)^{-1/2}. \tag{6}\] Thus, orbital eccentricity damping is most efficient in the thinnest parts of the Sirko & Goodman (2003) model between \(\sim[10^{2},10^{4}]r_{g}\). In the Thompson et al. (2005) disk model, we also find damping timescales are either very short \(t_{\rm damp}\sim{\rm kyr}\) in the very thin mid-disk region (\(h/R\sim 10^{-3}\)) and prohibitively long \(t_{\rm damp}>{\rm Myr}\) otherwise (since \(h/R\sim 0.05\) in both the inner and outer disk regions). With \(t_{\rm damp}\), the scaling timescale for eccentricity damping, we can now estimate how long orbital damping can take in these disk models. At small initial orbital eccentricity (\(e_{0}<2h\)), assuming exponential decay (Papaloizou & Larwood, 2000) \[e(t)=e_{0}{\rm exp}(-{\rm t}/t_{\rm damp}) \tag{7}\] and so within \(\sim 2-3t_{\rm damp}\), \(e_{0}\) is damped to approximately circular (\(e<0.01\)) 1. From eqn. 4 if \(e_{0}\sim 0.06\) at \(a\sim 10^{4}r_{g}\) in a Sirko & Goodman (2003) disk, the eccentricity is damped by gas to \(e<0.01\) within \(\sim 0.5\)Myr. Footnote 1: assuming there are no additional orbital perturbations from dynamical encounters. At large eccentricities (\(e>2h\)), and assuming orbital inclination is negligible, we can use the approximation of (Horn et al., 2012) \[t_{e}\sim\frac{t_{\rm damp}}{0.78}\left[1-0.14(e/h)^{2}+0.06(e/h)^{3}\right] \tag{8}\] to find the timescale on which large orbital eccentricity becomes damped. The last term in eqn. 8 dominates if \(e/h\gg 7/3\), i.e. across most of the disk models above for \(e>0.1\). For a thermal distribution of initital eccentricities, as the result of an equipartition of energy in a relaxed (either wholly or in part) nuclear star cluster, we expect an orbital distribution function \[f(e)de=2ede \tag{9}\] such that the median eccentricity is \(e\sim 1/\sqrt{2}\sim 0.7\) and there is a uniform probability distribution of \(e^{2}\). Thus, at modest disk thickness \(h\sim 0.05\), \(t_{e}\sim 175t_{\rm damp}(e_{0}/0.7)^{3}\) or \(t_{e}\sim 0.1\)Myr\((q/10^{-7})^{-1}(e_{0}/0.7)^{3}\) at the thinnest part of the Sirko & Goodman (2003) disk model around \(a\sim 10^{3}r_{g}\). The regions of AGN disks where orbits are most rapidly circularized (i.e. regions with small values of \(h\)) will be where the relative energy of BH encounters is small enough that binary formation is efficient (e.g. Li et al., 2022; Rowan et al., 2022; Li et al., 2023; DeLaurenitis et al., 2023). Thus, we expect new BBH to predominantly form in AGN disks in the thin mid-disk region. The subsequent migration of such BBH either inwards or outwards, away from this disk region, will make dynamical encounters with eccentric orbiters more likely. Such encounters will be capable of either hardening, softening/ionizing the BBH, depending on the details of the encounter (e.g. Leigh et al., 2018; Wang et al., 2021; Jernyn et al., 2022). Note also \(a\sim 10^{3}r_{g}\) is a plausible location for a migration trap (Bellovary et al., 2016) in such a disk, although Grishin et al. (2023) suggest that such traps occur at radii \(\times 3-5\) further out in the disk. ## 4 Retrograde accretion: spin-down of eccentric orbiters The direction of gas flow onto objects embedded in disks is a function of orbital eccentricity (e.g. Bailey et al., 2021; Li et al., 2022). Physically, embedded orbiters on nearly circular orbits experience inflow into their Hill sphere from co-orbital gas leading to prograde accretion via mini-disks. If the embedded orbiters have eccentric orbits, Keplerian shear leads to retrograde inflow overcoming the prograde circum-single disk. Figure 2 shows a cartoon sketch of these two modes. Fig. 2 (a) shows a top-down view of a near circular embedded BH orbiter. Arrows indicate the flow of gas in the disk (white) and in the frame of the orbiter (yellow). Gas flow on horseshoe orbits relative to the embedded orbiter is apparent, leading to (bottom left, a zoom-in of the orbiters' Hill sphere) a prograde flow of gas onto the embedded object, leading to spin-up (increasing the magnitude of \(a\)) and eventual torquing of spin into alignment with the AGN disk. Fig. 2(b) shows a top-down view of an eccentric orbiter. Horseshoe orbits of gas are no longer apparent and the background gas flow exhibits a Kepeleian retrograde shear. The bottom panel of Fig. 1 shows a zoom-in on the orbiters' Hill sphere indicating retrograde accretion, leading to spin-down (decreasing the magnitude of \(a\)) and eventual torquing of spin into anti-alignment with the AGN disk. Recently Chen et al. (2022) have demonstrated a bifurcation in accretion direction on orbiters embedded in gas disks. Orbiters on nearly circular orbits accrete from prograde mini-disks within their Hill sphere, whereas embedded orbiters on eccentric orbits above a transition value (\(e_{t}\)) accrete from _retrograde_ mini-disks, where \(e_{t}\) is (Chen et al., 2022) \[e_{t}>h\sqrt{(1+\lambda^{2}){\rm max}[1,3^{1/3}({\rm q}^{1/3}/{\rm h})^{2}]-1} \tag{10}\] with \(\lambda\sim 1.3\) a numerical constant, \(q_{t}=q/h^{3}\) is the thermal mass ratio and \(h\) is the disk scale-height. Effectively the bifurcation corresponds to \(e_{t}\geq\lambda h\) for \(q_{t}\lesssim 1\) (sub-thermal orbits) and \(e_{t}\geq(\lambda h)q_{t}^{1/3}\) for \(q_{t}>1\). In a Sirko & Goodman (2003) model disk, all orbits are sub-thermal (\(q_{t}<1\)) for masses \(<10^{2}M_{\odot}\) and super-thermal only in the thinnest regions of the disk for IMBH (\(>10^{2}M_{\odot}\)). In a Thompson et al. (2005) model disk, all orbits are super-thermal at the thinnest part of the disk, and sub-thermal elsewhere. As a rule of thumb therefore, if BH orbital eccentricity is roughly \(>\times 1.3\) the disk scale height of a Sirko & Goodman (2003) disk and most scale heights of a Thompson et al. (2005) disk, for sub-IMBH masses, it will accrete retrograde. Thus, BH orbital eccentricities \(e>0.08\) on average (\(\overline{h}\sim 0.05\)) in a Sirko & Goodman (2003) disk should drive retrograde accretion. Figure 1: Cartoon illustrating prograde and retrograde accretion onto BH embedded in an AGN. Top panel shows a gas minidisk (with prograde orbital angular momentum \(L_{\rm disk}\)) accreting onto a BH. Blue vector labelled \(a\) corresponds to an initial BH spin vector mis-aligned with the accretion flow. Dashed line shows the direction of torque of the BH spin over time, through decreasing angle \(\theta\) towards alignment with mini-disk, but also increasing spin magnitude (longer final blue vector parallel to \(L_{\rm disk}\)). Bottom panel is similar except the accretion minidisk has retrograde orbital angular momentum. BH spin at first _decreases_ in magnitude towards \(a=0\) (vanishing vector) and then grows increasingly negative (\(a<0\)) over time approaching full anti-alignment with the greater AGN disk (unlabelled downward pointing final spin vector). ## 5 Post-Merger Kicks The anisotropic emission of GW from a merging BBH means that the merged BH will recoil with a kick from the merger site (e.g. Centrella et al., 2010, and references therein). Maximum merger kick velocity is \(v_{\rm kick}\sim 5000\rm km/s\)(Campanelli et al., 2007; Gonzalez et al., 2007) for approximately equal mass merging BH, with maximal and anti-aligned spins. Kick velocity drops considerably as the mass ratio of the merging BBH (\(q_{BBH}=M_{2}/M_{1}\)) decreases since \(v_{\rm kick}\propto q_{BBH}^{2}\)(Centrella et al., 2010). Keplerian orbital velocities at the thinnest regions of the Sirko & Goodman (2003) disk model (\(a\sim 10^{3-4}r_{g}\)) span \(\sim 3\!-\!10\!\times\!10^{3}\rm km/s\). For a BBH merger in this region of the disk, kicks \(v_{\rm kick}\geq 30-100\rm km/s\) will generate orbital eccentricities \(e>0.01\). Thus, it is straightforward to generate eccentric orbits of merged BBH in AGN disks. Spin mis-alignments can drive larger kicks (\(\sim 10^{3}\rm km/s\)), as inferred from some LIGO BBH observations (Varma et al., 2022). Since we generally expect spin mis-alignment from random sortings in the AGN channel, the likelihood of large \(v_{\rm kick}\) in this channel is quite high. As a result, a sizeable fraction of merged BBH in the AGN channel should have modest \(e\sim[0.02,0.5]\) post-merger. From SS3 above, in a Sirko & Goodman (2003) disk model, we expect the gas disk will dampen orbital eccentricity to near circular in \(\leq 0.5\rm Myr\) in the thinnest regions of this disk. If \(1-10\%\) mass accretion is sufficient to torque a BH into alignment with the disk (Bogdanovic et al., 2007), then assuming Eddington-limited accretion, in a few Myr, a BH could be torqued from fully aligned (\(a_{\rm merged}\sim 0.7\)) into anti-alignment, via spin-down and driving spins to negative magnitudes (\(a_{\rm merged}\sim-0.9\)). In \(\leq 0.5\rm Myr\) therefore, significant spin-down from \(a_{\rm merged}\sim 0.7\) can occur. Differences in average spin magnitudes (\(a_{\rm spin}\)) for BH with masses in the upper mass gap \(\geq 50M_{\odot}\) (presumed to result from hierarchical mergers) and low mass BH (say \(\sim 10M_{\odot}\)) can help constrain this phenomenon in AGN disks. The fact that negative spins are not preferred among LIGO BBH observations, suggests for the AGN channel both that: (i) retrograde BBH are disfavored (avoiding the formation of a negative spin BH) (Wang et al., 2021; McKernan et al., 2022; Santini et al., 2023)2 and (ii) spin-down does not progress so far that by the time BBH form, most BH are not negative spin. This implies damping in the regions BBH formation and merger occurs must be very efficient and therefore since \(t_{\rm damp}\propto h^{4}\), these regions of the AGN disks must be geometrically thin. Footnote 2: Retrograde BBH experience eccentricity pumping and torquing towards disk alignment (LANL group; private communication, see also Lubow et al. (2015)). ## 6 \(\chi_{\rm P}\) in the AGN channel \(\chi_{\rm eff}\) is the projection of the mass-weighted spins of a BBH onto the binary orbital angular momentum (\(L_{\rm bin}\)) around its center of mass. Figure 2: (a) Top panel: Cartoon of accretion onto a BH embedded in an AGN disk on an approximately circular orbit. White arrow indicates AGN gas flow direction and direction of orbit of the embedded BH. Yellow arrows indicate the relative flow of gas in the frame of the embedded BH. (a) Bottom panel: Zoom in to Hill sphere of embedded BH shows prograde nature of accretion onto the BH and therefore spin-up and torquing into eventual alignment with the AGN disk. (b) Top panel: As in (a) but embedded BH is on an eccentric orbit and Keplerian shear dominates over co-orbital gas flow. (b) Bottom panel: Keplerian shear in the frame of the embedded BH, leads to retrograde accretion and therefore spin-down and torquing into eventual anti-alignment with the AGN disk. \(\chi_{\rm p}\) is the effective precession spin given by \[\chi_{\rm p}=\max\left[a_{1,\perp},\frac{q(4q+3)}{4+3q}a_{2,\perp}\right] \tag{11}\] where \(a_{1,\perp}\) is the component of spin perpendicular to the direction of \(L_{\rm bin}\) (The LIGO Scientific Collaboration et al., 2021). Several mergers observed by LIGO may have significantly non-zero \(\chi_{\rm p}\)(Varma et al., 2022), which appears to suggest a dynamical origin. Significant in-plane spin was first highlighted for the AGN channel by Tagawa et al. (2020) and Samsing et al. (2022) indicating what could be a'smoking gun' for an origin from the AGN channel. In the AGN channel, the most straightforward way of generating a strong \(\chi_{\rm p}\) component for BBH is to start with a primary BH spin that is relatively well aligned with the AGN disk and a secondary with random (modestly positive) BH spin. Such a configuration is appropriate for generating a (\(q\), \(\chi_{\rm eff}\)) anti-correlation as discussed in Callister et al. (2021); McKernan et al. (2022). Figure 3(a) depicts this binary set-up. Now introduce a close pass by a nuclear cluster object on a disk-crossing orbit (object \(m_{\rm j}\), with inclined orbital angular momentum (\(L_{3}\)) also depicted in Fig. 3(a)). Such an interaction conserves orbital angular momentum so the BBH orbital angular momentum (\(L_{\rm bin}\)) tilts to a new (resultant) BBH plane orientation and the BBH is kicked out of the disk. Fig. 3(b) shows the newly inclined BBH and Fig. 3(c) shows panel (b) rotated into the frame of \(L_{\rm bin}\) (so that \(L_{\rm bin}\) is now vertical) to illustrate the strong spin components in the BBH plane. Several conditions apply to this AGN channel scenario. First, there must be a close interaction with a spheroid orbiter. The cross-section for encounters with a spheroid orbiter of radius \(R_{\star}\) and mass \(M_{\star}\) is Leigh et al. (2018) \[\Gamma_{\rm NSC}\approx\sigma\rho\left(\frac{R_{\star}^{2}}{M_{\star}}\right) \left[1+\left(\frac{v_{\rm esc}}{\sigma}\right)^{2}\right] \tag{12}\] where \(\rho\sim M_{\rm NSC}/R_{\rm NSC}^{3}\) is the spheroidal volume density of NSC objects, \(\sigma\sim\sigma_{0}+(GM_{\rm SMBH}/r)^{1/2}\) is the velocity dispersion of the NSC, and \(v_{\rm esc}\) is the escape velocity from the BBH. For encounters at moderate disk radius, Keplerian dispersion dominates and this implies that an interaction such as in Fig. 3(a) most likely occurs in the innermost regions of the AGN disk, since the cross-section for interaction is highest there (e.g. Fig. 1 of Leigh et al. (2018)). Note that over a long AGN lifetime much of the spheroid component (particularly in inner regions, where disk-crossing is frequent) can be captured by the disk (Fabj et al., 2020; MacLeod & Lin, 2020; Nasim et al., 2023). Thus, such an interaction must occur early on in an AGN lifetime (\(<\)Myr). Second, this encounter must harden the binary since the BBH must persist to merger. The binary separation at the moment of encounter must therefore be (Leigh et al., 2018) \[a_{\rm bin}<12^{1/3}R_{H}\left(\frac{u_{\rm bin}}{m_{3}}\right)^{1/3} \tag{13}\] where \(R_{H}=a(q/3)^{1/3}\) is the BBH Hill radius, with \(a\) the BBH semi-major axis and \(q=M_{\rm bin}/M_{\rm SMBH}\) the BBH mass ratio, and \(\mu_{\rm bin}=M_{1}M_{2}/M_{\rm bin}\) is the binary reduced mass. Third, the BBH must also merge _before_ it is recaptured by the disk (e.g. Tagawa et al., 2020, emphasize this point), because the orbital angular momentum of the binary will be rapidly realigned with the angular momentum of the disk, once the binary is recaptured, returning us to the configuration of Fig. 3(a). A merger outside the disk could have a prompt EM counterpart, rather than a delayed EM counterpart as envisaged in (Graham et al., 2020), if the BBH has dragged gas with it as it passes through the AGN disk on each orbit (or it could lack any EM counterpart for lack of surrounding matter). ## 7 Discussion The AGN channel is fundamentally a dynamical channel with broken spherical symmetry, which yields hierarchical mass mergers more frequently (via retention of kicked BBH merger remnants) than any other LIGO channel. A BBH merger should naturally yield a high spin remnant, leading to the general expectation of seeing highly spinning, high mass BH progenitors for any hierarchical merger channel. However, in the AGN channel, we show here that the BH merger product is unlikely to retain that high spin for long. In particular, we point out that as long as the kick at merger yields a modest orbital eccentricity, BH produced in dynamical mergers in AGN disks must spin down (via retrograde accretion driven by Keplerian shear) while the BH orbital eccentricity is damped over time by disk gas. Going forward, the density of BH in AGN and the time between encounters (and therefore typical migration torques and timescales), as well as orbital damping timescales (a function of AGN gas density and scale height) should be constrained using Monte Carlo studies of this effect. Figure 3: Cartoon illustrating how a large \(\chi_{\rm p}\) component can arise in the AGN channel. See also (Samsing et al., 2022) for a comparable illustration. (\(M_{1}\), \(m_{2}\)) is a binary embedded in the AGN disk, with orbital angular momentum (\(L_{\rm bin}\)) aligned with that of the AGN disk (\(L_{\rm disk}\)). The spin of the heavier primary (\(M_{1}\)) is aligned with the disk, and that of the secondary (\(m_{2}\)) is mis-aligned. Such a binary arrangement is consistent with the (\(q\), \(\chi_{\rm eff}\)) correlation in Callister et al. (2021). Also depicted in (a) is a near-miss encounter by tertiary (\(m_{3}\)) from the nuclear spheroid population, with mis-aligned orbital angular momentum (\(L_{3}\)). Conservation of orbital angular momentum, leads to (b), where the BBH has been ejected from the disk with new (resultant) orbital angular momentum. Note however that the spins of the individual BH have not been torqued and remain oriented as in (a). (c) shows (b) but rotated so \(L_{\rm bin}\) is now vertical, showing clearly a strong spin component in-plane. An additional measurable parameter, \(\chi_{\rm P}\), has an expected distribution for standard dynamical channels, but AGN again distort this expectation due to separate populations and the unique disk symmetry where many BBH may easily form. A large \(\chi_{\rm P}\) component in a BBH merger can occur if the primary BH initially has spin strongly aligned with the AGN disk, but pre-merger the BBH experiences a close encounter with a spheroid orbiter (substantially inclined with respect to the orbit of the BBH center of mass of around the SMBH), likely in the inner disk. The result is a BBH kicked out of the AGN disk (see Fig. 3) with new orbital angular momentum direction, but un-torqued spin components, leading to significant in-plane spin. Such mergers are an excellent test of the rate of encounters between the spheroid and disk components of nuclear star clusters interacting with AGN. Again, Monte Carlo studies of this encounter type can strongly constrain the disk and spheroid population in unresolved nuclear star clusters hosting AGN. Negative \(\chi_{\rm eff}\) mergers are disfavoured in the AGN channel for several reasons: 1) BBH with \(L_{\rm bin}<0\) (retrograde orbit around their center of mass) are preferentially ionized or softened by moderately close tertiary encounters compared to prograde BBH (Wang et al., 2021). 2) Retrograde BBH are also likely to have their eccentricity pumped by gas and therefore spend more time on average at wider separations (Lai and Munoz, 2023), making them more likely to be softened or ionized in tertiary encounters. 3) Retrograde BBH with \(L_{\rm bin}\) not identically anti-aligned with \(L_{\rm disk}\) will experience an accretion torque towards alignment with disk gas over time, flipping \(L_{\rm bin}\) positive (Lubow et al., 2015). Nevertheless, some \(\chi_{\rm eff}<0\) mergers should happen occasionally in AGN disks (McKerran et al., 2022). In order to preserve a negative \(\chi_{\rm eff}\) binary to merger against dynamical prograde encounters, such a BBH is more likely to be massive, with \(q\sim 1\). Negative spin in a BBH in the AGN channel can occur if there is a capture, or exchange on close pass, between a BH on a long-lived eccentric orbit and a BH on a nearly circular orbit, and the semimajor axis at binary formation is small enough that the GW merger timescale is shorter than the timescale to torque \(L_{\rm bin}\) into alignment with the disk. The long-lived eccentric orbits required to form such BBH could persist in either the colder outer disk, or the puffed-up hot inner disk. In the colder outer disk, the rate of tertiary (potentially ionizing) encounters is significantly lower, so \(\chi_{\rm eff}<0\) BBH are more likely to survive to merger in that region. ## 8 Conclusions Spin information from LIGO BBH merger observations provides important clues to the underlying population and merger channel details. In particular, for the AGN channel, spin constraints (both \(\chi_{\rm eff}\) and \(\chi_{\rm P}\) distributions and correlations) are strongly constraining on the details of how gas and dynamics drive mergers of BH embedded in AGN gas disks. Here we point out that the AGN channel can produce hierarchical merger products that should _spin-down_ over time. Average BBH spin parameter estimates from LIGO-Virgo will help constrain the gas damping timescale in AGN and therefore \(\rho_{\rm disk}\), \(h\) as well as typical merger locations. We also point out that BBH mergers with large \(\chi_{\rm P}\) are straightforward to produce in the AGN channel due to interactions between a BBH and a non-disk component tertiary (a point also made by Tagawa et al. (2020) and Samsing et al. (2022)). ## 9 Acknowledgements. BM & KESF are supported by NSF AST-2206096 and NSF AST-1831415 and Simons Foundation Grant 533845 as well as Simons Foundation sabbatical support. The Flatiron Institute is supported by the Simons Foundation. Thanks to Lucy Reading-Ikkanda for her excellent illustrations. ## Data Availability Any data used in this analysis are available on reasonable request from the first author (BM).
2309.13676
BdSpell: A YOLO-based Real-time Finger Spelling System for Bangla Sign Language
In the domain of Bangla Sign Language (BdSL) interpretation, prior approaches often imposed a burden on users, requiring them to spell words without hidden characters, which were subsequently corrected using Bangla grammar rules due to the missing classes in BdSL36 dataset. However, this method posed a challenge in accurately guessing the incorrect spelling of words. To address this limitation, we propose a novel real-time finger spelling system based on the YOLOv5 architecture. Our system employs specified rules and numerical classes as triggers to efficiently generate hidden and compound characters, eliminating the necessity for additional classes and significantly enhancing user convenience. Notably, our approach achieves character spelling in an impressive 1.32 seconds with a remarkable accuracy rate of 98\%. Furthermore, our YOLOv5 model, trained on 9147 images, demonstrates an exceptional mean Average Precision (mAP) of 96.4\%. These advancements represent a substantial progression in augmenting BdSL interpretation, promising increased inclusivity and accessibility for the linguistic minority. This innovative framework, characterized by compatibility with existing YOLO versions, stands as a transformative milestone in enhancing communication modalities and linguistic equity within the Bangla Sign Language community.
Naimul Haque, Meraj Serker, Tariq Bin Bashar
2023-09-24T15:51:39Z
http://arxiv.org/abs/2309.13676v1
# BdSpell: A YOLO-based Real-time Finger Spelling System for Bangla Sign Language ###### Abstract In the domain of Bangla Sign Language (BdSL) interpretation, prior approaches often imposed a burden on users, requiring them to spell words without hidden characters, which were subsequently corrected using Bangla grammar rules due to the missing classes in BdSL36 dataset. However, this method posed a challenge in accurately guessing the incorrect spelling of words. To address this limitation, we propose a novel real-time finger spelling system based on the YOLOv5 architecture. Our system employs specified rules and numerical classes as triggers to efficiently generate hidden and compound characters, eliminating the necessity for additional classes and significantly enhancing user convenience. Notably, our approach achieves character spelling in an impressive 1.32 seconds with a remarkable accuracy rate of 98%. Furthermore, our YOLOv5 model, trained on 9147 images, demonstrates an exceptional mean Average Precision (mAP) of 96.4%. These advancements represent a substantial progression in augmenting BdSL interpretation, promising increased inclusivity and accessibility for the linguistic minority. This innovative framework, characterized by compatibility with existing YOLO versions, stands as a transformative milestone in enhancing communication modalities and linguistic equity within the Bangla Sign Language community. YOLOv5, real-time finger spelling, Bangla Sign Language, BdSL36 dataset, accessibility, inclusivity, linguistic equity, communication modalities ## 1 Introduction In a world increasingly connected through technology, accessibility, and inclusivity are of paramount importance. The Real-Time Bangla Finger Spelling for Sign Language project represents a significant endeavor to bridge communication gaps for individuals with hearing and speech impairments in Bangladesh. This pioneering project aims to develop an advanced computer vision system capable of accurately detecting and interpreting Bangla finger spelling gestures in real-time, empowering users to communicate effectively using sign language. The project leverages the power of YOLOv5 [1], a cutting-edge object detection algorithm renowned for its speed and precision. By harnessing YOLOv5's capabilities, the system seeks to enable real-time recognition of Bangla finger spelling gestures for digits and alphabets, paving the way for seamless communication and meaningful interactions for individuals with impairments. With a primary objective of creating a highly accurate and efficient computer vision model, the project places a strong emphasis on developing a robust dataset encompassing various hand orientations, lighting conditions, and backgrounds. This diversity ensures the system's adaptability to real-world scenarios and enhances its ability to accurately interpret Bangla finger spelling gestures. By evaluating the system's performance based on metrics such as Mean Average Precision, Precision, and Recall, the project ensures the model's reliability and effectiveness in avoiding false positives and negatives during real-time inference. Furthermore, prior approaches [2] in the domain of Bangla Sign Language (BdSL) interpretation often imposed a burden on users, requiring them to spell words with missing characters, which were subsequently corrected using Bangla grammar rules due to the missing classes in BdSL36 dataset. However, this method posed a challenge in accurately guessing the incorrect spelling of words. To address this limitation, we propose a novel real-time finger spelling system based on the YOLOv5 architecture. Our system employs specified rules and numerical classes as triggers to efficiently generate hidden and compound characters, eliminating the necessity for additional classes and significantly enhancing user convenience. Another YOLOv4 [4] based system was proposed which takes 60 sec for each character detection using a total of 49 different classes. Our approach achieves character spelling in an impressive 1.32 seconds with a remarkable accuracy rate of 98%. Furthermore, our YOLOv5 model, trained on 9147 images, demonstrates an exceptional mean Average Precision (mAP) of 96.4%. * Our YOLOv5 model, trained on 9147 images, demonstrates an exceptional Mean Average Precision (mAP) of 96.4%, showcasing its impressive accuracy. * Our proposed system eliminates the need for additional classes, significantly reducing the computational cost while enhancing user convenience in comparison to previous approaches. * Our approach achieves character spelling in an impressive 1.32 seconds with a remarkable accuracy rate of 98%, representing a substantial speed improvement over previous systems. * We implemented a technique involving thresholding the mean running cumulative of the confidence of the detection for spelling characters, further enhancing the accuracy and reliability of our system. This paper will delve into the project's methodology, outline the implementation steps, and discuss the dataset preparation, model training, and evaluation processes. Additionally, it will highlight the potential impact of the Real-Time Bangla Finger Spelling for Sign Language project in fostering inclusivity and empowering individuals with hearing and speech impairments to communicate effectively with others. Through this research endeavor, we aspire to contribute to building a more accessible and inclusive society that values effective communication for all, reaffirming the significance of technology in promoting a more connected and empathetic world. ## 2 Related Works Nanda et al. [5] explored real-time sign language conversion using the YOLOv3 algorithm. They focused on American Sign Language (ASL) and employed default hyperparameters for training. In [6], researchers tackled Thai Sign Language classification using the YOLO algorithm. Their dataset comprised 25 signs with 15,000 instances for training. They achieved an mAP of 82.06% in complex background scenarios. Arabic Sign Language Recognition and Speech Generation were studied in [14] using Convolution Neural Networks. The authors integrated Google Translator API for hand sign-to-letter translation and gTTs for speech generation. For Bangla Sign Language (BdSL) detection, [7] employed a method involving RGB-to-HSV color space conversion, followed by feature extraction using the Scale Invariant Feature Transform (SIFT) Algorithm for 38 Bangla signs. A simple neural network was used for Bangla alphabet classification in [15], employing the YCvCr color map for input images. They used the Canny edge detector and Freeman Chain Code for feature extraction. A real-time Banglades Sign Language Detection method using Faster R-CNN was proposed in [9], achieving an accuracy of 98.2% and a detection time of 90.03 milliseconds, using a dataset containing 10 different sign letter labels. Hossen et. al, [10] presented a Deep Convolutional Neural Network-based method for Bengali Sign Language Detection, utilizing a diverse dataset of 37 signs with various backgrounds and skin colors. In [11], a dataset comprising 7052 samples of 10 numerals and 23864 samples of 35 characters of Bangla Sign Language was introduced. The Convolutional Neural Network was employed for accurate classification. Sarker et. al, [12] proposed a Bangla Sign language-to-speech generation system using smart gloves, sensors, and a microcontroller. They employed Levenshtein distance for word matching in the database for sign recognition. In the domain of finger-spelling, Li et al. [13] presented a real-time finger-spelling recognition system using a convolutional neural network (CNN) architecture. They achieved high accuracy in recognizing fingers-spelling gestures in real-time scenarios. Another research [2] used Unveiling an Innovative Algorithm for Accurate Hand-Sign-Spelled Bangla Language Modelling. Authors used Bangla grammatical rules to get hidden characters. And they generated independent vowels using those rules. Dipon et al, [4] used YOLOv4 as the object detection model for Real-Time Bangla Sign Language Detection with Sentence and Speech Generation. Detection time of 60 seconds for every word. Rafiq et. al, proposed a real-time vision-based Bangla sign language detection system [16] using the YOLO algorithm. They used a dataset consisting of 10 Bangla signs and achieved an mAP of 92.5%. Talukder et. al, proposed a real-time Bangla sign language detection system using the YOLOv4 [2] jobject detection model. They used a dataset consisting of 49 different classes, including 39 Bangla alphabets, 10 Bangla digits, and three new proposed signs. They achieved an mAP of 95.6. These finger spelling recognition papers complement the project's focus on Real-Time Bangla Finger Spelling for Sign Language, contributing valuable insights and methodologies to the broader domain of sign language recognition and communication accessibility. ## 3 Dataset We utilized the BdSL36 dataset [3], purposefully curated for Bangladesh Sign Language recognition systems by Oishee Bintey Hoque et al. This dataset underwent meticulous preparation across five stages to ensure robustness and versatility. The process commenced with Image Collection, conducted through extensive research at a deaf school to identify 36 practical Bangla sign letters used in daily communication. Ten volunteers captured raw images using phone cameras or webcams. Subsequently, BdSL experts individually assessed and filtered the images to retain those aligning with the appropriate sign style. This curation yielded 1200 images across the classes. Raw Image-Data Augmentation addressed the need for accurate sign letter detection under varying conditions. Manual augmentation techniques, encompassing affine and perspective transformations, contrast adjustments, noise addition, cropping, blurring, rotation, and more, were applied, resulting in the BdSL36v1 dataset containing approximately 26,713 images with an average of 700 images per class. Few sample images of the dataset are shown in the Figure 1. Dataset preparation involved a comprehensive and adaptable approach to BdSL recognition. The meticulous stages of image collection, augmented data generation, and background augmentation ensure that the BdSL36 dataset authentically captures real-world scenarios, rendering it a valuable resource for advancing the field of sign language recognition and detection. Additionally, we further annotated 9,187 BdSL36 dataset images and split it into 6,427 training sets, 1,828 validation sets, and 932 testing sets. ## 4 Bangla Alphabets Bengali is written from left to right, like the majority of other languages, and there are no capital characters. The letters have a continuous line at the top, and there are conjuncts, upstrokes, and downstrokes in the script. There are a total of 60 characters including 11 vowels (Sworoborna), 39 consonants (Byanjanbarna), and 10 numerals. Sworoborna, shown in Figure 2 refers to those letters in Bengali that can be spoken on their own. These are the characters that represent individual vowel sounds and can be pronounced independently without the need for a consonant. These vowel characters are an integral part of the Bengali script and play a crucial role in forming words and conveying meaning. They are combined with consonant characters (Byanjanbarna) to create syllables, which ultimately from words. In the Bengali language, consonants are known as "Byanjonborno", shown in Figure 3. These are letters that cannot be pronounced on their own and need to be combined with vowels to create a complete sound. Bengali has a total of 32 consonant letters, which play a crucial role in forming meaningful words and expressions. When a consonant is combined with a vowel, it forms a syllable, which is the fundamental unit of pronunciation in Bengali. This combination of consonants and vowels allows speakers to articulate a wide range of sounds and convey various meanings. Bengali has its own set of numeric symbols to represent numbers and fractions, shown in Figure 4. These symbols are used for numerical representation in various contexts, such as writing numbers, expressing quantities, and indicating fractions. In Bengali, compound characters are formed by combining two or more consonants to create a single character. Some of the most commonly Figure 1: Example images of initially collected BdSL36 dataset. Each image represents a different BdSL sign letter. Images are serially organized according to their class label from left to right. Figure 2: Bangla Vowels: A Visual Representation of the Complete Set of Vowels in the Bengali Alphabet used compound characters are shown in Figure 5. In Bengali, compound characters are formed by combining two or more consonants to create a single character. Some of the most commonly used compound characters are shown in Figure 5. The formation of these compound characters has been illustrated using some examples in Figure 6. ## 5 Objection Detection Evaluation Metric In object detection tasks, evaluating the performance of a model is crucial to understanding its accuracy and effectiveness. One of the most commonly used evaluation metrics for object detection is Mean Average Precision (mAP). In this section, we will elaborate on how mAP works and the different components involved in its calculation. Figure 4: Bangla Numerals: Displaying the Full Range of Numerical Digits in the Bengali Script. Figure 3: Bangla Consonants: Illustrating the Entire Array of Consonant Characters in the Bengali Alphabet. ### From Prediction Score to Class Label In object detection, the model predicts bounding boxes around objects along with their corresponding class labels and confidence scores. The prediction score represents how confident the model is in its prediction. To convert this prediction score into a class label, a threshold is applied. If the confidence score for a prediction exceeds the threshold, it is classified as a positive detection with the associated class label; otherwise, it is considered a negative detection. Figure 5: Bangla Compound Characters: Essential Combinations in the Bengali Alphabet. Bangla Compound Characters, known as ”Yuktakshar” in Bengali, are formed by combining two or more basic characters from the script. These combinations create unique characters that represent specific phonetic sounds not present in the basic alphabet. They are pivotal in accurately transcribing words and phrases in the Bengali language. This figure showcases some of the most commonly used compound characters in the Bengali script, highlighting their significance in phonetic representation Figure 6: An illustration of consonant combinations resulting in compound characters in the Bengali script. The first column displays select compound characters, while the second column demonstrates the amalgamation of consonant characters that give rise to these combinations, showcasing the script’s phonetic intricacies. \[\text{Class Label}=\begin{cases}1&\text{if Confidence Score}>\text{Threshold}\\ 0&\text{otherwise}\end{cases}\] ### Detection Performance Metrics Precision is a metric that measures the accuracy of the model's predictions for a particular class. It is defined as the ratio of true positive (TP) detections to the sum of true positive and false positive (FP) detections: \[Precision(P)=\frac{TP}{TP+FP}.\] Recall, also known as True Positive Rate (TPR) or Sensitivity, measures the model's ability to find all the positive instances for a particular class. It is defined as the ratio of true positive detections to the sum of true positive and false negative (FN) detections: \[Recall(R)=\frac{TP}{TP+FN}\] The Precision-Recall (PR) curve is a graphical representation of the precision and recall values at various confidence score thresholds. It helps to visualize how precision and recall change as we vary the confidence threshold for positive detections. The PR curve is obtained by plotting precision on the y-axis and recall on the x-axis. Average Precision (AP) is a single scalar value that summarizes the Precision-Recall curve for a specific class. It is calculated by computing the area under the Precision-Recall curve. The mathematical formula for AP is as follows: \[AP=\int_{0}^{1}\text{precision}(r)\,dr\] ;where \(precision(r)\) is the precision at a given recall value \(r\) in the Precision-Recall curve. ### Intersection over Union (IoU) \(IoU\) is a critical concept in object detection evaluation. It measures the overlap between the predicted bounding box and the ground truth bounding box. IoU is computed as the ratio of the area of intersection between the two bounding boxes to the area of their union: \(IoU=\frac{\text{Area of Intersection}}{\text{Area of Union}}\) The example shown in Figure 7 shows how IoU is calculated. ### Mean Average Precision (mAP) \(mAP\) is the average of AP values calculated for all the classes in the dataset. It provides a comprehensive evaluation of the model's overall performance across different classes and confidence score thresholds. The mathematical formula for \(mAP\) is as follows: \(mAP=\frac{1}{N}\sum_{i=1}^{N}AP_{i}\) Where \(N\) is the number of classes and \(AP_{i}\) is the Average Precision for \(class_{i}\). By using these evaluation metrics and mathematical formulas, we can effectively assess the performance of an object detection model, identify areas of improvement, and fine-tune the model to achieve better accuracy and reliability in detecting objects of interest. ## 6 Methodology To develop a YOLO-based real-time finger spelling model for the BDSL36 dataset, which encompasses 36 recognized characters along with several derived characters, we will follow a systematic methodology. The BDSL36 dataset contains a diverse set of characters represented by Unicode values, each associated with a corresponding finger spelling label. To devise a comprehensive system capable of recognizing hidden or derived characters, we introduced a set of key components within our methodology. These components include **Recognized Character Detection**, **Independent Vowel Transformation**, **Hidden Character Generation**, and **Trigger Handling**, each playing a vital role in the fingerspelling process. 1. **Recognized Character Detection**: Recognized characters in our finger spelling recognition system are identified through the use of confidence scores \(c_{i}(t)\) generated by the YOLOv5 model at time \(t\) for the detection class \(i\). All the available recognized characters are shown in Figure 8. To qualify as a recognized character, the cumulative running mean of confidence must surpass a specified threshold \(\delta\), as determined by the formula for the running cumulative confidence: \(\sum_{t}^{T}\frac{1}{N}\cdot\sum_{i}^{N}c_{i}(t)>\delta\) Figure 7: In this illustrative image, the concept of IoU is demonstrated. The predicted bounding box (in red) and the ground truth bounding box (also in red) showcase the overlapping area used to compute IoU. A higher IoU value signifies accurate object localization, while a lower value indicates less precise detection. The \(N\) is the number of detections of class \(i\) at time frame \(t\). This threshold \(\delta\) ensures that only characters with consistently high confidence scores are selected. The recognized characters originate from the BDSL36 [3] dataset, having undergone extensive training for object detection. They serve as the cornerstone for identifying both overt and derived characters within our system, providing a robust foundation for accurate recognition. 2. **Independent Vowel Transformation**: In written Bangla, it's important to note that there are two sets of vowels: independent and dependent. Dependent vowels are essentially the same as independent vowels, but they can only be written after consonants in the Bangla language, hence the name "dependent vowels". Since our system utilizes only one set of vowels because of the limitation of the BdSL36 dataset, we assume by default that a recognized vowel is a dependent vowel. This assumption stems from the fact that dependent vowels are more frequently used in Bangla spelling due to their characteristic placement after consonants. This nuanced understanding enables our model to accurately transcribe and recognize vowels in the Bengali script. Independent vowels in our system are not directly recognized. They are derived from the dependent vowels, which are recognized by the model. These recognized vowels transition into independent vowels when trigger characters are detected following a recognized character or derived character. This distinction allows us to handle the recognition of vowels effectively. The sets of vowels and a simple demonstration of the transformation that has been shown in Figure 9. 3. **Hidden Character Generation**: Hidden characters are a unique aspect of our system. These characters are not provided in the BDSL36 dataset except for the independent variables. They play a pivotal role in accurately representing the Bengali script. These hidden characters, though not as widely recognized, are essential for spelling numerous Bangla words. We carefully define and create these hidden characters, ensuring they are not directly recognizable by the model. This distinctive feature empowers our system to bridge the gap between the limitations of existing datasets and the comprehensive representation of the Bengali language. Using the provided rules stated in Figure 9. 4. **Trigger Handling**: Triggers are specific characters ranging from T0 to T7, which are shown in the Figure 8. They operate exclusively in the Textual mode of our finger spelling system. The specific functions and roles of these trigger characters will be discussed in detail later in our methodology. They serve as key elements for recognizing derived characters and their dependencies. Detecting and handling these Trigger characters is a crucial part of our finger-spelling methodology.. By incorporating these components into our methodology, we ensure a holistic approach to finger-spelling recognition. Our system not only recognizes the characters available in the BDSL36 dataset but also effectively handles hidden characters, transitions dependent vowels into independent vowels when triggers are detected, and utilizes trigger characters for recognizing derived characters. This comprehensive approach enables us to create a robust and versatile finger spelling recognition system that accommodates the complexities of the Bengali language, where hidden and derived characteristics play significant roles in communication. ### BdSL Finger-Spelling To derive characters that are not present in the BDSL36 dataset, we can establish transformation rules based on the provided mappings in Figure 9. There are four types of characters that need to be derived: Independent Vowels, Hidden Characters, Compound Characters with two characters, and Compound Characters using three characters. #### 6.1.1 Single Character Transformation According to the provided figure 9, when someone finger spells any recognized dependent vowel character and then follows it with Trigger 1 (T1) finger-spelling, the dependent vowel character will be replaced with the corresponding independent vowel character. Additionally, hidden characters are derived from recognized characters when Trigger 4 (T4) is finger-spelled according to the figure 9. #### 6.1.2 Compound Character Derivation In this subsection, we explore the process of deriving compound characters from recognized characters using triggers T2 and T3, along with the Bangla compound character dictionary. These triggers allow us to create compound characters by combining individual characters. The dictionary and the derivation process are illustrated in Figure 11. The Figure 10 shows how to spell a compound character in real time Figure 8: This image showcases a comprehensive array of Bangla characters, thoughtfully organized into four distinct categories: recognized characters, derived characters, hidden characters, and Trigger characters. The top-right corner provides a visual taxonomy for easy reference. At the bottom, a specific example of a Trigger character is thoughtfully presented, offering a practical illustration of this unique character type in the Bangla script. #### 6.1.3 Real Time Finger-Spelling The methodology for developing the BdSL Fingerspelling model is designed to accurately recognize finger-spelled Bengali characters in real time. The BDSL36 dataset, comprising 36 recognized characters and their derivatives, forms the basis of this system. Key components within the methodology include Recognized Character Detection, Independent Vowel Transformation, Hidden Character Generation, and Trigger Handling. Recognized characters are identified based on confidence scores generated Figure 10: Formation of Compound Character ’kta’ by Combining ’ka’ and ’tta’ in Bengali Script. The illustration shows how a hand signer finger spell compound character after detecting the Trigger T2 which automatically remove the last two characters and append their corresponding compound character in our spelled word. Figure 9: In this visual representation, we observe the Transformation of Dependent Vowels to Independent when Trigger T1 is finger-spelled while hidden characters are derived from other characters using T4. In the lower right-hand corner of this image, we are presented with two illustrative examples. In the first one, we witness the transformation of the character ”/aa” into ”/AA” as a result of the influence of ’Trigger T1.’ and in the second one, we can observe the transformation of the sequence ”/a/A” into the character ”/ae.” This transformation is facilitated by the operation of ’T4.’ by the YOLOv5 model, ensuring consistent high confidence for selection. Independent vowels are assumed to be dependent, allowing for accurate transcription, while hidden characters play a crucial role in representing Bengali script. Transformation rules and triggers facilitate the derivation of characters not present in the BDSL36 dataset. Compound characters are also derived using triggers and a Bangla compound character dictionary. The comprehensive approach of this methodology addresses the complexities of the Bengali language, resulting in a robust and versatile finger-spelling recognition system. Our real-time finger spelling starts with a speller hand signing the recognized characters which in real-time contributes to the errors because of the ambient and other factors. In order to confirm a detection, we created a window that uses the confidence score, as explained in the above subsection. A detected character then passes through the trigger-handling module. Based on the Trigger, the recognized character is further transformed according to the flow chart shown in Figure 12. In our system, we can finger-spell either texts or numerals, which can be done in the textual or numeral mode respective. By default, we begin with textual mode, the mode will only change if the trigger T5 (recognized character 5) is detected switching to a numeral mode where any numeral recognized character will not be detected as a trigger. To get back to the previous mode 'aa' has to be detected which acts as trigger T5 in the numeral mode. Each recognized character detected or transformed character is then added to update a sentence. These steps are shown in the Figure Figure 11: In this visual representation, We are presented with a comprehensive overview of Bangla finger spelling. On the left side of the image, we can observe a visual representation of the compound characters, where two or more individual characters combine to form a unique and meaningful sign. the right side of the picture provides a detailed set of rules and guidelines. These rules outline the correct formation and usage of compound characters, both for two-character and three-character combinations. 12. Furthermore, we use trigger T0 to add space and trigger T6 to delete the last appended character. Using our setup and methodology explained above and in Figures [8 11], we further demonstrate the finger spelling using examples in Figure [13]. ## 7 Training YOLOv5 During the development of the project, the YOLOv5 model was trained on a dataset of Bangla Sign Language images to detect and classify different signs. After training, the model's performance was evaluated on a separate validation dataset to assess its accuracy and effectiveness. The validation process used the best-trained weights of the model, which were saved at the location runs/train/exp/weights/best.pt. These weights represent the model's parameters that achieved the highest level of performance during the training process. The model's architecture consists of 157 layers and is relatively lightweight, with 7,136,884 parameters. It's essential to have a model with a suitable number of parameters to strike a balance between accuracy and computational efficiency. In this case, the model's computational cost is measured in GFLOPs (Giga Floating Point Operations) and is found to be 16.2 GFLOPs, indicating a reasonable computational load. To evaluate the model's detection performance, it was tested on a validation set comprising 1,826 images, which collectively contained 1,827 instances of Bangla Sign Language signs. The model's performance is measured using several metrics that provide insights into its ability to recognize different signs. Figure 12: This image depicts an algorithm utilizing YOLO v5 object detection to recognize characters. Recognized characters can transition into either numeral or textual modes. Textual mode is determined by triggers: T5 for toggles, T1 for independent vowels, T2 for compound character 12, T4 for hidden characters, T6 for backspace, and T0 for spaces. ## 8 Results The primary evaluation metrics used are Precision (P), Recall (R), and Mean Average Precision (mAP) at various IoU (Intersection over Union) thresholds. Precision measures the accuracy of the model's predictions, reflecting the percentage of true positive detections among all the positive detections. Recall, on the other hand, measures the model's ability to find all the positive instances, indicating how well it avoids missing any relevant signs. The Mean Average Precision (mAP) is a comprehensive measure that considers the precision-recall trade-off across multiple IoU thresholds. It provides a holistic assessment of the model's performance for different classes and thresholds. In this evaluation, mAP is calculated at the standard IoU threshold of 0.5 and also across a range of IoU thresholds from 0.5 to 0.95. The overall performance of the model across all classes combined is quite promising. It achieved a precision of 69.2% and a recall of 83.6%, indicating that it can identify a significant portion of Bangla Sign Language signs accurately. The mAP at the standard IoU threshold of 0.5 is 84.3%, which is considered a strong performance. It means that the model's predictions are well-matched with the ground truth annotations. However, the more comprehensive mAP across IoU thresholds from 0.5 to 0.95 is 56.9%, which suggests that the model's performance may vary across different Figure 13: In our fingerspelling system for Bengali, we break down the intricate art of conveying Bengali words using hand shapes and finger movements. Each word is meticulously spelled out using a combination of handshapes, finger motions, and specific positions, all in accordance with the unique script and phonetic characteristics of the Bengali language. This table offers a comprehensive guide to the fingerspelling of common Bengali words, making it an invaluable resource for those learning sign language or communicating with individuals who rely on this method to express themselves in Bengali. levels of detection strictness. It is common to see a drop in mAP as the IoU threshold increases, as it requires stricter overlap between predicted and ground truth bounding boxes. Looking into the performance of individual classes, we can observe variations in the model's ability to recognize different signs. Some classes, such as "BHA," "BISHARGA," and "THA," achieved high precision, recall, and mAP scores, indicating that the model performs exceptionally well on these signs. On the other hand, there are classes like "NA" and "RA" with lower precision, recall, and mAP scores, suggesting that the model struggles more with recognizing these particular signs. In summary, the YOLOv5 model showed overall promising results for the Bangla Sign Language Detection project. It demonstrated the ability to detect and classify signs with good accuracy and achieved a high mAP score at the standard IoU threshold of 0.5. However, there are certain classes where the model's performance could be further enhanced. This might involve additional data collection for underrepresented classes, fine-tuning hyperparameters, or exploring other techniques to improve recognition accuracy. Continuous evaluation and refinement of the model will be essential to enhance its capabilities and make it more robust for real-world applications. Figure 14: In this graph, we explore the relationship between threshold values (\(\delta\)) and accuracy, considering two distinct measures: ”Accuracy (Sum of Detections)” and ”Accuracy (Cumulative Confidence).” As the threshold value varies along the X-axis, we observe changes in accuracy on the Y-axis. Lower threshold values (5 and 10) yield high accuracy levels in both measures, with ”Accuracy (Cumulative Confidence)” reaching a perfect score at 10. Accuracy remains consistently high as the threshold increases, even reaching 100% for ”Accuracy (Cumulative Confidence)” at threshold values of 20, 30, and 50. This visual representation elucidates how threshold values impact detection accuracy, offering insights into system performance. In this comparison table 8, we present an overview of different object detection models used for Bangla Sign Language detection, highlighting key attributes such as the model name, dataset used, dataset size, number of classes, and mean Average Precision (mAP) scores. Notably, our YOLOv5 model achieved a superior mAP of 96.40% on the BdSL 36 dataset comprising 9,147 images and 36 signs, outperforming the previous YOLOv4 model (95.6%) from [4]. This improved performance may be attributed to architectural enhancements, a larger and diverse training dataset, the use of data augmentation techniques, fine-tuning for Bangla Sign Language, consistent evaluation metrics, and potential algorithmic advancements. Notably, YOLOv5 showcases better precision in sign detection, making it a promising advancement in the field, although potential limitations should also be acknowledged for a comprehensive evaluation of its performance. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Model & Data & Dataset Size & Number of Classes & mAP \\ \hline YOLOv5 & BdSL-OD & 1,000 images & 30 signs & 95\% \\ YOLOv4 [4] & BdSL-OD & 12,500 images & 49 signs & 95.6\% \\ YOLOv5 (ours) & BdSL 36 & 9,147 images & 36 signs & 96.40\% \\ \hline \end{tabular} \end{table} Table 1: Comparison of Object Detection Models Figure 15: In this graph, we explore how different threshold values (\(\delta\)) affect detection time. Lower thresholds (5 and 10) result in faster detections, especially in ”Time (Cumulative Confidence).” However, as the threshold increases, both ”Time (Sum of Detections)” and ”Time (Cumulative Confidence)” take longer. Overall, this graph shows the balance between accuracy and detection speed based on threshold choices. The graph 18 displays the Precision-Recall performance for different classes, with the X-axis showing the classes, including both Bengali script and Romanized labels, and the Y-axis representing the mAP50 (mean Average Precision at 50%) and mAP50-95 (mean Average Precision from 50% to 95% overlap) scores. This graph offers valuable insights into the model's object detection capabilities. For each class, the mAP50 score indicates how well the model can accurately identify instances of that specific character or symbol. Higher bars indicate better performance, while lower ones suggest areas where the model may struggle. The mAP50-95 scores provide a more comprehensive evaluation, considering a broader range of overlap thresholds. Overall, this graph allows us to assess the precision and recall performance of the model across different classes, identifying its strengths and weaknesses in object detection tasks. ## 9 Conclusion Conclusions may be used to restate your hypothesis or research question, restate your major findings, explain the relevance and the added value of your work, highlight any limitations of your study, and describe future directions for research and recommendations. In some disciplines use of Discussion or 'Conclusion' is interchangeable. It is not mandatory to use both. Please refer to Journal-level guidance for any specific requirements. Figure 16: This F1-confidence curve in the YOLO model demonstrates a promising performance for object detection. Achieving an F1 score of 0.74 at a confidence threshold of 0.335 indicates a good balance between precision and recall for detecting objects across all classes. The curve suggests that the model can identify objects accurately while minimizing false positives, making it suitable for object detection tasks.
2309.03465
Light Meson Decays at BESIII
The world's largest sample of Jpsi events collected at BESIII detector offers a unique opportunity to investigate eta and eta' physics via the Jpsi radiative or hadronic decays with unprecedented precision. In recent years the BESIII experiment has made significant progresses in eta/eta' decays. A selection of recent highlights in light meson spectroscopy at BESIII are reviewed in this report.
Guofa Xu
2023-09-07T03:15:36Z
http://arxiv.org/abs/2309.03465v1
# Light Meson Decays at BESIII \({}^{*}\) ###### Abstract The world's largest sample of \(J/\psi\) events(\(10^{10}\)) collected at BESIII detector offers a unique opportunity to investigate \(\eta\) and \(\eta^{\prime}\) physics via the \(J/\psi\) radiative or hadronic decays with unprecedented precision. In recent years the BESIII experiment has made significant progresses in \(\eta/\eta^{\prime}\) decays. A selection of recent highlights in light meson spectroscopy at BESIII are reviewed in this report. pacs: 12.38.-b, 12.38.-b, 12.38.+d, 12.38.+d ## I Introduction After being discovered more than 50 years, the \(\eta/\eta^{\prime}\) meson still attract considerable both of the theoretical and experimental attention, because of it plays a central role in our understanding of quantum chromodynamics (QCD) at low energies. As a mixture of the lowest pseudoscalar singlet and octet, \(\eta/\eta^{\prime}\) have inspired a wide variety of physics issues, e.g., \(\eta-\eta^{\prime}\) mixing, the light quark masses, the fundamental discrete symmetries, as well as physics beyond the standard model (SM). Precision measurements of its decays provide important tests of effective field theories, such as chiral perturbation theory (ChPT) or the vector meson dominance (VMD) model. Moreover, it is also possible to search for new phenomena in rare or forbidden \(\eta/\eta^{\prime}\) decays. BEPCII is a double-ring multibunch \(e^{+}e^{-}\) collider running in the tau-charm energy region. The BESIII detector, described in detail in Ref.[1], has a geometrical acceptance of 93% of \(4\pi\). It consists of a drift chamber (MDC), a time-of-flight (TOF) system, and an electromagnetic calorimeter (EMC), all enclosed in a superconducting solenoid with 1.0 T (0.9 T in 2012) magnetic field. The small-cell helium based MDC provides the tracking of the charged particle and ionization energy loss (\(dE/dx\)) measurement. The single cell position resolution is 130 \(\mu m\) and the transverse momentum resolution is 0.5% at 1 GeV/c. The TOF system for particle identification (PID) is made of plastic scintillators. It has 80 ps time resolution in the barrel, and 110 ps in the end caps. The EMC is made of 6240 CsI (Tl) crystals. The energy resolution is 2.5% in the barrel and 5% in the end caps for 1.0 GeV photons. Outside the solenoid, a muon chamber system made of 1272 \(m^{2}\) resistive plate chambers detects muon tracks with momenta greater than 0.5 GeV, The BESIII experiment has collected a total of \(10^{10}\)\(J/\psi\) events. Via the \(J/\psi\) radiative decay, a sample of \(1.1\times 10^{7}\)\(\eta\) and \(5.2\times 10^{7}\)\(\eta^{\prime}\) can obtained. For \(\eta^{\prime}\), which is comparatively unexplored, BESIII can measure many decays for the first time and others with unrivaled precision. II Observation of \(\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\pi^{+(0)}\pi^{-(0)}\) and \(\eta^{\prime}\to 4\pi^{0}\) The strong decays \(\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\pi^{+(0)}\pi^{-(0)}\) are not suppressed by approximate symmetries, they are expected to be mediated by chiral anomalies, since an odd number (five) of pseudoscalar particles is involved. The projections of the fit to \(M_{\pi^{+}\pi^{-}\pi^{+(0)}\pi^{-(0)}}\) in the \(\eta^{\prime}\) mass region are shown in Figs. 1(a) and 1(b), where the shape of the sum of signal and background shapes are in good agreement with data. We obtain \(199\pm 16\) of \(\pi^{+}\pi^{-}\pi^{+}\pi^{-}\) events with a statistical significance of \(18\sigma\) and \(84\pm 16\) of \(\pi^{+}\pi^{-}\pi^{0}\pi^{0}\) events with a statistical significance of \(5\sigma\)[2]. Using the world average value of \(Br(J/\psi\rightarrow\gamma\eta\prime)\)[3], the branching fraction of \(\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\pi^{+(0)}\pi^{-(0)}\) are determined to be \(Br(\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\pi^{+}\pi^{-}=[8.53\pm 0.69(stat.) \pm 0.64(syst.)]\times 10^{-5})\) and \(Br(\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\pi^{0}\pi^{0}=[1.82\pm 0.35(stat.) \pm 0.18(syst.)]\times 10^{-4})\), which are consistent with the theoretical predictions based on a combination of ChPT and VMD models, but not with the broken-SU\({}_{6}\times O_{3}\) quark model [4]. In theory, \(\eta^{\prime}\to 4\pi^{0}\) is a highly suppressed decay because of the S-wave CP-violation. In the light of an effective chiral Lagrangian approach, the S-wave CP-violation in \(\eta^{\prime}\to 4\pi^{0}\) is induced by the so-called \(\theta\)-term, which is an additional term in the QCD Lagrangian to account for the solution of the strong-CP problem. It was found that the S-wave CP-violation effect that contributed to this decay is at a level of \(10^{-23}\)[5]. While higher-order contributions, involving a D-wave pion loop or the production of two \(f_{2}\) tensor mesons, provide a CP-conserving route through which the decay can occur. By ignoring the tiny contribution from the latter process, calculations based on ChPT and VMD model predict the branching fraction caused by D-wave CP-conserving to be at the level of \(10^{-8}\)[6]. However, the theoretical prediction is not strictly based on the effective field theory due to the lack of knowledge at such a high order in the chiral expansion and the use of a model to make an estimation. One does not know the reliability of that model a priori. Therefore, a search for the decay \(\eta^{\prime}\to 4\pi^{0}\) is useful to check the reliability of it. Figure 1(c) shows the \(4\pi^{0}\) invariant mass spectrum in data, together with the total fit result and the contributions from non-peaking background and the peaking background \(J/\psi\to\gamma\eta^{\prime},\eta^{\prime}\to\pi^{0}\pi^{0}\eta,\eta\to 3\pi^{0}\). Also shown is the expected shape of the signal contribution, with arbitrary normalization. After our selection, there are no significant \(\eta^{\prime}\) signal is evident. With a Bayesian approach, the upper limit on the branching fraction is determined to be \(Br(\eta^{\prime}\to 4\pi^{0}<4.94\times 10^{-5})\) at 90% C.L. [7]. This corresponds to an improvement of a factor 6 compared to the previous best value from the GAMS-\(4\pi\) experiment [8]. However, the current limit is still far to reach the theoretical predication with a level of \(10^{-8}\). ## III Observation of \(\eta^{\prime}\to\rho^{+}\pi^{-}+c.c.\) The decays \(\eta^{\prime}\to\pi\pi\pi\) are isospin-violating processes. Because the electromagnetic contribution is strongly suppressed [9], they are induced dominantly by the strong interaction via the explicit breaking of chiral symmetry by the \(d-u\) quark mass difference. A Dalitz plot analysis based on the formalism of the isobar model [10] is performed. The resonant \(\pi-\pi\) S-wave (\(L=0\) for \(\sigma\)) and P-wave (\(L=1\) for \(\rho^{\pm}\)) amplitudes are described as in Ref. [11]. Projections of the data and fit results are displayed in Fig. 2. The data are well described by three components: \(P\) wave (\(\rho^{\pm}\pi^{\mp}\)), resonant \(S\) wave (\(\sigma\pi^{0}\)), and phase-space \(S\) wave (\(\pi\pi\pi\)). The interference between \(\sigma\) and the non-resonant term is large and strongly depends on the parametrization of \(\sigma\). Therefore we are unable to determine the individual contributions and consider only the sum of the \(S\)-wave amplitudes in this analysis. Using a combined amplitude analysis of \(\eta^{\prime}\to\pi^{+}\pi^{-}\pi^{0}\) and \(\eta^{\prime}\to\pi^{0}\pi^{0}\pi^{0}\) decays, the \(P\)-wave contribution from \(\rho^{\pm}\) is observed for the first time with high statistical significance. The pole position of \(\rho^{\pm}\), 775.49 (fixed)\(-i(68.5\pm 0.2)\) MeV, is consistent with previous measurements, and the branching fraction \({\cal B}r(\eta^{\prime}\to\rho^{\pm}\pi^{\mp})\) is determined to be \((7.44\pm 0.60\pm 1.26\pm 1.84)\times 10^{-4}\)[12]. ## III Observation of the Dalitz Decay \(\eta^{\prime}\to\gamma e^{+}e^{-}\) Electromagnetic (EM) Dalitz decays of light pseudoscalar mesons, \(P\to\gamma l^{+}l^{-}\) (\(P=\pi^{0}\), \(\eta\), \(\eta^{\prime}\); \(l=e,\mu\)), play an important role in revealing the structure of hadrons and the interaction mechanism between photons and hadrons [13]. If one assumes point-like particles, the decay rates can be exactly calculated by Quantum Electrodynamics (QED) [14]. Modifications to the QED decay rate due to the inner structure of the mesons are encoded in the transition form factor (TFF) \(F(q^{2})\), where \(q\) is the momentum transferred to the lepton pair, and \(q^{2}\) is the square of the invariant mass of the lepton pair. A recent summary and discussion of this subject can be found in Ref. [15]. We report the first observation of the \(\eta^{\prime}\to\gamma e^{+}e^{-}\) decay and the extraction of the TFF. The source of the \(\eta^{\prime}\) mesons are from radiative \(J/\psi\to\gamma\eta^{\prime}\) decays collected by the BESIII at the BEPCII \(e^{+}e^{-}\) collider. The \(\eta^{\prime}\to\gamma\gamma\) decay events in the same data sample are used for normalization. The combination of \(\gamma e^{+}e^{-}\) with invariant mass closest to \(m_{\eta^{\prime}}\) is taken to reconstruct the \(\eta^{\prime}\). The resulting \(M(\gamma e^{+}e^{-})\) distribution after the selection criteria is shown in Fig. 3 and exhibits a clear peak at the \(\eta^{\prime}\) mass. An unbinned extended maximum likelihood fit is performed to determine the signal yield. Using the \(\eta^{\prime}\to\gamma\gamma\) branching fraction value listed in PDG [3], we obtain the first measurement of the \(\eta^{\prime}\to\gamma e^{+}e^{-}\) branching fraction of \(Br(\eta^{\prime}\to\gamma e^{+}e^{-})=(4.69\pm 0.20(stat.)\pm 0.23(sys.))\times 10^{ -4}\). [16] The results of a least-squares fit with the single pole model is shown in Fig. 4, the parameters of the form factors are determined to be \(\Lambda_{\eta^{\prime}}=(0.79\pm 0.05)\) GeV, \(\gamma_{\eta^{\prime}}=(0.13\pm 0.06)\) GeV. From the fitted value of the parameter \(\Lambda_{\eta^{\prime}}\), the slope of the form factor is obtained to be \((1.60\pm 0.19)\) GeV\({}^{-2}\)[16], in agreement with the result \(b_{\eta^{\prime}}=(1.7\pm 0.4)\) GeV\({}^{-2}\) obtained in the process of \(\eta^{\prime}\to\gamma\mu^{+}\mu^{-}\)[13]. ## IV Observation of the Double Dalitz Decay \(\eta^{\prime}\to e^{+}e^{-}e^{+}e^{-}\) The double Dalitz decays \(P\to\ell^{+}\ell^{-}\ell^{\prime+}\ell^{\prime-}\), where \(P\) is a pseudoscalar meson (\(P=\pi^{0},\eta\), or \(\eta^{\prime}\)) while \(\ell\) and \(\ell^{\prime}\) are leptons (\(\ell,\ell^{\prime}=e,\mu\)), are expected to proceed through an intermediate state of two virtual photons. These processes are of great interest for understanding the pseudoscalar TFF and the interactions between pseudoscalars and virtual photons [17]. These TFFs are necessary inputs to calculate the pseduoscalar-meson-pole contributions to the hadronic light-by-light scattering, which causes the second largest uncertainty in the Standard Model determination of the muon anomalous magnetic moment [18]. Particularly, the double Dalitz decays of pseudoscalar mesons help to determine the TFFs in the small timelike momentum region, i.e. \(m_{ll}^{2}\leq q^{2}\leq m_{P}^{2}\), with \(m_{ll}\) the invariant mass of the dilepton and \(m_{P}\) the mass of the pseudoscalar meson, and thus are suitable to determine the slope of the TFFs at \(q^{2}=0\)[18]. The resulting \(M(e^{+}e^{-}e^{+}e^{-})\) distribution is shown in Fig. 5, where a clear \(\eta^{\prime}\) signal is visible. An unbinned extended maximum likelihood fit is performed to determine the \(\eta^{\prime}\) signal yield. The fit results shows the \(\eta^{\prime}\) signal with a significance of \(5.7\sigma\), and the calculated branching fraction is \(Br(\eta^{\prime}\to e^{+}e^{-}e^{+}e^{-})=(4.5\pm 1.0(stat.)\pm 0.5(sys.))\times 10^{ -6}\)[19], it is consistent with the theoretical predictions within the uncertainties and provides new information for the studies about \(\eta^{\prime}\) TFF and the interactions between \(\eta^{\prime}\) and virtual photons [17]. Figure 3: Invariant \(\gamma e^{+}e^{-}\) mass distribution for the selected signal events. The (black) crosses are the data, the (red) dashed line represents the signal, the (green) dot-dashed curve shows the non-peaking background shapes, the (orange) shaded component is the shape of the \(J/\psi\to\gamma\eta^{\prime},\eta^{\prime}\to\gamma\gamma\) peaking background events. The total fit result is shown as the (blue) solid line. Figure 4: Fit to the single pole form factor \(|F|^{2}\). The (black) crosses are data, where the statistical and systematic uncertainties are combined, the (blue) solid curve shows the fit results. The (gray) dotted line shows to the point-like case (_i.e._ with \(|F|^{2}=1\)) for comparison. ## IV Study of \(\eta^{\prime}\to\gamma\pi^{+}\pi^{-}\) decay dynamics The radiative decay \(\eta^{\prime}\to\gamma\pi^{+}\pi^{-}\) is the second most probable decay mode of the \(\eta^{\prime}\) meson with a branching fraction of \((28.9\pm 0.5)\%\)[3] and is frequently used for tagging \(\eta^{\prime}\) candidates. In the VMD model, this process is dominated by the decay \(\eta^{\prime}\to\gamma\rho^{0}\). In the past, the dipion mass distribution was studied by several experiments, and a peak shift of about \(+20\) MeV/\(c^{2}\) for the \(\rho^{0}\) meson with respect to the expected position was observed. Dedicated studies, concluded that a lone \(\rho^{0}\) contribution in the dipion mass spectrum did not describe the experimental data [20]. This discrepancy could be attributed to a higher term of the Wess-Zumino-Witten anomaly, known as the box anomaly, in the ChPT Lagrangian [21]. Both the model-dependent and model-independent approaches are carried out to investigate the decay dynamics. In the model-dependent study, binned maximum likelihood fits are performed to the \(M(\pi^{+}\pi^{-})\) distribution between 0.34 and 0.90 GeV/\(c^{2}\) with different scenarios. Finally, we find the "\(\rho^{0}\)-\(\omega\)-box anomaly" model gives the best goodness of fit \(\chi^{2}/ndf=207/107\) (Fig. 6(a)) [22]. An alternative fit by replacing the box anomaly with the \(\rho^{\prime}\) component gives considerably worse agreement with \(\chi^{2}/ndf=303/106\) (Fig. 6(b)). A model independent approach is also implemented to investigate the decay dynamics. The model independent approach provides a satisfactory parametrization of the dipion invariant mass spectrum, and yields the parameters of the process-specific part \(P(s)\) to be \(\kappa=0.992\pm 0.039\pm 0.067\pm 0.163\) GeV\({}^{-2}\), \(\lambda=-0.523\pm 0.039\pm 0.066\pm 0.181\) GeV\({}^{-4}\), and \(\xi=0.199\pm 0.006\pm 0.011\pm 0.007\), where the first uncertainties are statistical, the second are systematic, and the third are theoretical. In contrast to the conclusion in Ref. [23] based on the limited statistics from the Crystal Barrel experiment [24], our result indicates that the quadratic term and the \(\omega\) contribution in \(P(s)\), corresponding to statistical significances of \(13\sigma\) and \(34\sigma\), respectively, are necessary. ## V Evidence for the cusp effect in \(\eta^{\prime}\) decays into \(\eta\pi^{0}\pi^{0}\) Experimental studies of light meson decays are important guides to our understanding of how QCD works in Figure 6: Model-dependent fit results in case (a) \(\rho^{0}\)-\(\omega\)-box anomaly and (b) \(\rho^{0}\)-\(\omega\)-\(\rho^{\prime}\). Dots with error bars represent data, the green shaded histograms are the background from \(\eta^{\prime}\) sideband events, the red solid curves are the total fit results, and others represent the separate contributions as indicated. To be visible, the small contributions of \(\omega\), the box anomaly (\(\rho^{\prime}\)) and the interference between \(\omega\) and the box anomaly (\(\rho^{\prime}\)) are scaled by a factor of 20. Figure 5: The \(M(e^{+}e^{-}e^{+}e^{-})\) distribution of data and the fitting results. The dots with error bars are data, the red dashed line is the signal shape, and the solid blue line is the total fit result. The gray area represents the peaking background from \(J/\psi\to\gamma\eta^{\prime},\eta^{\prime}\to\gamma e^{+}e^{-}\), and the cyan dotted line is a linear function. the non-perturbative regime. In \(\pi\pi\) interaction, one of the prominent features is the loop contribution to the \(\pi\pi\) scattering: the \(S\)-wave charge-exchange rescattering \(\pi^{+}\pi^{-}\to\pi^{0}\pi^{0}\) causes a prominent cusp at the center of mass energy corresponding to the summed mass of two charged pions. The cusp effect can shed light on the fundamental properties of QCD at low energies, by determining the strength of the \(S\)-wave \(\pi\pi\) interaction. Using an unbinned maximum likelihood method, we fit the Dalitz plot of \(M^{2}(\pi^{0}\pi^{0})\) versus \(M^{2}(\eta\pi^{0})\) within the framework of NREFT. The resolution effect and detection efficiency are studied by MC simulation and taken into account in the fit. We perform alternative fits within the framework of NREFT to evaluate this effect (Fig. 8). The fit with tree level amplitude (Fit I) shows a discrepancy below the charged pion mass threshold, which implies the existence of the cusp effect. To describe the data in this region, the contributions at one- and two-loop level (Fit II \(\sim\) Fit IV) are introduced in the decay amplitude. We perform alternative analyses by taking into account the cusp effect. For each case, the amplitude provides a good description of the structure around the charged pion mass threshold and the statistical significance is found to be around \(3.5\sigma\)[25]. The scattering length combination \(a_{0}-a_{2}\) is measured to be \(0.226\pm 0.060\pm 0.013\), which is in good agreement with the theoretical value of \(0.2644\pm 0.0051\)[26] within the uncertainties. The observation of the evidence of the cusp effect in \(\eta^{\prime}\to\eta\pi^{0}\pi^{0}\) decay demonstrates the excellent potential to investigate the underlying dynamics of light mesons at the BESIII experiment. ## V First measurement of absolute branching fractions of \(\eta/\eta^{\prime}\) decays As two members of the ground-state nonet of pseudo-scalar mesons, the \(\eta\) and \(\eta^{\prime}\) mesons play an important part in understanding low energy QCD [27]. Precise measurements of their branching fractions (BFs) are important for a wide variety of physics topics. For example, the decay widths of \(\eta/\eta^{\prime}\to\gamma\gamma\) are related to the quark content of the two mesons [28], the BFs of \(\eta/\eta^{\prime}\to 3\pi\) decays can provide valuable information on light quark masses [29], the BFs of \(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) decays are related to details of chiral dynamics [30], and the BFs of some rare decays of the \(\eta\) and \(\eta^{\prime}\) can test fundamental QCD symmetries [31] and probe for physics beyond the standard model [32]. As the BFs of the rare decays are obtained via normalization to the dominant decay modes, Figure 8: The fit result projections divided by phase space of different models to variable (a)\(M^{2}(\eta\pi^{0})\) and (b)\(M^{2}(\pi^{0}\pi^{0})\). The black dots with error bars are from data. The solid lines are fit results from the corresponding models. The red dashed line indicates the charged pion mass threshold. The cusp region is also shown in the inset. Figure 7: The results of the model-independent fit with \(\omega\) interference. Dots with error bars represent data, the (green) shaded histogram is the background contribution from \(\eta^{\prime}\) sideband events, and the (red) solid curve is the fit result. a precise determination of the BFs of the dominant decay modes of the \(\eta\) and \(\eta^{\prime}\) is essential. In Ref. [33; 34], we developed an approach to measure the absolute BFs of the exclusive decays of the \(\eta/\eta^{\prime}\) meson using the data sample collected with the BESIII detector. Taking advantage of the excellent momentum resolution of charged tracks in the MDC, photon conversions to \(e^{+}e^{-}\) pairs provide a unique tool to reconstruct the inclusive photon spectrum from radiative \(J/\psi\) decays. Take \(J/\psi\to\gamma\eta(\eta^{\prime})\) for example, Monte Carlo (MC) study indicates that the energy resolution of the radiative photon could be improved by a factor of three using the photon conversion events. This enables us to tag the \(\eta/\eta^{\prime}\) inclusive decays and then to measure the absolute BF of \(J/\psi\to\gamma\eta(\eta^{\prime})\), using \[Br(J/\psi\to\gamma\eta/\eta^{\prime})=\frac{N^{\rm obs}_{J/\psi\to\gamma\eta/ \eta^{\prime}}}{N_{J/\psi}\cdot\varepsilon\cdot f}, \tag{1}\] where \(N^{\rm obs}_{J/\psi\to\gamma\eta/\eta^{\prime}}\) is the observed \(\eta/\eta^{\prime}\) yield, \(\varepsilon\) is the detection efficiency obtained from MC simulation, and \(N_{J/\psi}\) is the number of \(J/\psi\) events. The photon conversion process is simulated with GEANT4, and \(f\) is a correction factor to account for the difference in the photon conversion efficiencies between data and MC simulation. After the \(\eta/\eta^{\prime}\) inclusive measurement, we present precision measurements of \(\eta\) decays to \(\pi^{0}\pi^{0}\pi^{0}\), \(\pi^{+}\pi^{-}\pi^{0}\), \(\pi^{+}\pi^{-}\gamma\), \(\gamma\gamma\), and \(\eta^{\prime}\) decays to \(\gamma\pi^{+}\pi^{-}\), \(\eta\pi^{+}\pi^{-}\), \(\eta\pi^{0}\pi^{0}\), \(\gamma\omega\), \(\gamma\gamma\), again using \(J/\psi\) decays to \(\gamma\eta/\eta^{\prime}\), but with the radiative photon directly detected by the EMC to improve the statistics. With the help of Eq. (1), the BF for each \(\eta^{\prime}\) exclusive decay is then calculated using \[Br(\eta/\eta^{\prime}\to X)=\frac{N^{\rm obs}_{\eta/\eta^{\prime}\to X}}{ \varepsilon_{\eta/\eta^{\prime}\to X}}\cdot\frac{\varepsilon}{N^{\rm obs}_{J/ \psi\to\gamma\eta/\eta^{\prime}}}\cdot f, \tag{2}\] where \(N^{\rm obs}_{\eta/\eta^{\prime}\to X}\) is the number of signal events obtained from a fit to data and \(\varepsilon_{\eta/\eta^{\prime}\to X}\) is the MC-determined reconstruction efficiency. The measured BF of \(J/\psi\to\gamma\eta(\eta^{\prime})\) is \((1.067\pm 0.005\pm 0.023)\times 10^{-3}\) (\((5.27\pm 0.03\pm 0.05)\times 10^{-3}\)), which is in agreement with the world average value, \((1.085\pm 0.018)\times 10^{-3}\) (\((5.25\pm 0.07)\times 10^{-3}\)) [3], but with a significantly improved precision. In addition, we also give the relative BFs for \(\eta\) and \(\eta^{\prime}\) decays as presented in Tab. 1 and Tab. 2 respectively. ## IV Conclusion The BESIII collaboration has produced fruitful results related with light meson decays, including the studies of the decay dynamics, tests of discrete symmetries, searches for rare decays, and many other interesting results not covered in this proceeding. The BESIII experiment has accumulated 10 billion \(J/\psi\) events in total, which is a unique world wide sample, allows to study the light mesons with unprecedented statistics. Ongoing analyses will produce more precise results in the next years.
2309.14119
Hopf Semimetals
We construct two-band topological semimetals in four dimensions using the unstable homotopy of maps from the three-torus $T^3$ (Brillouin zone of a 3D crystal) to the two-sphere $S^2$. Dubbed ``Hopf semimetals'', these gapless phases generically host nodal lines, with a surface enclosing such a nodal line in the four-dimensional Brillouin zone carrying a Hopf flux. These semimetals show a unique class of surface states: while some three-dimensional surfaces host gapless Fermi-arc states {\em and} drumhead states, other surfaces have gapless Fermi surfaces. Gapless two-dimensional corner states are also present at the intersection of three-dimensional surfaces.
Bhandaru Phani Parasar, Vijay B. Shenoy
2023-09-25T13:19:44Z
http://arxiv.org/abs/2309.14119v1
# Hopf Semimetals ###### Abstract We construct two-band topological semimetals in four dimensions using the unstable homotopy of maps from the three-torus \(T^{3}\) (Brillouin zone of a 3D crystal) to the two-sphere \(S^{2}\). Dubbed "Hopf semimetals", these gapless phases generically host nodal lines, with a surface enclosing such a nodal line in the four-dimensional Brillouin zone carrying a Hopf flux. These semimetals show a unique class of surface states: while some three-dimensional surfaces host gapless Fermi-arc states _and_ drumhead states, other surfaces have gapless Fermi surfaces. Gapless two-dimensional corner states are also present at the intersection of three-dimensional surfaces. _Introduction:_ The understanding and classification of gapped topological phases[1; 2; 3; 4; 5; 6; 7; 8] of non-interacting fermions has not only provided deeper insights, but also, stimulated wider generalizations[9; 10] and the search for topological materials[11]. The current understanding of these gapped phases is built on the symmetry classification of the fermionic systems that arise from the presence or absence of intrinsic symmetries such as time reversal, charge conjugation and sublattice symmetries[12; 13; 14; 8]. In a crystalline system in \(d\)-dimensions, the ground state of a gapped fermionic system is obtained by the state of occupied valance bands in the first Brillouin zone (BZ), the \(d\)-torus \(T^{d}\). Interestingly, the occupied states at any point in the BZ can be viewed as a point in one of the ten symmetric spaces \(\mathcal{S}\), the specific one being determined by the intrinsic symmetry. Topologically distinct gapped ground states are identified with the homotopy classes of maps from \(T^{d}\) to \(\mathcal{S}\), resulting in the periodic table of strong topological phases[5]. Apart from these symmetry protected topological phases, a class of gapless phases have elicited attention, beginning with graphene[15], and more recently, Weyl and Dirac semimetals [16; 17; 18; 19; 20; 21; 22]. Weyl semimetals arise in three dimensions, exploiting the topology in a lower dimensional slice of the Brillouin zone (say the \(k_{1}-k_{2}\) plane) that undergoes a "phase transition" as the \(k_{3}\) of the slice is varied. Thus, these semimetals are protected by the topology of the two adjacent two-dimensional phases, the gapless points being those \(k_{3}\) at which the quantum phase transition between the two-dimensional phases is affected. They have received considerable attention owing to the exotic properties such as Fermi-arc surface states, interesting nonlinear responses related to the chiral anomaly etc. The classification of gapped phases hinges on the number of bands being large. In more mathematical terms, these are determined by the stable homotopies of maps from \(T^{d}\) to \(\mathcal{S}\) which are realized when \(\mathcal{S}\) is large dimensional. In the absence of a large number of bands, one can still obtain topological phases that arise from unstable homotopies of maps from \(T^{d}\) to \(\mathcal{S}\) i. e., when the space \(\mathcal{S}\) is "small dimensional". An example in a three-dimensional lattice that hosts a two-band gapped system is dubbed as a "Hopf insulator"[23; 24; 25] whose topology can be traced to the homotopies of maps from the three-sphere \(S^{3}\) to the two sphere \(S^{2}\). In this paper, we show how a Hopf insulator in 3 dimensions can be used to construct interesting gapless phases in 4 dimensions. These gapless phases have several new features. Unlike the Weyl semimetal in three dimensions, these four-dimensional semimetals host nodal lines of gapless points (a one-dimensional submanifold) in the four-dimensional Brillouin zone. Remarkably, any three-dimensional surface that encloses one of these rings carries an integer Hopf number that characterizes the phases on either side of these rings. These features manifest spectacularly in the nature of gapless surface states. There are three-dimensional surfaces, which host Fermi-arc states and, in addition, gapless drumhead states [26]. Further, we also find evidence of two-dimensional corner states that arise at the intersection of two three-dimensional surfaces of the four-dimensional insulator. This work presents a new class of interesting topological phases in higher dimensions[27; 28; 29]. _Hopf and Hopf-Chern Insulators:_ We begin with a two-band system that realizes an insulting phase on a 3D cubic lattice with a unit lattice spacing. The Brillouin zone (BZ) of this system is the three torus \(T^{3}\) corresponding to \([-\pi,\pi]^{3}\). A generic point in the BZ is denoted by \(\mathbf{k}=(k_{1},k_{2},k_{3})\). A two-band Hamiltonian is defined by \[H(\mathbf{k})=\mathbf{d}(\mathbf{k})\cdot\mathbf{\sigma} \tag{1}\] where \(\mathbf{d}(\mathbf{k})\) is the vector \((d_{1}(\mathbf{k}),d_{2}(\mathbf{k}),d_{3}(\mathbf{k}))\), and \(\mathbf{\sigma}=(\sigma_{1},\sigma_{2},\sigma_{3})\) where \(\sigma_{i}\) are the \(2\times 2\) Pauli matrices. The chemical potential here and henceforth in this paper is set to zero so that the fermionic many-body system is half-filled (1 particle per site). Existence of a gap necessitates that \(|\mathbf{d}(\mathbf{k})|>0\) for \(\mathbf{k}\in T^{3}\), and thus the unit vector \(\hat{d}(\mathbf{k})=\mathbf{d}(\mathbf{k})/|\mathbf{d}(\mathbf{k})|\) can be identified with a point on the two-sphere \(S^{2}\). Consequently, the Hamiltonian Eq. (1) can be viewed as a map from \(T^{3}\) to \(S^{2}\). Distinct insulating topological phases are obtained depending on the homotopy class of the map from \(T^{3}\) to \(S^{2}\), with two insulators being identical if they can be smoothly deformed to each other (i.e., homotopic) without closing the gap. Such maps have been extensively studied both from the mathematical and physical perspectives[30; 31; 32; 33; 34]. The homotopy classes of the maps are characterized by four (integer) numbers \((\chi,(C_{1},C_{2},C_{3}))\). The numbers \(C_{\alpha}\) are the Chern numbers associated with two-dimensional \(T^{2}\) submanifolds of \(T^{3}\), where \(\alpha\) indicates the normal direction to the \(T^{2}\)-submanifold. The number \(\chi\) is in \(\mathbb{Z}_{2Q}\) where \(Q=\mathrm{GCD}(C_{1},C_{2},C_{3})\). Thus, if all \(C_{\alpha}\) are zero, \(\chi\) can take any integer value, and such insulators are termed as Hopf insulators[23; 24; 25]. On the other hand, if any of the \(C_{\alpha}\) is nonzero, then \(\chi\) takes on only a finite set of values, and such insulators are dubbed as Hopf-Chern insulators[25]. Hopf insulators can be constructed[23; 24] using an intermediate map from \(T^{3}\) to \(S^{3}\) (the three-sphere). Since \(S^{3}\) is described by two complex number \(z_{1},z_{2}\) such that \(|z_{1}|^{2}+|z_{2}|^{2}>0\), the prescription \[\begin{split} z_{1}(\mathbf{k},h)&=\sin k_{1}+\mathrm{i }\sin k_{2}\\ z_{2}(\mathbf{k},h)&=\sin k_{3}+\mathrm{i}(\cos k_{1}+ \cos k_{2}+\cos k_{3}+h)\end{split} \tag{2}\] (\(h\) is a parameter, \(\mathrm{i}=\sqrt{-1}\)), is a map from \(T^{3}\) to \(S^{3}\). The topological index \(\Gamma\) ([35], section S1) of this map, vanishes when \(|h|>3\), is 1 for \(1<|h|<3\), and \(-2\) for \(|h|<1\). The map is thus topologically nontrivial for \(|h|<3\). Finally, to obtain a two-band model, the point on \(S^{3}\) is mapped to \(S^{2}\) via the Hopf map[36] \[\begin{split}\mathbf{d}^{(p,q)}(\mathbf{k},h)&=\big{(}2 \Re(z_{1}^{p}(\mathbf{k},h)z_{2}^{*q}(\mathbf{k},h)),\\ 2\Im(z_{1}^{p}(\mathbf{k},h)z_{2}^{*q}(\mathbf{k},h)),&|z_{ 1}(\mathbf{k},h)|^{2p}-|z_{2}(\mathbf{k},h)|^{2q}\big{)}\end{split} \tag{3}\] where \(p,q\) are co-prime integers, \(*\) denotes complex conjugation. Such a map has a Hopf index[36]\(\mathscr{H}=\pm pq\). The Hopf insulator defined using Eq. (3) has vanishing Chern number \(C_{\alpha}\), and is thus characterized by \((\chi,(0,0,0))\) where \(\chi=\Gamma\mathscr{H}\)[24]. Hopf-Chern insulators are those which have a non-zero Chern numbers \(C_{\alpha}\). These are obtained[25; 37] using \[\begin{split}\begin{pmatrix}d_{1}^{(m)}(\mathbf{k})\\ d_{2}^{(m)}(\mathbf{k})\end{pmatrix}&=\begin{pmatrix}\cos mk_{1}&-\sin mk_{1}\\ \sin mk_{1}&\cos mk_{1}\end{pmatrix}\begin{pmatrix}\sin k_{2}\\ \sin k_{3}\end{pmatrix}\\ d_{3}^{(m)}(\mathbf{k})=1+\Delta_{1}(\cos k_{2}+\cos k_{3})+\Delta_{2}\cos k_{2} \cos k_{3}\end{split} \tag{4}\] where \(m\) is an integer and \(\Delta_{1}\),\(\Delta_{2}\) are real parameters. For this model, \(C_{2}=C_{3}=0\) always, and \(C_{1}\) is determined by the values of \(\Delta_{1}\) and \(\Delta_{2}\) (and is not affected by the value of \(m\)). The quantity \(\chi\) is determined by \(m\) as \(\chi=m|C_{1}|\mathrm{mod}\,2|C_{1}|\). _Hopf semimetals:_ Realization of such topological phases in \(d\) dimensions allows us to construct interesting semimetallic phases in \(d+1\) dimensions. In fact, the well-studied Weyl semimetallic phases are examples of such physics. These phases in three dimensions arise from the topological Chern insulators in two dimensions and enjoy a degree of protection owing to the stability of Weyl points where \(d_{i}(\mathbf{k})=0\). The Weyl points are those at which all components of \(\mathbf{d}\) vanish, each such equation describing a surface embedded in three dimensions. Three such surfaces generically intersect at isolated points, leading to the stability of Weyl points to small perturbations of the Hamiltonian. Taking this idea to a four-dimensional two-band system, semimetallicity will require that \(d_{i}(\mathbf{K})=0\) for \(i=1,2,3\) where the gap closes; \(\mathbf{K}\) is a point in the BZ \(T^{4}\) (four-torus, \([-\pi,\pi]^{4}\)) of the 4D cubic lattice with \(\mathbf{K}=(k_{1},k_{2},k_{3},k_{4})\equiv(\mathbf{k},k_{4})\). This condition, if satisfied, will be generically met on a one-dimensional submanifold of \(T^{4}\). The conclusion is that the semimetals arising in these four-dimensional systems will generically possess _nodal lines_. The exciting aspect here is that these line nodes enjoy a degree of protection in that small perturbations cannot remove them but, at best, change their shape. To construct such a semimetal arising from the topology of the Hopf insulator, we first study a quantum phase transition that occurs in the 3D Hopf insulator. Consider a three-dimensional system with a tuning parameter \(\lambda\) \[\mathbf{d}(\mathbf{k},h,\lambda)=(1-\lambda)\mathbf{d}^{(1,1)}(\mathbf{k},h)+\lambda\mathbf{d}^{ \sharp}(\mathbf{k}) \tag{5}\] where \(\mathbf{d}^{(1,1)}(\mathbf{k})\) is the dispersion in Eq. (3), and \(\mathbf{d}^{\sharp}(\mathbf{k})=(0,0,1)\) is the dispersion of a gapped flat band system which is topologically trivial. When \(\lambda=0\), the system hosts Hopf insulating phases in a regime of the parameter \(h\), and a topologically trivial phase for \(\lambda=1\). For intermediate values of \(\lambda\) we obtain a variety of semimetallic phases (see Fig. 1). These semimetallic phases are characterized by a change in Chern numbers of the \(T^{2}\) submanifolds of \(T^{3}\) ([35], section S2), as illustrated for \(\lambda=1/4\) in Fig. 1 (inset). We construct a semimetallic phase on a four-dimensional cubic lattice by defining for each \(\mathbf{K}\in T^{4}\) \[\mathbf{d}(\mathbf{K},\lambda)=(1-\lambda)\mathbf{d}^{(1,1)}(\mathbf{k},-3+\cos k_{4})+\lambda\bm {d}^{\mathsf{f}}(\mathbf{k}) \tag{6}\] The Hamiltonian Eq. (1) obtained using this produces a semimetallic phase for a range \(0\leq\lambda<1/2\). Focusing first on \(\lambda=0\), we find that the bulk gap closes at _two points_ in \(T^{4}\), namely \(\mathbf{K}_{H}^{\pm}(0,0,0,\pm\pi/2)\), where the bands touch quadratically ([35], section S3). Most interestingly, these points are a source of "Hopf flux" in \(T^{4}\); this is most easily seen by enclosing, for example, the point \(\mathbf{K}^{+}\) by a ball \(B^{+}=|\mathbf{K}-\mathbf{K}^{+}|\leq\epsilon\) where \(\epsilon\) is a small number, the boundary of this ball \(\partial B^{+}\) is homeomorphic to \(S^{3}\), and \(\mathbf{d}(\mathbf{K},0),\mathbf{K}\in\partial B^{+}\) defines a map from \(S^{3}\) to \(S^{2}\). Interestingly, the map carries a non-vanishing Hopf index \(\mathscr{H}=-11\) ([35], section S3), pointing to the topological nature of this "Hopf semimetal" similar to what is found in a three-dimensional Weyl semimetal. However, as noted in the discussion above, point touching of two bands in 4 dimensions is not stable (in contrast to the 3D Weyl semimetal), and this is indeed seen in our construction. For a small \(\lambda>0\), we find that the two Hopf points evolve to nodal lines where the bands touch linearly except at two points on the nodal line ([35], section S3). With increasing \(\lambda\), the size of the nodal lines centered around \(\mathbf{K}_{\pm}\) increases. Fig. 2 (thick blue lines) shows the nodal lines for \(\lambda=1/4\) in the \(k_{1}=0\)\(T^{3}\) sub-manifold of the \(T^{4}\) BZ. The nodal lines appear in the \(k_{3}-k_{4}\) plane (\(k_{1}=k_{2}=0\)), and encircle the Hopf points \(\mathbf{K}_{H}^{\pm}\) extending from \(k_{4}^{\min}\leq|k_{4}|\leq k_{4}^{\max}\), \(k_{4}^{\min}=\arccos\sqrt{\frac{\lambda}{1-\lambda}},k_{4}^{\max}=\pi-\arccos \sqrt{\frac{\lambda}{1-\lambda}}\) and described by the equation \[2(1-\cos k_{3})(1-\cos k_{4})+\cos^{2}k_{4}=\frac{\lambda}{1-\lambda}. \tag{7}\] The nodal lines \(L^{\pm}\) respectively encircle \(\mathbf{K}_{H}^{\pm}\). The intriguing aspect is that the nodal lines also carry the same Hopf number, i.e. if we place balls \(B^{\pm}\) centered around \(\mathbf{K}_{H}^{\pm}\), and _enclosing_ the nodal lines \(L^{\pm}\), then the Hamiltonian on the surface of the ball \(\partial B^{\pm}\) defines a Hopf map such that the Hopf invariant associated with the \(L^{\pm}\) nodal lines are opposite of each other. This demonstrates the topological origin of the nodal lines and their stability. The nodal lines which appear between \(\pm k_{4}^{\min}\) and \(\pm k_{4}^{\max}\) separate three dimensional \(T^{3}\) sub-manifolds of \(T^{4}\) that carry distinct invariants \(\chi\). Indeed, for all the \(T^{3}\) submanifolds with \(|k_{4}|<k_{4}^{\min}\), the invariant \(\chi=1\). This change of topology of the bands along \(k_{4}\) is encoded in the Hopf number on the surface of \(\partial B^{\pm}\). We next investigate the nature of the surface states of the four-dimensional Hopf semimetal. The surface of this system is characterized by a normal direction, and is a "three-dimensional crystal" with a \(T^{3}\) surface Brillouin zone. Depicted in Fig. 2 for the surface with the \((1,0,0,0)\) normal, are a remarkably rich set of surface states. First, there is a set of gapless "Fermi-arc" states that exist between \(\pm k_{4}^{\min}\), depicted by the solid green line in Fig. 2. These arise from the \((1,0,0)\) surface states of the \(\chi=1\) Hopf insulator realized in the \(T^{3}\) submanifolds in this regime of \(k_{4}\). There are additional surface states that arise in the regime \(k_{4}^{\min}<|k_{4}|<k_{4}^{\max}\). In fact, all the points in the \(T^{3}\) surface BZ that are inside the nodal line projected onto the surface BZ host gapless states that are higher-dimensional analogs of drumhead states([26] and references therein). Details of all of these states may be found in [35], section S4. The Hopf semimetal holds further interesting aspects when we study the surface states on the \((0,0,1,0)\) surface. This surface hosts two types of gapless states (see Figure 2: Hopf nodal line semimetal for \(\lambda=1/4\) in Eq. (6). Blue: nodal lines (nodal lines lie in the \(k_{1}=0\) submanifold of \(T^{4}\)). The figure also depicts \(T^{3}\) the _surface BZ_ of the \((1,0,0,0)\) surface. Green: Fermi arc surface states that correspond to the \((1,0,0)\) surface states of the \(\chi=1\) Hopf insulator. Red: Drumhead surface states that correspond to the edge states of the \(C_{3}=1\) Chern insulator. Figure 3: Surface states of the Hopf nodal line semimetal in the \((0,0,1,0)\) surface BZ. Red: “Fermi surface” states that correspond to the \((0,0,1)\) surface states of the \(\chi=1\) Hopf insulator. Blue: Fermi arc states that arise from the projection of the nodal lines onto the surface BZ. Fig. 3). We find, first, a "Fermi surface" of gapless states between \(|k_{4}|<k_{4}^{\rm min}\); these are the surface states of the \(\chi=1\) Hopf insulator that is realized in the \(T^{3}\) submanifolds. In addition, there are other gapless states shown by the blue lines of the same figure; these are gapless states corresponding to the projection of the gapless nodal line on to the surface BZ. Finally, we also point out the possibility of interesting "corner states" in this Hopf semimetal that arise in the two-dimensional intersection of two three-dimensional surfaces. As an example, The corner formed by the intersection of two surfaces (1,0,0,0) and (0,1,0,0), will have a two-dimensional \(T^{2}\) Brillouin zone labeled by \((k_{3},k_{4})\). The corner states arise because the corner terminates \(1-2\) planes of the four-dimensional crystal. In the instance of \(\lambda=1/4\), some of the \(1-2\) submanifolds (which are \(T^{2}\)) host non-zero Chern numbers in the regime \(k_{4}^{\rm min}<|k_{4}|<k_{4}^{\rm max}\), and should result in the "corner drumhead states" in the \(T^{2}\) Brillouin zone. Other corners (intersections of different three-dimensional surfaces) will host Fermi arc states. While our calculations are consistent with this possibility, a full demonstration of this requires very large system sizes. In the last part of this paper, we demonstrate that such semimetals in four dimensions can also be constructed out of the Hopf-Chern insulators. To achieve this we first construct as 3d system with a parameter \(\lambda\), \[\mathbf{d}(\mathbf{k},\lambda)=(1-\lambda)\mathbf{d}^{(1)}(\mathbf{k})+\lambda\mathbf{d}^{(0)}( \mathbf{k}) \tag{8}\] where \(\mathbf{d}^{(1)}\) and \(\mathbf{d}^{(0)}\) are obtained from Eq. (4) with \(3\Delta_{1}=\Delta_{2}=\frac{3}{2}\). This system undergoes a quantum phase transition from a Hopf-Chern insulator with \((\chi=2,(C_{1}=2,C_{2}=0,C_{3}=0))\) to another insulator with \((\chi=0,(C_{1}=2,C_{2}=0,C_{3}=0)\) which can be viewed as a stack of Chern insulators. The phase transition[38] occurs at \(\lambda=1/2\) where the gap closes along two nodal-line rings, and \(\chi\) changes from \(2\) to \(0\) for \(\lambda>1/2\), with no change in the Chern numbers, i. e., \(C_{1}=2\) for \(\lambda\neq 1/2\) (\(C_{2,3}=0\)). We use the above to construct a Hopf-Chern semimetal in a four dimensional cubic lattice, via \[\mathbf{d}(\mathbf{K})=\left(\frac{1-\cos k_{4}}{2}\right)\mathbf{d}^{(1)}(\mathbf{k})+\left( \frac{1+\cos k_{4}}{2}\right)\mathbf{d}^{(0)}(\mathbf{k}) \tag{9}\] where \(\mathbf{K}=(\mathbf{k},k_{4})\) is a point in BZ \(T^{4}\). This system hosts nodal lines in the \(k_{2}-k_{3}\) planes with \(k_{4}=\pm\frac{\pi}{2}\) as shown by blue lines in Fig. 4(a). The \(T^{3}\) submanifolds of \(T^{4}\) with \(|k_{4}|<\pi/2\) have Hopf-Chern character with \(\chi=2\). The topological aspects of this semimetal are again evident in the nature of the surface states that it hosts. On a surface with \((1,0,0,0)\) as the normal, we find a "Fermi-cylinder" of gapless states (see Fig. 4(a)). The end-arcs of these cylinders are the projection of the nodal lines onto the surface BZ. These arise from the surface states of the Hopf-Chern insulators residing in \(|k_{4}|<\pi/2\) region of \(T^{4}\) BZ. Turning now to the \((0,1,0,0)\) surface (see Fig. 4(b)), we find that there are two types of gapless states. Since this also terminates at 2-direction of the crystal, it terminates \(2-3\) planes of the crystal, which carry a Chern number (independent of the value of \(k_{4}\)). Thus, there is a set of gapless states on the planes with \(k_{3}=0\) and \(k_{3}=\pi\). Finally there is a second set of gapless states that arise as disjoint nodal lines on the \(k_{1}=\pi\) plane of the surface \(BZ\); these states are the projections of the nodal lines on to the surface BZ. It is also clear that this system can host a variety of corner states, states residing on the two-dimensional intersection of two three-dimensional surfaces. It is interesting to explore the possibilities of experimental realization of these four-dimensal Hopf-semimetals and their surface states exploiting the ideas of synthetic dimensions[39] in cold atoms[40] and photonic systems[41]. Further theoretical investigations should also be fruitful. It will be interesting to find generalizations of such semimetals using other recently proposed topological phases[42] in three dimensions. Understanding the responses[37; 43] of the Hopf semimetals also provides an exciting direction. _Acknowledgement:_ BPP thanks the PMRF program for support. VBS acknowledges DST-SERB, India, for support through a MATRICS grant.
2309.12568
A Study on Learning Social Robot Navigation with Multimodal Perception
Autonomous mobile robots need to perceive the environments with their onboard sensors (e.g., LiDARs and RGB cameras) and then make appropriate navigation decisions. In order to navigate human-inhabited public spaces, such a navigation task becomes more than only obstacle avoidance, but also requires considering surrounding humans and their intentions to somewhat change the navigation behavior in response to the underlying social norms, i.e., being socially compliant. Machine learning methods are shown to be effective in capturing those complex and subtle social interactions in a data-driven manner, without explicitly hand-crafting simplified models or cost functions. Considering multiple available sensor modalities and the efficiency of learning methods, this paper presents a comprehensive study on learning social robot navigation with multimodal perception using a large-scale real-world dataset. The study investigates social robot navigation decision making on both the global and local planning levels and contrasts unimodal and multimodal learning against a set of classical navigation approaches in different social scenarios, while also analyzing the training and generalizability performance from the learning perspective. We also conduct a human study on how learning with multimodal perception affects the perceived social compliance. The results show that multimodal learning has a clear advantage over unimodal learning in both dataset and human studies. We open-source our code for the community's future use to study multimodal perception for learning social robot navigation.
Bhabaranjan Panigrahi, Amir Hossain Raj, Mohammad Nazeri, Xuesu Xiao
2023-09-22T01:47:47Z
http://arxiv.org/abs/2309.12568v1
# A Study on Learning Social Robot Navigation ###### Abstract Autonomous mobile robots need to perceive the environments with their onboard sensors (e.g., LiDARs and RGB cameras) and then make appropriate navigation decisions. In order to navigate human-inhabited public spaces, such a navigation task becomes more than only obstacle avoidance, but also requires considering surrounding humans and their intentions to somewhat change the navigation behavior in response to the underlying social norms, i.e., being socially compliant. Machine learning methods are shown to be effective in capturing those complex and subtle social interactions in a data-driven manner, without explicitly hand-crafting simplified models or cost functions. Considering multiple available sensor modalities and the efficiency of learning methods, this paper presents a comprehensive study on learning social robot navigation with multimodal perception using a large-scale real-world dataset. The study investigates social robot navigation decision making on both the global and local planning levels and contrasts unimodal and multimodal learning against a set of classical navigation approaches in different social scenarios, while also analyzing the training and generalizability performance from the learning perspective. We also conduct a human study on how learning with multimodal perception affects the perceived social compliance. The results show that multimodal learning has a clear advantage over unimodal learning in both dataset and human studies. We open-source our code for the community's future use to study multimodal perception for learning social robot navigation.1 Footnote 1: GitHub: [https://github.com/RobotiXX/multimodal-fusion-network/](https://github.com/RobotiXX/multimodal-fusion-network/) ## I Introduction Thanks to decades of robotics research [1, 2], autonomous mobile robots can navigate from one point to another in a collision-free manner in many real-world environments, e.g., factories and warehouses. Using onboard sensors, e.g., LiDARs and RGB cameras, those robots can perceive the environments, divide their workspaces into obstacles and free spaces, and then make navigation decisions to avoid obstacles and move towards their goal [3, 4, 5, 6]. However, when deploying mobile robots in human-inhabited public spaces, the navigation task becomes more complex [7, 8, 9]: While avoiding any obstacle on the way to the goal is still required, they also need to consider other humans sharing the same environments and adjust their decision-making process to produce new navigation behaviors that respond to the underlying, usually unwritten, social norms. One avenue to achieve such social compliance is machine learning [10]. Learning approaches allow those complex and subtle human-robot interactions during social navigation to be captured in a data-driven manner and alleviate roboticists from manually designing simplified models [11, 12], crafting cost functions [13, 14], and fine-tuning system parameters [15, 16, 17, 18, 19]. The development of machine learning infrastructure, e.g., onboard computation devices and an extensive corpus of perception data being generated from robots, also accelerates the adoption of learning methods for social robot navigation. Most current robots have multiple sensors onboard, with LiDARs and RGB cameras as the most common sensing modalities, and are therefore able to perceive complex social interactions from different sources (Fig. 1). While LiDARs have been the main perception modality for mobile robots for decades, recent research has shifted towards visual navigation with RGB input alone, thanks to its cheap cost and wide availability. Intuitively speaking, LiDARs provide high-resolution and high-accuracy geometric information about the environments, while cameras stream in RGB images which contain rich semantics. Both geometric and semantic information play a role in the decision making process of social robot navigation: Geometric structures like obstacles and humans need to be avoided, while semantics including navigation terrain, human gaze [20, 21], gesture, clothing, and body language can shed light on the navigation contexts and other humans' intentions to inform robot navigation decisions. Considering the rich and potentially complementary information provided by multiple available sensor modalities onboard mobile robots and the efficiency of learning methods in enabling emergent social robot navigation behaviors, this paper presents a comprehensive study on using multimodal Fig. 1: Social Robot Navigation Decision Making on the Global and Local Level with Multimodal and Unimodal (RGB Image and Point Cloud) Perception Input. perception of LiDAR and RGB camera inputs, two most common perception modalities of autonomous mobile robots, to learn the robot decision making process during social robot navigation. The study is conducted on a large-scale real-world Socially Compliant Navigation Dataset (scand) [22] collected in a variety of natural crowded public spaces on a university campus. From the social robot navigation perspective, we study the decision-making capability of multimodal and unimodal learning on both global and local planning in different social scenarios (e.g., Against Traffic, With Traffic, and Street Crossing); From the machine learning perspective, we study the training and generalizability performance of multimodal and unimodal learning in terms of training time, loss value, generalization accuracy, etc. We also conduct a human study and reveal how social compliance achieved by different sensor modalities can be perceived by humans interacting with the robot. The results show that multimodal learning is more reliable and robust than using unimodal networks in both dataset and human studies. ## II Related Work We review related work in social robot navigation, machine learning for navigation, and multimodal learning. ### _Social Robot Navigation_ While collision-free navigation has been investigated by the robotics community for decades [1, 2, 3, 4, 5, 6], roboticists have also built mobile robots that navigate around humans since the early museum tour-guide robots RHINO [23] and MINERVA [24]. Going beyond simply treating humans as dynamic, non-reactive obstacles [4], researchers have also modeled the uncertainty of human movements [25, 26, 27, 28, 29] or prescribed social norms for navigating agents [30, 31, 32], and then devised navigation planners that can take such uncertainty into account or abide such selected rules. These physics-based models [33, 34, 35, 36] consider humans' behavior features, such as proxemics [37, 38, 39, 40], intentions [41, 42], and social formations and spaces [43, 44, 20, 27, 20]. However, prescribing a simple model is usually not sufficient to capture complex human behaviors in the wild. For example, pedestrians move differently during rush hours or on weekends, within formal or informal contexts. Furthermore, such a plethora of factors to be considered during social robot navigation all have to be processed from raw perceptual data from onboard sensors, e.g., LiDARs and RGB cameras, and set challenges for onboard perception algorithms, e.g., human tracking, motion prediction, and intention detection. Along with the recent success in machine learning, both these challenges led to the recent adoption of data-driven approaches for social robot navigation [10]. ### _Machine Learning for Navigation_ As a potential solution to the aforementioned challenges, machine learning approaches have been leveraged to implicitly encode the complexities and subtleties of human social behaviors in a data-driven manner [10] and also address other challenges in navigation, e.g., off-road navigation [45, 46, 47, 48, 49, 50]. These data-driven approaches include learning representations or costmaps [13, 51, 52, 53, 14], parameterizations of navigation planners [15, 16, 17, 18, 19, 54], local planners [55, 56, 57, 58, 59], or end-to-end navigation policies that map directly from raw or pre-processed perceptions of the humans in the scene to motor commands that drive the robot [60, 61, 62]. From the perspective of machine learning methods, reinforcement learning [19, 58, 59, 63, 63] and imitation learning [16, 17, 18, 64, 65, 66] depend on training data from mostly simulated trial-and-error experiences and either human or artificial expert demonstrations respectively. Considering the difficulty in producing high-fidelity perceptual data and natural human-robot interactions in simulation, this study adopts an imitation learning setup, in particular, Behavior Cloning (BC) [65, 66], with a large-scale social robot navigation demonstration dataset. ### _Multimodal Learning_ Recent research has shown that combining data from different modalities in a multimodal learning framework can lead to promising results in solving downstream tasks [67]. For autonomous mobile robot navigation, researchers have tried sensor fusion by combining RGB cameras, LiDARs, and robot odometry with a multimodal graph neural network to navigate unstructured terrain including bushes, small trees, and grass regions of different heights and densities [68]. Furthermore, they have demonstrated the robustness of the network towards partial occlusion and unreliable sensor information in challenging outdoor environments. Other researchers have also combined laser, RGB images, point cloud, and distance map to learn navigation in time-sensitive scenarios such as disaster response or search and rescue, which include constrained narrow passages, pathways with debris, and irregular navigation scenarios [69]. Additionally, they have demonstrated that multimodal networks outperformed models that only utilized RGB images and distance maps. Multimodal perception has been shown to be valuable in addressing different challenges during real-world navigation tasks, but to the best of our knowledge, investigation into how multimodal perception can affect decision making during social robot navigation is still very limited, which is the focus of this study. Notice that we are interested in learning social robot navigation with multimodal _perception_ as input [67], rather than learning models with multimodal _distribution_, which has a relatively richer literature [70, 71, 72]. ### _Socially Compliant Robot Navigation Dataset (scand)_ Our study is based on an open-source, large-scale, real-world social robot navigation dataset, scand[22], of 8.7 hours, 138 trajectories, 40 kilometers of socially compliant, human teleoperated driving demonstrations that comprise multimodal data streams including 3D LiDAR, visual and inertial information, robot odometry, and joystick commands, collected on two morphologically different mobile robots--a Boston Dynamics Spot and a Clearpath Jackal--by four different human demonstrators in both indoor and outdoor environments. Due to its rich social interactions and multimodal perception-to-action navigation decisions, scand is suitable for studying social robot navigation learning with multimodal perception. Specifically, we study the effect of both point cloud data from a 3D LiDAR and RGB images from a camera, the most commonly available perception modalities onboard mobile robots, considering the geometric and semantic information provided by the point cloud data and RGB images can complement each other to assist decision making during social robot navigation in human-inhibited public spaces. ## III Multimodal Learning for Social Robot Navigation We adopt an imitation learning approach, i.e., BC, to learn socially compliant navigation decisions using multimodal perception from scand. Similar to classical navigation systems with a global and a local planning system, we design our multimodal learning framework so that it will produce both global and local plans and study how multimodal and unimodal learning can imitate the navigation decisions made by the human demonstrator on both global and local levels. ### _Problem Formulation_ Specifically, at each time step \(t\) of each trial in scand, the robot receives onboard perceptual input, including a sequence of 3D LiDAR point cloud data \(L\) and RGB images \(I\), and a goal \(G\) it aims to reach, which is taken as a waypoint 2.5m away from the robot on the future robot odometry. We denote all these inputs necessary to inform the decision-making process during social robot navigation as a navigation input: \(\mathcal{I}_{t}^{D}=\{L_{k}^{D},I_{k}^{D},G_{t}^{D}\}_{k=t-N+1}^{t}\), where \(N\) denotes the history length included in the navigation input at \(t\) and \(D\) denotes that the data is from the scand demonstrations. Facing a social navigation input \(\mathcal{I}_{t}^{D}\), the scand demonstrator shows the desired, socially compliant navigation decision \(\mathcal{D}_{t}\) on both global and local levels: \(P_{t}\) is the demonstrated global plan, recorded as the human-driven future robot odometry starting from time \(t\), and takes the form of a sequence of 2D waypoints \(P_{t}^{D}=\{(x_{i}^{D},y_{i}^{D})\}_{i=t}^{t+M-1}\); \(A_{t}\) is the demonstrated local plan represented as a sequence of joystick action commands \(A_{t}^{D}=\{(v_{t}^{D},\omega_{i}^{D})\}_{i=t}^{t+K-1}\), where \(v\) and \(\omega\) is the linear and angular velocity respectively. \(M\) and \(K\) denote the length of the navigation decision on the global and local plan level respectively. The demonstrated navigation decision is therefore defined as \(\mathcal{D}_{t}^{D}=\{P_{t}^{D},A_{t}^{D}\}\). Producing the navigation decision \(\mathcal{D}_{t}^{D}\) based on \(\mathcal{I}_{t}^{D}\) as input, a navigation system is defined as a combination of two functions, \(\mathcal{F}_{\theta}^{g}(\cdot)\) and \(\mathcal{F}^{l}(\cdot)\), responsible of generating the global plan \(P_{t}^{D}\) and local plan (action) \(A_{t}^{D}\): \[P_{t}^{D} =\mathcal{F}^{g}(\mathcal{I}_{t}^{D}),\] \[A_{t}^{D} =\mathcal{F}^{l}(\mathcal{I}_{t}^{D},P_{t}^{D}).\] In a data-driven manner, we instantiate both global and local planners by learning \(\mathcal{F}_{\theta}^{g}(\cdot)\) and \(\mathcal{F}_{\phi}^{l}(\cdot)\) as deep neural networks with learnable parameters \(\theta\) and \(\phi\) respectively. In particular, we aim to learn the parameters to minimize a BC loss: \[\theta^{*},\phi^{*}=\operatorname*{argmin}_{\theta,\phi}\sum_{P_{t}^{D},A_{t} ^{D},\mathcal{I}_{t}^{D}\in\text{scand}} \tag{1}\] where the first term is the difference between demonstrated and learned global plan, while the second term is for the local plan, with \(\lambda\) as a weight between them. In this study, we are interested in studying the effect of including different perception modalities in \(\mathcal{I}_{t}\) on making socially compliant navigation decisions \(P_{t}\) and \(A_{t}\). We study three scenarios, i.e., multimodal perception \(\mathcal{I}_{t}^{\text{MM}}=\{L_{k},I_{k},G_{t}\}_{k=t-N+1}^{t}\), unimodal LiDAR (point cloud) perception \(\mathcal{I}_{t}^{\text{LDAR}}=\{L_{k},G_{t}\}_{k=t-N+1}^{t}\), and unimodal vision (RGB image) perception \(\mathcal{I}_{t}^{\text{vision}}=\{I_{k},G_{t}\}_{k=t-N+1}^{t}\). For simplicity and consistency, we keep \(N=1\) for all three cases in this study and leave an investigation into different history lengths as future work. ### _Unimodal Perception_ #### Iii-B1 Point Cloud Modality We take points that are within the range of 8 meters in front, 3 meters on either side and within 2.5 meters of height from the robot as perceived by the 3D LiDAR. All points are placed into their respective voxel inside a 3D voxel grid with \(5\times 5\times 5\)cm voxels, resulting in a \(160\times 120\times 50\) voxel representation for \(L_{k}\). We use a 3D Convolution Neural Network (CNN) [73] to process the voxel representation to extract meaningful information for our downstream social robot navigation task. The point cloud encoder is shown as the green trapezoid in the red box at the bottom of Fig. 2. #### Iii-B2 RGB Modality For RGB images, we take a \(224\times 224\times 3\) image from the camera as input. We use ResNet-18 [74] to extract features for our social robot navigation task. The image encoder is shown as the green trapezoid in the yellow box at the top of Fig. 2. Both RGB and point cloud inputs have their own unimodal decision making modules, shown in the upper yellow and lower red box in Fig. 2 respectively. For a fair comparison, we enforce the same architecture, the only difference is the different input modalities. To be specific, we concatenate the embeddings from the corresponding input encoders with the local goal (2.5m away), and feed them into a Recurrent Neural Network (RNN) to capture history information (blue ellipsoids in Fig. 2). Then we use a Multi-Layer Perceptron (MLP) (yellow boxes in Fig. 2) to produce global plan in the form of a sequence of 2D waypoints (red dots in Fig. 2), which are further fed into another MLP. Concatenating the MLP output with the RNN output, a transformer, and another MLP at the end produces local plan, i.e., actions of linear and angular velocities. ### _Multimodal Fusion_ For multimodal fusion, the outputs of the RNNs from the point cloud and image modules are concatenated and passed through the fusion process, shown in Fig. 2 middle. Similar to the unimodal modules, our feature fusion also happens at two different places in our multimodal network. Each fusion caters to different downstream tasks, i.e., producing both global and local plans. ### _Navigation Decisions and Loss Functions_ The global navigation decisions are instantiated as a sequence of five future waypoints ahead of the robot, i.e., \(P_{t}^{D}=\{(x_{i}^{D},y_{i}^{D})\}_{i=t}^{t+4}\) (\(M=5\)), each of which is 0.5m apart taken from the future robot odometry. The local navigation decisions take the form of the current linear and angular velocity commands, i.e., \(A_{t}^{D}=\{(v_{t}^{D},\omega_{t}^{D})\}\) (\(K=1\)). For the first and second loss terms in Eqn. 1, we use \(L2\)-norm of the five future waypoints and \(L1\)-norm of the current angular and linear velocity. We set \(\lambda=1\). ### _Design Choices_ Notice that all aforementioned design choices with respect to neural network hyper-parameters and architecture are made after extensive trial-and-error and careful fine-tuning to ensure the different modalities can achieve the best learning performance for a fair comparison. All detailed hyper-parameters and design choices can be found in our open-source implementation for the future use of the community. We have experimented with PointNet [75] and PointNet++ [76] for the point cloud encoder, which does not perform well on scand social navigation scenarios: PointNet encodes individual point and relies on the global pooling layers to extract effective features. However, encoding points for highly diverse indoor and outdoor scand scenarios is not effective. Unlike closed, and small-scale indoor objects, point clouds collected during real-world robot navigation contain significantly more variation in terms of the number and distribution of points. Our further investigation into the point cloud encoder reveals that converting them to a voxelized grid and then processing them through a 3D CNN network results in a significant performance gain. We also try to learn local planner using simple MLP, but it fails to capture the variations in scand. For instance, for the same global path, there can be different velocities: If humans are nearby the linear velocity will be slower, in contrast to a scenario where they are far apart. Transformer can achieve significant performance gain because of the attention modules which can decide which features it should attend to in order to capture these variations. ## IV Scand Study Results We first present our study results on all the social scenarios in scand before presenting our human study results. We divide the scand trials into 18 for training and 8 for testing. We analyze the learning results on the test data from both the machine learning and social robot navigation perspectives. The training loss curves for the global planner in terms of L1 loss on the eight scand rosbags are shown in Fig. 3, while the local planner loss in Fig. 4. We also plot the performance of a variety of classical social robot navigation planners using the same loss function between their output and the scand demonstration to compare against end-to-end learned policies. ### _Multimodal Learning Performance_ The results of the eight test scand rosbags are ordered roughly according to increasing performance discrepancy among different modalities in Fig. 3, which can also be treated as an approximate representation of the "difficulty" level in social robot navigation decision making. For example, the loss values of most modalities converge faster and to a lower point in the earlier "easy" trials (upper left), compared to the later "difficult" ones (lower right). It is clear that in terms of test loss for global planning, learning with multimodal perception significantly outperforms both unimodal perception modalities. The multimodal test loss shown by the green curves drops faster, converges at a smaller epoch number, and reduces to a lower value Fig. 2: Image Module, Fusion (Multimodal) Module, and Point Cloud Module Architecture for Social Robot Navigation. compared to both the yellow and blue curves for RGB image and point cloud respectively. It is also worth noticing that the green multimodal learning curves are similar and consistent across all eight test scand rosbags with different social interactions in different social scenarios, showing the advantage of multimodal learning from both point cloud and RGB image. Another very clear trend is that for the two unimodal perception types, point cloud perception consistently outperforms RGB image in all test trials, despite underperforming multimodal learning. In the earlier "easier" trials, point cloud performs slightly better than RGB image and has a relatively larger discrepancy compared to multimodal learning. For the later "difficult" trials, such trend is reversed, with the point cloud blue curves come closer to the multimodal green curves, compared to the RGB yellow curves. Considering that there is no significant difference on the local planning loss curves across the eight test scand rosbags, for the sake of space, we combine all eight curves into one for each modality and show them in Fig. 4. We observe a similar trend in learning local planning from all three perception modalities: Multimodal learning can achieve slightly better performance at imitating the scand demonstrations than learning with the point cloud, which further outperforms learning with RGB image. ### _Multimodal Social Compliance_ In addition to the pure machine learning statistics, we also discuss how each perception modality performs with different social interactions in different social scenarios. As discussed above, the learning performance of RGB image decreases from the first to the last test trial and results in a more-than-doubled loss value in Fig. 3, while multimodal learning and point cloud learning consistently maintain similar performance. We also list the majority of the social scenarios presented in each test scand rosbag at the top of each subfigure in Fig. 3. We observe that the increasing "difficulty" level (mostly for RGB images) directly corresponds to increased human density caused by more confined social spaces and larger number of humans in the crowd. While learning with RGB image produces performance only slightly worse than point cloud and multimodal learning in the first "with traffic" scenario, which is a relatively simple scenario on a wide open walkway on the UT Austin campus, including "against traffic" human crowds and constraining navigation on a sidewalk instead of an open walkway deteriorates the performance of learning with RGB image only (first row in Fig. 3). When the "difficulty" level keeps increasing by adding more complex social scenarios such as "street crossing", "large group/crowd", and "narrow hallway", RGB image's performance keeps degrading. We posit that such performance degradation is caused by the increased complexity and variance in the RGB input, which prevent learning with RGB image only from generalizing to unseen data in challenging social scenarios. Furthermore, considering the lack of direct and explicit geometric information from RGB images, operating mobile robots in confined social spaces with large human crowds is also less safe compared to point cloud, whose geometric information can be utilized to assure safety, i.e., asserting a safe stopping behavior when the distance between the robot and the humans in the scene is too close. Such a lack of safety by relying only on RGB images is also apparent in our human study (see details in Sec. V). The obvious gap between multimodal and point cloud Fig. 3: Test Loss on Eight scand rosbags with Multimodal, Point Cloud, and RGB Image Input (Averaged Over Three Training Runs with Negligible Variances Invisible in the Figures). learning is also of interest. While both of them are able to perform similarly across all eight test scand rosbags, multimodal learning maintains a very consistent advantage over point cloud alone in terms of a lower converged loss value and fewer epochs until convergence. We posit that the additional semantic information provided by the RGB image in addition to the pure geometric data from point cloud can provide extra relevant social cues to inform social navigation decision making. Such an empirical gap reveals the necessity of including semantic information in the social robot navigation decision making process, compared to traditional autonomous mobile robot navigation, for which avoiding obstacles is the only concern. ## V Human Study Results We conduct a human study to test whether the findings from our scand study can translate to real-world social robot navigation. We use a Clearpath Jackal robot with a Velodyne VLP-16 LiDAR and a ZED2 RGB-D camera for the point cloud and RGB image input respectively. We recruit eight human subjects for our human study. Two sets of experiments are designed according to a previous protocol to evaluate social robot navigation [77]: frontal approach of the robot with one and two human participants in a public outdoor space (Fig. 6). In the one-human study, participants are instructed to take a natural path towards the robot; Participants in the two-human study are instructed to take three different approaches to initiate social interactions: move directly towards the robot, move forward then diverge, and move towards one side of the robot. After deploying the RGB module, we found that the robot may move dangerously close to the human subjects. Therefore, we exclude the RGB module in the human study. After each human-robot interaction, we ask the participant to fill in a standard questionnaire [77] with five questions2: _1. The robot moved to avoid me, 2. The robot obstructed my path\({}^{*}\), 3. The robot maintained a safe and comfortable distance at all times, 4. The robot nearly collided with me\({}^{*}\), and 5. It was clear what the robot wanted to do._ Footnote 2: \({}^{*}\) denotes negatively formulated questions, for which we reverse-code the ratings to make them comparable to the positively formulated ones. The per-question average along with error bars are plotted in Fig. 5 for both the one-person (left) and two-person scenarios (right). For all five questions, the multimodal learning approach is able to consistently achieve higher social compliance scores with smaller variance, compared to move_base, the best classical planner according to the loss values in the scand study. Compare the left and right figures, the difference between multimodal learning and move_base increases with more humans, showing multimodal learning's potential to enable socially compliant navigation with higher human density in public spaces, which is consistent with the results we observe in terms of test loss values in the scand study (Fig. 3). For our curated human study, we do not observe a significant advantage of multimodal learning in comparison to point cloud only. We posit that it is because our curated social scenarios do not contain sufficiently rich semantic social cues to showcase the necessity of using RGB images. ## VI Conclusions We present a study on learning social robot navigation with multimodal (and unimodal) perception conducted on both a large-scale real-world social robot navigation dataset and in a human study with a physical robot, in comparison to a set of classical approaches. Our study results indicate that multimodal learning has clear advantage over either unimodal counterpart by a large margin in both the dataset and human studies, especially in difficult situations with increasing human density. In terms of unimodal learning, point cloud input is superior compared to RGB input, but it can be improved by utilizing the extra semantic information provided by the camera. Despite the found superiority of multimodal learning, the current study only remains in pre-recorded dataset and curated social scenarios. How multimodal learning will perform in real-world, large-scale, long-term social robot navigation tasks remains unclear and may require extra research and engineering effort. Fig. 4: Average Test Loss on All scand rosbags with Multimodal, Point Cloud, and RGB Image Input (Three Training Runs). Fig. 5: Human Study Results. Fig. 6: Human Study with Different Social Scenarios.
2309.05014
Classical-Quantum Hybrid Models
Hybrid classical-quantum models are computational schemes that investigate the time evolution of systems, where some degrees of freedom are treated classically, while others are described quantum-mechanically. First, we present the motivation for such models, outline the requirements they must satisfy, and provide explanations for their development. Then we review various popular non-relativistic schemes and their associated limitations, with a particular emphasis on reversible dynamics.
Daniel R. Terno
2023-09-10T12:25:05Z
http://arxiv.org/abs/2309.05014v3
# Classical-Quantum Hybrid Models ###### Abstract Hybrid classical-quantum models are computational schemes that investigate the time evolution of systems, where some degrees of freedom are treated classically, while others are described quantum-mechanically. First, we present the motivation for such models, outline the requirements they must satisfy, and provide explanations for their development. Then we review various popular non-relativistic schemes and their associated limitations, with a particular emphasis on reversible dynamics. hybrid mechanics, mean-field theory, Poisson-Lie algebra, Koopman-von Neumann equation, phase space, Wigner transformation, Moyal bracket, classical limit, master equation, Lindblad operators ###### Contents * I Introduction * I.1 Scope and structure * I.2 Notation and conventions * II Formal structures in mechanics * I.3 Classical mechanics * I.4 Quantum mechanics * I.5 Hybrid mechanics * III Mean-field models * IV Hybrid brackets * V Hilbert space models * V.1 Hilbert space classical mechanics * V.2 Hybrid dynamics * V.3 Statistical ensembles in configuration space * VI Phase space models * VI.1 Phase space quantum mechanics * VII.2 Hybrid dynamics * VII Stochastic dynamics ## I Introduction Phenomena that are described by hybrid classical-quantum mechanics range from the interaction of elementary particles with a superheated liquid in a bubble chamber to the inflaton field driving the expansion of the Universe (Boucher and Traschen, 1988). There are several reasons for such a hybrid description. At the purely pragmatic level it is an efficient computational scheme. For example, investigations of atomic and molecular dynamics require simulating the time evolution of increasingly large electronic and nuclear states. The exponentially high computational costs of such simulations may be contained by different strategies. One way to moderate the growth of the required computational resources is to freeze the nuclear dynamics, describe inner shell populations statistically and to treat explicitly only several chemically active electrons per atom (Fulde, 2012). Alternatively, the full set of degrees of freedom may be retained by splitting the system into a fully quantum subsystem and a set that is treated classically (Crespo-Otero and Barbati, 2018), with the selection criteria usually being the separation into fast and slow degrees of freedom in the Born-Oppenheimer approximation (Bayer, 2006). Likewise, numerous processes in condensed matter systems find convenient descriptions through classical or thermodynamic language, with quantum mechanics only providing values to a limited set of parameters. While the influence of this classically described background on quantum systems is, in principle, well-understood, dealing with back reaction--where the quantum system affects its environment--presents a considerably more complex problem. Hybrid mechanics is specifically designed to tackle this challenge. Hybrid mechanics is designed to address this challenge. At the fundamental level, emergence of the classical world as a limit of a quantum description is still not fully understood (Zurek, 2003; Schlosshauer, 2007). One aspect of this challenge involves understanding the classical limit within a single subsystem, which necessitates a consistent dynamics for the resulting quantum-classical hybrid. Regardless of one's preferred interpretation of quantum theory, the outcome of measurement is a classical record (Peres, 1995; Busch et al., 2016). Therefore, achieving a consistent description of quantum and classical sectors engaged in mutual interaction is essential for the logical coherence of presenting quantum foundations. Despite decades of research, a fully developed quantum theory of gravity remains elusive, and there is a notable ab sence of compelling experimental evidence for its quantum signatures (Kiefer, 2012). Therefore, we should seriously consider an old hypothesis suggesting that the gravitational field is classical, even though its material sources are quantized. The success or failure of a hybrid theory that describes classical gravity and quantum matter, as well as the form it eventually takes, will significantly influence our understanding of their interaction (Oppenheim, 2018). Different purposes of various hybrid models lead to different expectations. Effective computational schemes just need to be "good enough" for the duration of the investigated process, even if it is desirable to be assured of their consistency. Candidates for a fundamental theory ought to satisfy a number of consistency requirements. Successes and failures in reaching this goal form our subject matter. ### Scope and structure First we summarise the necessary formal aspects of classical (Sec. II.1) and quantum (Sec. II.2) theories. The constraints on hybrid models that follow from their compatibility with the rest of the known physics as well as from the perspective of generalised probability theories (GPTs) are discussed in Sec. II.3. By borrowing terminology from quantum theory, hybrid models can be broadly separated into reversible (unitary) and irreversible (completely positive) schemes. Since most of the literature is devoted to the reversible schemes, they are our primary focus. After reviewing the mean field models (Sec. III), we describe the algebraic aspects of constructing hybrid brackets in Sec. IV. Intuitively, hybrid mechanics should involve objects of the same nature. Hence many attempts to write hybrid schemes start from rewriting classical mechanics in terms of wave functions, or reformulating quantum mechanics on the phase space. Hilbert space constructions avoid various no-go theorems of Sec. III and are described in Sec. V. Phase space methods and their role in deriving the classical limit and hybrid schemes are discussed in Sec. VI. After reviewing the schemes of reversible dynamics and their problems, the irreversible hybrid dynamics is discussed in Sec. VII. We introduce schemes that incorporate decoherence and diffusion to ensure compliance with the minimal consistency requirements of the hybrid models. The discussion is necessarily brief. We work in the non-relativistic regime and do not discuss quantum or classical fields, including gravity, as well as fermions and the anti-commutation relations. No position on any controversy in foundations of quantum mechanics is taken. Only a short list of references is provided, but several monographs and topical reviews that together contain a nearly exhaustive survey of literature are highlighted. ### Notation and conventions For brevity we refer to a quantum system as Q and to a classical system as C. Generalized position and momentum are combined into \(z=(x,k)\), and the collection of derivatives is denoted as \(\nabla_{z}=(\partial_{x},\partial_{k})\). We use carets to indicate operators only when they act on a concrete Hilbert space. We denote a joint operation of the operator trace and integration over the phase space variables as \[\mathrm{Tr}A=\int\!\mathrm{tr}A\mu(dz). \tag{1}\] The hybrid density operator \(\varrho\) satisfies \(\mathrm{Tr}\varrho=1\). The Einstein summation convention over repeated indices is assumed. ## II Formal structures in mechanics Several structural components of classical and quantum theories (Arnold, 1989; Landsman, 1998; Landsman, 1998) are particularly important in construction of hybrids. In their description we follow the same pattern, emphasising the algebraic structures and the state - observable duality. We begin with classical mechanics and introduce the shared concepts in its setting. ### Classical mechanics States and observables of a classical system are described with the help of the phase space \(\mathcal{P}\). We restrict the discussion to non-constrained systems with finite number of degrees of freedom. Then the phase space is a symplectic manifold that \begin{table} \begin{tabular}{|c|l|} \hline symbol & definition \\ \hline \hline \(\mathfrak{A}_{\mathrm{cl}},\mathfrak{A}_{\mathrm{qm}},\mathfrak{A}\) & algebras of classical, quantum, and hybrid operators \\ \hline \(\mathcal{P}\) & phase space (Poisson manifold) \\ \hline \(J\) & symplectic matrix \\ \hline \(I\) & identity operator or matrix \\ \hline \(\boldsymbol{\xi}_{H}\) & Hamiltonian vector field \\ \hline \(z=(x,k)\) & a set of \(2n\) canonical positions and momenta \\ \hline \(\mu(dz)\) & phase space measure \\ \hline \(\{f,g\}\equiv f\mathcal{P}g\) & Poisson bracket of functions \(f(z)\) and \(g(z)\) \\ \hline \([A,B]\) & commutator of the operators \(A,B\in\mathfrak{A}_{\mathrm{qm}}\) \\ \hline \(\{A,B\}_{h}\) & Poisson bracket of the operators, \([A,B]/ih\) \\ \hline \([f,g]\) & Moyal bracket of functions \(f(z)\) and \(g(z)\) \\ \hline \(\{[\![Y,F]\!]\!\}\) & a hybrid bracket of \(Y,F\in\mathfrak{A}\) \\ \hline \(W\) & Wigner function \\ \hline \(\Upsilon(q,x,p,t)\) & quantum-classical wave function \\ \hline \(\Upsilon(q,t)\) & quantum-classical wave function, Sec. VC \\ \hline \(\varrho\) & hybrid density operator \\ \hline \end{tabular} \end{table} Table 1: Table of symbols recurrently used in the text. is a cotangent bundle of the configuration space. Local coordinates \(z=(x,k)\) are given by the generalized coordinates of the configuration space \(x=\{x^{a}\}\) and the generalized momenta, that are related to the coordinates \(x\) and velocities \(\dot{x}\) via \(k_{a}:=\partial L/\partial\dot{x}^{a}\), where \(L(x,\dot{x})\) is the system's Lagrangian. Classical observables are smooth functions on the phase space and form the algebra \(\mathfrak{A}_{\rm cl}\). The Poisson bracket \[\{f,g\} = \left(\frac{\partial f}{\partial x^{a}}\frac{\partial g}{ \partial k_{a}}-\frac{\partial f}{\partial k_{a}}\frac{\partial g}{\partial x^ {a}}\right) \tag{2}\] \[\equiv f\big{(}\overleftarrow{\partial_{x}}\overrightarrow{\partial_{ k}}-\overleftarrow{\partial_{k}}\overrightarrow{\partial_{x}}\big{)}g\equiv f \mathbb{P}g,\] where arrows indicate direction of action of the differential operators, governs the dynamics of observables via the canonical equations of motion \[\dot{x}=\{x,H\},\qquad\dot{k}=\{k,H\}, \tag{3}\] that are generated by the system's Hamiltonian. An arbitrary phase space function \(f\) evolves according \[\frac{df}{dt}=\frac{\partial f}{\partial t}+\{f,H\}. \tag{4}\] A non-degenerate closed 2-form that determines the symplectic structure on \(\mathcal{P}\) can always be written in local coordinates as \[\omega^{(2)}=dp_{a}\wedge dq^{a}\equiv dp\wedge dq. \tag{5}\] It determines the isomorphism between vectors and 1-forms on \(\mathcal{P}\) by matching the vector \(\mathbf{\eta}\) and the form \(\omega^{(1)}_{\mathbf{\eta}}\) via \(\omega^{(1)}_{\mathbf{\eta}}(\mathbf{\xi}):=\omega^{(2)}(\mathbf{\eta},\mathbf{\xi})\). Hence in the local coordinates the Hamiltonian vector field is given by \[\mathbf{\xi}_{H}=J\nabla_{z}H,\qquad J=\begin{pmatrix}0&I\\ -I&0\end{pmatrix}, \tag{6}\] where \(J\) is the symplectic matrix. The canonical equations thus become \[\dot{z}(t)=\mathbf{\xi}_{H}\big{(}z(t)\big{)}, \tag{7}\] representing the Hamiltonian phase flow. Then \[\{f,g\}=-\omega^{(2)}\big{(}\nabla_{z}f,\nabla_{z}g\big{)}=(\nabla_{z}f)^{T} \cdot J\cdot\nabla_{z}g. \tag{8}\] The Poisson bracket is defined more abstractly as a Lie bracket on the underlying manifold: it is linear, antisymmetric and satisfies the Jacobi identity \[\{f,gh\}=\{f,g\}h+g\{f,h\}, \tag{9}\] \[\{f,\{g,h\}\}=\{\{f,g\},h\}+\{g,\{f,h\}\}. \tag{10}\] In addition it satisfies the Leibnitz rule with respect to the product defining the algebra, \(f\circ g(x,k):=f(x,k)g(x,k)\), \[\{f,gh\}=\{f,g\}h+g\{f,h\}. \tag{11}\] Technically this is the Jordan-Lee algebra with associative multiplication, i.e. the Poisson algebra. The state space \(\mathcal{S}\) for an algebra \(\mathfrak{A}\) with identity \(I\) consists of all linear functionals \(\omega:\mathfrak{A}\to\mathbb{C}\) that are positive, (\(\omega(A^{*}A)\geqslant 0\)\(\forall A\in\mathfrak{A}\)) and normalized, \(\omega(I)=1\). The states are continuous functionals. \(\mathcal{S}(\mathfrak{A})\) is a convex set, i. e. for two states \(\omega_{1},\omega_{2}\) and \(0\leqslant\lambda\leqslant 1\) the mixture \(\omega=\lambda\omega_{1}+(1-\lambda)\omega_{2}\) belongs to \(\mathcal{S}\), which is a closed subset of the unit ball in the dual space of the algebra, \(\mathfrak{A}^{*}\). In classical mechanics the states are described by the Liouville probability density \(\rho_{\rm cl}\), with \[\rho_{\rm cl}\geqslant 0,\qquad\int\rho_{\rm cl}(x,k,t)\mu(dxdk)=1, \tag{12}\] \[\left\langle A\right\rangle_{\rho}=\int A(x,k,t)\rho_{\rm cl}(x,k,t)\mu(dxdk), \tag{13}\] where the measure \(\mu(dxdk)\) follows from the volume form \(\omega^{(2)^{n}}\) on the phase space. For \(\mathcal{P}=\mathbb{R}^{2n}\) the measure is \(\mu(dz)=d^{n}xd^{n}k\) that we abbreviate as \(dxdk\). Pure states of a classical system are the points of its phase space \(\mathcal{P}\), and the corresponding Liouville density is a distribution \[\rho_{\rm cl}^{z_{0}}=\delta(x-x_{0})\delta(k-k_{0}). \tag{14}\] The convex set of all probability measures on a topological space is a generalized simplex. Its vertices are all point-concentrated measures. It is an important property of the simplexes that each point of a simplex can be uniquely represented as a convex combination (finite or infinite) of the extremal points. This uniqueness of decomposition is a crucial distinction between classical and quantum mixtures (Mielnik, 1974). Hamiltonian flows preserve the symplectic structure and the variety of the derived invariants, including the phase space volume. Invariance of the overlaps of the Liuoville densities -- the classical unitarity -- is expressed as the Liouville equation, \[\partial_{t}\rho_{\rm cl}=-\{\rho_{\rm cl},H\}. \tag{15}\] ### Quantum mechanics Quantum observables are elements of the self-adjoint part \(\mathfrak{A}_{\rm qm}\) of the relevant operator algebra. A commutative product \(\circ\) is introduced via \[A\circ B:=\tfrac{1}{2}(AB+BA). \tag{16}\] Quantum Poisson bracket is defined as \[\{A,B\}_{\hbar}=\frac{1}{i\hbar}(AB-BA). \tag{17}\] It is a Lie bracket and a derivation in \(\mathfrak{A}_{\rm qm}\). In analysis of the classical limit the Planck constant is treated as a variable parameter and the limit \(\hbar\to 0\) is studied. The associator identity \[(A\circ B)\circ C-A\circ(B\circ C)=\tfrac{1}{4}\hbar^{2}\{\{A,C\},B\}, \tag{18}\] is what differentiates the commutative product \(\circ\) from its classical counterpart. When \(\circ\) is associative the algebra becomes the Poisson algebra, and it formally corresponds to \(\hbar\equiv 0\). In the Heisenberg picture operators evolve \[\frac{dA^{\mathsf{H}}}{dt}=\{A^{\mathsf{H}},H\}_{\hbar}+\frac{\partial A^{ \mathsf{H}}}{\partial t}. \tag{19}\] In the Hilbert space representation the elements of \(\mathcal{S}\) are trace class trace one positive operators (density operators), and the expectation value of \(A\in\mathfrak{A}_{\mathrm{qm}}\) is given by \[\left\langle A\right\rangle_{\rho}=\mathrm{tr}(\rho A). \tag{20}\] Pure states are elements of the projective space \(\psi\in\mathbb{P}\mathcal{H}\). In the Schrodinger picture pure states evolve according to \[i\hbar\frac{d\psi}{dt}=H\psi, \tag{21}\] and the density operator \(\rho\) evolves as its classical counterpart, \[\partial_{t}\rho=-\{\rho,H\}_{\hbar}. \tag{22}\] States and operators can be represented not only on the Hilbert space of square-integrable functions on the configuration space of the system, such as \(\mathcal{H}=\mathbb{L}^{2}(\mathbb{R}^{n})\). In many applications, such as quantum optics or quantum information with continuous variables (Weedbrook et al., 2012), representation of the state of a quantum system on the phase space of its classical counterpart is particularly convenient. The Wigner quasi-probability distribution function (Hillary et al., 1984; Zachos et al., 2005), \[W_{\rho}(q,p)=\frac{1}{(2\pi\hbar)^{n}}\int dye^{ipy/\hbar}\langle q+\tfrac{1} {2}y|\hat{\rho}|x-\tfrac{1}{2}y\rangle, \tag{23}\] represents the state \(\rho\). The quasi-provability function is normalized according to \[\int\!W_{\rho}(z)\mu(dz)=1,\qquad\mathrm{tr}\big{(}\hat{A}\hat{\rho}\big{)}= \int\!W_{\rho}(z)A(z)\mu(dz), \tag{24}\] where \(z=(q,p)\), but is not necessarily positive-definite. A Gaussian state \(\rho\) has a Gaussian characteristic function. Its Fourier transform gives us a Gaussian Wigner function \[W_{\rho}(z)=\frac{\exp\left[-\tfrac{1}{2}(z-\langle z\rangle)^{T}\sigma^{-1}(z -\langle z\rangle)\right]}{(2\pi)^{n}\sqrt{\mathrm{det}\sigma}}, \tag{25}\] where \(\sigma\) is a covariance matrix, namely, the second moment of the state \(\hat{\rho}\). By definition, a Gaussian probability distribution can be completely described by its first and second moments; all higher moments can be derived from the first two using the following method \[\left\langle(z-\langle z\rangle)^{k}\right\rangle=0 \text{for odd }k, \tag{26}\] \[\left\langle(z-\langle z\rangle)^{k}\right\rangle=\sum_{i}\left(c _{ij}...c_{xz}\right) \text{for even }k \tag{27}\] also known as Wick's theorem. The sum is taken over all the different permutations of \(k\) indices. Therefore we will have \((k-1)!/(2^{k/2-1}(k/2-1)!)\) terms where each consists of the product of \(k/2\) covariances \(c_{ij}\equiv\left\langle(z_{i}-\langle z\rangle_{i})(z_{j}-\langle z\rangle_{j })\right\rangle\). Gaussian operations preserve the Gaussian character of the states they are applied to. Dynamics of an open system is often conveniently expressed via Gorini, Kossakowki, Sudarshan, Linblad (GKSL) equation (Ingarden et al., 1997; Schlosshauer, 2007), one of whose forms is \[i\hbar\frac{d\rho}{dt}=-[\rho,H]+\tfrac{1}{2}d^{\alpha\beta}\left([L_{\alpha}, \rho L_{\beta}^{\dagger}]+[L_{\alpha}\rho,L_{\beta}^{\dagger}]\right). \tag{28}\] The Lindblad operators \(L_{\alpha}\) are obtained from the interaction term, and the coefficients \(d^{\alpha\beta}\) encapsulate all information about the physical parameters of the decoherence and dissipation processes. ### Hybrid mechanics A consistent hybrid dynamics has to satisfy a number of restrictions that follow from its inclusion in a broader physical picture. Moreover, it should be self-consistent. The following is the list of requirements, of various degrees of impermanence and acceptance, that may or should be imposed on proposed hybrid schemes. It largely follows the list of Boucher and Traschen (1988). The most basic purpose of any hybrid scheme is to obtain predictions about the Q and C subsystems. It has to to identify the classical and the quantum sectors, as well as to produce 1. a phase-space probability density \(\rho_{\mathrm{cl}}\) that satisfies Eqs. (12) and (13); 2. a positive-semidefinite density operator \(\rho_{\mathrm{qm}}\) that satisfies Eq. (20). Each of the sectors behaves in the usual way, i.e. 1. if C and Q are uncoupled, then \(\rho_{\mathrm{cl}}\) evolves according to Eq. (15) and \(\rho_{\mathrm{qm}}\) according to Eq. (22); 2. classical canonical transformations and quantum unitary transformations are realised on C and Q sectors, respectively (equivariance). While the evolution of the QC system may be unitary or not, and have only one of the Schrodinger or the Heisenberg pictures accessible, it should to 1. conform to the laws of physics and at the very least (a) satisfy the standard conservation laws, particularly energy; (b) maintain impossibility of the superluminal communications; (c) conform to the second law of thermodynamics. These requirements have far-reaching consequences. In quantum mechanics, non-linear evolution is compatible with the principle of superposition and simply implies that a time-evolved initial pure state \(\psi_{1}(0)+\psi_{2}(0)\) is different form \(\psi_{1}(t)+\psi_{2}(t)\)) (Peres, 1995). However, it enables to distinguish non-orthogonal states. In turn, this leads to super-luminal communication (Gisin, 1990) and to violation of the second law of thermodynamics (Peres, 1989). As one of the motivations for hybrid dynamics is to introduce back reaction of Q on C, it is characterized via * the quantum purity \(\mathrm{tr}(\rho_{\mathrm{qm}}^{2})\) is not a constant of motion. This decoherence property (Gay-Balmaz and Tronci, 2022) for initially pure \(\rho_{\mathrm{qm}}\) under overall unitary evolution indicates building up of entanglement between C and Q systems. As the minimal goal is to produce reasonable probability distributions, having the Heisenberg picure, i. e., explicit evolution equations for all the observable operators can be described as a desirable feature. However, in this case it is reasonable to demand that at least * if purely classical and purely quantum (Heisenberg) equations of motion have the same form, the hybrid equations have the same form as well (Peres and Terno, 2001). More generally, difference between the quantum and hybrid equations of motion and/or classical unobservable quantities should disappear in the formal limit \(\hbar\to 0\). Failure to comply with this form of the correspondence principle leads to the breaking down of the classical limit (taken for the system Q) when expressed via the statistical moments of the classical and quantum probability distributions (Terno (2006)). For illustration of various hybrid schemes we consider Hamiltonian systems (that have a classical Hamiltonian \(H(x,k,q,p)\), or a quantum Hamiltonian \(H(X,K,Q,P)\). Separating the hybrid Hamiltonian into C, Q and the interacting parts, we write \(H=H_{\mathrm{cl}}(x,k)+H_{\mathrm{qm}}(q,p)+H_{\mathrm{int}}\). Hybrid schemes give the concrete meaning to this formal expression. One popular example is provided by two bilinear coupled oscillators, where masses and frequencies are absorbed into the definitions of canonical variables, thus making \(\hbar\) dimensionless. Its classical version is \[H=\tfrac{1}{2}x^{2}+\tfrac{1}{2}k^{2}+\tfrac{1}{2}q^{2}+\tfrac{1}{2}p^{2}+ \lambda xq, \tag{29}\] where in the hybrid picture \(x,k\) remain classical operators, \(\{x,k\}=1\) while the other canonical pair is promoted to operators \(\{q,p\}_{\hbar}=1\). Another popular system consists of a classical particle and a quantum spin, with the interaction term \[H_{\mathrm{int}}=\lambda\sigma_{z}k, \tag{30}\] where \(\sigma_{z}\) is the Pauli \(z\)-matrix. Several general results indicate that construction of the self-consistent hybrid dynamics modifies at least some of our expectations. Adapting the argument of Diosi et al. (2000), we conclude that no matter what form hybrid dynamics takes, it should be impossible from the measurement of \(x\) and \(k\) to obtain precise information about the quantum state. Hence, even if the Liouville distribution was introduced into phase space as a concession to statistical mechanics, some sort of epistemic restrictions on classical information is inevitable (Bartlett et al., 2012). Recent work of Galley et al. (2023) provides some indication on the possibilities for hybrid dynamics under the most relaxed requirements. Generalized probabilistic theories (GPTs), provide a unified meta-theoretical framework in which rules of classical and quantum theory are special cases (Janotta and Hinrichsen, 2014). Their primary focus is on the probabilistic relationships between preparation and effects and identification of the state space structure from the experimental results. For finite dimensional system the exact ingredients of what are the necessary assumptions to recover quantum or classical theory are quite well understood (Brandford et al. 2018). In particular, the discrete structure of classical pure states (Sec. II.A) is reflected in identification of classical state spaces as simplexes. Galley et al. (2023) considered an interacting classical (C) and non-classical systems S. If one requires that interaction leads to the precisely defined information flow from S to C (or its backreaction on C), and reversibility of that interaction, then a contradiction is established. We now survey several of the most popular classes of models. A fully consistent mixed dynamics (Diosi et al. 2000) involves treating the entire Hamiltonian \(H(X,K,Q,P)\) as quantum, following its evolution and then taking a partial classical limit. Such approach is impractical, but truncation of the exact equations at a particular order leads to many of the schemes that are surveyed below. ## III Mean-field models Known as mean-field or Ehrenfest model (Boucher and Traschen, 1988; Tully, 1998; Kapral, 2006; Crespo-Otero and Barbati, 2018) this scheme in its basic form evolves the classical variables \(z=(x,k)\) and the wave function \(\psi(t,q;x,k)\equiv\langle q|\psi(t;z)\rangle\) as \[\dot{x}=\{x,\langle\hat{H}\rangle\}=\partial_{k}\langle\hat{H}\rangle,\qquad \dot{k}=\{k,\langle\hat{H}\rangle\}=-\partial_{x}\langle\hat{H}\rangle, \tag{31}\] \[i\hbar\frac{d\psi}{dt}=i\hbar\left(\partial_{t}\psi+\langle\mathbf{\xi}_{\hat{H}} \rangle\cdot\nabla_{z}\psi\right)=\hat{H}\psi. \tag{32}\] Here the classical evolution is driven by the expectation value of the hybrid Hamiltonian, \(\langle\hat{H}\rangle=\langle\psi_{\mathrm{qm}}|\hat{H}|\psi_{\mathrm{qm}}\rangle\), and the phase space Hamiltonian vector \(\langle\mathbf{\xi}_{\hat{H}}\rangle=(\partial_{k}\langle\hat{H}\rangle,- \partial_{x}\langle\hat{H}\rangle)\) enters the Schrodinger equation through the dependence of the wave function on classical variables. The scheme satisfies requirements I-IV, as well as the conservation of energy (Tully 1998; Manfredi et al., 2023). Its observables are Hermitian operator valued functions on phase space, and the scheme often provides accurate quantum transition probabilities, and can be augment by additional terms or computational methods (Crespo-Otero and Barbati, 2018). Spin degrees of freedom can be naturally incorporated (Hussain et al., 2022). The standard semiclassical equations \[G_{\mu\nu}(\mathsf{g})=\frac{8\pi G}{c^{4}}\langle\psi|\hat{T}_{\mu \nu}(\phi,\pi;\mathsf{g})|\psi\rangle_{\rm ren}, \tag{33}\] \[i\hbar\dot{\psi}=\hat{H}[\phi,\pi;\mathsf{g}]\psi, \tag{34}\] where the Einstein tensor is sourced by the renormalized energy-momentum tensor of the matter (fields \(\phi\) and their canonical momenta \(\pi\)), and the quantum state of the matter fields \(\psi\) is driven by the Hamiltonian that depends on the classical metric \(\mathsf{g}_{\mu\nu}\), is one the more famous examples. Its main drawback from the computational perspective is the lack of correlations between quantum and classical degrees of freedom. Introducing the density operator \(\hat{\rho}(t;z)\), \({\rm Tr}\hat{\rho}=1\) (with a view of replacing the sharp classical data with \(\rho(t,x,k)={\rm tr}\hat{\rho}\)) results in the Liouville-like equation \[\partial_{t}\hat{\rho}=-\{\hat{\rho},\hat{H}\}_{\hbar}-\{\hat{\rho},\hat{H}\}. \tag{35}\] Its modifications, at least partially, allow to introduce quantum-classical correlations (Alonso et al., 2012). From the fundamental perspective the main problem of this scheme is its nonlinearity. Evolution of a observable \(\hat{A}\) in the Ehrenfest model follows (Salcedo, 2012) \[\frac{d\langle\hat{A}\rangle_{\hat{\rho}}}{dt}=\int\Bigl{(}{\rm tr}\bigl{(} \hat{\rho}\{\hat{A},\hat{H}\}_{\hbar}\bigr{)}+\{{\rm tr}(\hat{\rho}\hat{A}),{ \rm tr}(\hat{\rho}\hat{H})\}\Bigr{)}\,\mu(dz), \tag{36}\] which clearly prevents evolution of the expectation value of \(A\) on some initial mixture into the mixture of evolved expectation values of the individual components of the mixture. As a result, the scheme allows to discriminate between different mixtures that realise the state \(\hat{\rho}\). This not only violates the Requirements Vb and Vc, but also demonstrates the internal inconsistency of the scheme. Indeed, if this scheme is used to describe the measurement process in quantum mechanics, then the classical measurement apparatus would be able to perform operations that are forbidden in quantum theory, whose basic description is predicated on using the classical apparatus. ## IV Hybrid Brackets Given that both classical and quantum dynamics are expressed with their respective Poisson brackets, one approach to the construction of hybrid schemes is the construction of a bracket \(\{\![\cdot,\cdot]\!\}\) that defines the hybrid operator algebra on the tensor product space \(\mathfrak{A}=\mathfrak{A}_{\rm cl}\otimes\mathfrak{A}_{\rm qm}\). Its elements are linear combinations of products of functions on the phase space with quantum operators. Hence a general hybrid observable\(A\in\mathfrak{A}\) is a function defined on the classical phase space taking values on the set of quantum operators \(\mathfrak{A}_{\rm qm}\). The two natural generalisation of the Poisson brackets are due to Anderson (1995), \[\{\![A,B]\!\}:=\{A,B\}_{\hbar}+\{A,B\}, \tag{37}\] or Alexandrov (1981) and Gerasimenko, \[\{\![A,B]\!\}:=\{A,B\}_{\hbar}+\tfrac{1}{2}\bigl{(}\{A,B\}-\{B,A\}\bigr{)}. \tag{38}\] In both cases the quantum and classical Poisson brackets are evaluated on the relevant objects according to the standard rules. For \(f,g\in\mathfrak{A}_{\rm cl}\) and \(A,B\in\mathfrak{A}_{\rm qm}\) the definition (37) result in \[\{\![fA,gB]\!\}=fg\{A,B\}_{\hbar}+AB\{f,g\}, \tag{39}\] and the expectation value of an operator \(A(z)\) is given by \[\langle A\rangle_{\rho}=\int{\rm tr}(\rho A)\mu(dz). \tag{40}\] Applying the former version to the system of Eq. (29), we find the Hamiltonian equations of motion in agrement with both fully classical or fully quantum cases, perfectly satisfying the expectations dicted by the correspondence principle, \[\dot{q}=p, \dot{p}=-q-\lambda x, \tag{41}\] \[\dot{x}=k, \dot{k}=-x-\lambda q. \tag{42}\] However, for more general Hamiltonian the lack of antisymmetry (which indicates an obvious failure as a Lie bracket) leads to the possibility that time-independent Hamiltonians do not commute, \(\{\![H,H]\!\}\neq 0\), leading to energy non-conservation and non-positivity of \(\rho_{\rm cl}\), as was already pointed out by (Anderson, 1995). The antisymmetric bracket of Eq. (38) was designed to produce the standard-looking equation for the operator-valued density on the phase space, \(\rho(x,k)\) via \[\partial_{t}\rho=-\{\![\rho,H]\!\}, \tag{43}\] which allows for conservation of the total energy \(\langle H\rangle={\rm Tr}(H)=\int\!{\rm tr}(\rho H)\mu(dxdk)\). Its Wigner function version is used for modelling quantum rate processes, such as proton and electron transport (Kapral, 2006; Crespo-Otero and Barbati, 2018). It is linear and antisymmetric, however it fails to be a Lie bracket as the Jacobi identity is not fulfilled. Thus it lacks a Hamiltonian structure and leads to time-irreversible dynamics (Sergi et al. 2018). In fact, none of the combinations of the classical and quantum brackets that satisfy \[\{\![f,g]\!\}=\{f,g\},\qquad\{\![A,B]\!\}=\{A,B\}_{\hbar}, \tag{44}\] for \(f,g\in\mathfrak{A}_{\rm cl}\) and \(A,B\in\mathfrak{A}_{\rm qm}\) can fulfill all the common properties of the classical and the quantum brackets. Indeed, under the standard rules (particularly, using the bracket's antisymmetry and the Leibnitz rule), one finds \[\{\![fA,gB]\!\}=\{f,g\}AB+fg[A,B]=\{f,g\}BA+fg[A,B], \tag{45}\] that fails for non-commutative \(A\) and \(B\) (and for coupling of any two quantization schemes with \(\hbar_{1}\neq\hbar_{2}\), (Caro and Salcedo, 1999; Salcedo, 2012)). Ideally, a hybrid scheme should satisfy all the requirements I-VII, and to this end the hybrid bracket properties should mimic those of its quantum and classical counterparts (Caro and Salcedo, 1999). Specifically, the hybrid bracket is a Lie bracket, i.e. satisfies Eqs. (9) and (10). The antisymmetry of the bracket ensures that time-independent Hamiltonians are conserved. The linearity guarantees that for two observables \(F,Z\in\mathfrak{A}\) without an explicit time dependence their linear combination is also free from the explicit time dependence. Compliance with the Jacobi identity guarantees this independence for the result of their bracket \(\{[\![F,Z]\!]\}\). Satisfying the Leibnitz rule Eq. (11) ensures that the product of two observables is consistent with time evolution the commutation relations among canonical variables, and the expression such as \([q,p]=i\hbar\) or \([x,k]=0\) are preserved. Eq. (44) is necessary to satisfy III (independence of classical and quantum evolutions in absence of interaction), that should be supplemented by \[\{[\![f,A]\!]=0,\qquad f\in\mathfrak{A}_{\rm cl},A\in\mathfrak{A}_{\rm qm}. \tag{46}\] Hermiticity or reality of the relevant observers is preserved if \[\{[\![F,Z]\!]^{\dagger}=\{\![\![F^{\dagger},Z^{\dagger}]\!]\!\} \tag{47}\] As we have seen that enforcing a general form of the Leibnitz rule is impossible, the minimal requirement is that a constant observables, such as \(I\) do not evolve. Demanding a weaker from of the Leibnitz rule, \[\{\![\![f,gA]\!]=\{f,g\}A,\qquad\{\![\![A,fB]\!]=f\{A,B\}_{h} \tag{48}\] for \(f,g\in\mathfrak{A}_{\rm cl}\) and \(A,B,\mathfrak{A}_{\rm qm}\), enforces this, as well as an independent evolution of each factor in a product observable if the two sectors are dynamically uncoupled. It was shown by Gil and Salcedo (2017) that if the Hilbert space of the quantum subsystem is finite-dimensional, there is a unique hybrid bracket that satisfies the above requirements. However, it does not preserve positivity of the resulting quantum density matrix (Requirement II). A different perspective on the construction of a hybrid bracket was provided by Amin and Walton (2021). The bracket \[\{\![F,Z]\!]=\frac{1}{i\hbar}(F*Z-Z*F), \tag{49}\] is defined with the help of an unspecified non-commutative associative product \(*\) that acts on the algebra of operator-valued phase space functions \(A(x,k)\in\mathfrak{A}\). Such bracket satisfies both the Jacobi identity and the Leibnitz rule. Taking the partial classical limit using the phase space representation of quantum mechanics (Sec. VI), it is possible to show that \[*=1+\frac{i\hbar}{2}(\mathsf{P}+\Sigma), \tag{50}\] where \(\Sigma\) is some symmetric binary operation on classical variables. If such \(\Sigma\) can be found, then \[\{\![A,B]\!]:=\{A,B\}_{h}+\frac{1}{2}\big{(}\{A,B\}-\{B,A\}+A\Sigma B-B\Sigma A \big{)}. \tag{51}\] It reduces to the bracket of Eq. (38) for \(\Sigma=0\). Some specific constructions are discussed in Sec. VI. They fail as generators of the universal hybrid dynamics, but indicate that the hybrid dynamics may be consistent for restricted types of the interaction terms, such as \[H_{\rm int} =A(q,p)(\alpha x+\beta k) \tag{52}\] \[H_{\rm int} =A(q,p)f(x),\qquad H_{\rm int}=A(q,p)g(k), \tag{53}\] where \(A\), \(f\), \(g\) are arbitrary functions and \(\alpha\), \(\beta\) are real constants. Conclusions of no-go theorems may be evaded if one of their premises is not realised in the proposed construction. This is the case of a scheme of Elze (2012), that is constructed using the generalized Poisson bracket, where the role of canonical pairs is taken by \((\psi,i\hbar\psi^{*})\)(Zhang and Wu, 2006). The resulting hybrid dynamics is equivalent to Eqs. (31) and (32) (Salcedo, 2012), and thus shares the benefits and the drawbacks of the mean field models. Another way to introduce the canonical variables and the Poisson bracket is given by Gay-Balmaz and Tronci (2022). ## V Hilbert space models A different way of avoiding the no-go results of Sec. IV is based on using the Hilbert space realisation of the classical mechanics. This representation of classical mechanics was developed by Koopman and von Neumann (see Reed and Simon (1972), Peres (1995), Dammeier and Werner (2023) for the details), and was applied to the problem of hybrid dynamics by Sherry and Sudarshan (1978). Classical states, i. e., probability densities on a phase space \(\mathcal{P}\), are described as classical wave functions \(\phi(x,k)\), elements of the Hilbert space of square-integrable complex-valued functions on the phase space, \(\mathcal{H}_{\rm cl}={\rm L}^{2}\big{(}\mathcal{P},\mu(dxdk)\big{)}\). The hybrid dynamics is defined on the tensor product \(\mathcal{H}_{\rm qm}\otimes\mathcal{H}_{\rm cl}\). Bondar et al. (2019) and Gay-Balmaz and Tronci (2022) provide a historical introduction and a comprehensive list of references. ### Hilbert space classical mechanics We illustrate the construction in the simplest possible setting of a single particle in one dimension. Eq. (15) can be rewritten as \[i\partial_{t}\rho_{\rm cl}=\{iH,\rho_{\rm cl}\}=:\hat{\mathcal{L}}\rho_{\rm cl}, \tag{54}\] where \(\hat{\mathcal{L}}\) is the Liouville operator, or Liouvillian. The Liouville density is never negative, so we define a classical wave function via \[\rho_{\rm cl}=:|\phi|^{2}, \tag{55}\] (in general the classical wave function can have a complex phase), which satisfies the Schrodinger-Koopman equation with the Liouvillian in place of the Hamiltonian op erator. Under reasonable assumptions about the Hamiltonian, the Liouvillian is an essentially self-adjoint operator on \(\mathrm{L}^{2}\big{(}\mathcal{P},\mu(dxdk)\big{)}\) and generates a unitary evolution, \[\langle\phi|\phi^{\prime}\rangle=\int\!\mu(dxdk)\phi^{*}(x,k,t)\phi(x,k,t). \tag{56}\] It is possible to further mimic quantum theory by introducing commuting multiplicative operators \(\hat{z}=(\hat{x},\hat{k})\), \[\hat{x}\phi=x\phi,\qquad\hat{k}\phi=k\phi. \tag{57}\] Then the shift operator is \(\hat{p}_{x}:=-i\partial_{x}\) and the boost operator is \(\hat{p}_{k}:=-i\partial_{k}\). They can be combined into a vector \(\hat{\mathbf{\Pi}}:=-i\nabla_{z}\). The shift operators are not observable, but determine the dynamics via the Liouvillian, \[\hat{\mathcal{L}}=\partial_{k}H\hat{p}_{x}-\partial_{x}H\hat{p}_{k}=\big{(}J \nabla H\big{)}\cdot\hat{\Pi}=\mathbf{\xi}_{H}\cdot\hat{\mathbf{\Pi}} \tag{58}\] It is possible to introduce the Heisenberg picture, and arrive to the equations via a variational formulation. As pure states are actually phase space distributions, classical wave functions do not represent pure states. Nevertheless, it is possible to describe phase space measurements via positive-operator valued measures, and introduce, at least formally, classical entanglement between the subsystems (Terno, 2006). Despite these formal similarities it is important to note that the Liouvillian is not only not the energy \[\langle H\rangle=\int\!\mu(dz)H\rho_{\mathrm{cl}}, \tag{59}\] but may be unbounded from below. In fact, it happens already for harmonic oscillator (Peres and Terno, 2001). The minimal coupling method of a \(U(1)\) gauge theory allows to introduce a covariant formulation to the variational formulation of the Koopmanian dynamics. Under the transformation \[i\partial_{t}\to i\partial_{t}-\Phi(z),\qquad i\nabla\to i\nabla+\mathbf{ \mathcal{A}}(z), \tag{60}\] the covariant Liouvillian is \[h\hat{\mathcal{L}}_{H}:=\hbar\hat{\mathcal{L}}+\Phi-\mathbf{\xi}_{H}\cdot\mathbf{ \mathcal{A}}. \tag{61}\] One choice of gauge potential is \[\Phi=H/\hbar,\qquad\mathbf{\mathcal{A}}\cdot dz=p\cdot dq/\hbar, \tag{62}\] where the 1-form \(\mathbf{\mathcal{A}}\cdot dz\) is set to be the symplectic potential, as the symplectic form \(\omega^{(2)}=\hbar d\mathbf{\mathcal{A}}\). The the modified the Schrodinger-Koopman becomes \[i\frac{\partial\phi}{\partial t}=i\{H,\phi\}-\hbar^{-1}L\phi, \tag{63}\] where the Lagrangian \(L=p\cdot\partial_{p}H-H\), while writing the classical wave function in the polar form \(\phi=\sqrt{\rho_{\mathrm{cl}}}e^{iS/\hbar}\) leads to a suggestive pair of equations \[\frac{\partial\rho_{\mathrm{cl}}}{\partial t}+\{\rho_{\mathrm{cl}},H\}=0, \qquad\frac{\partial S}{\partial t}+\{S,H\}=L. \tag{64}\] A different (called Liouville or the harmonic oscillator) gauge has the vector potential part \[\mathbf{\mathcal{A}}\cdot dz=\tfrac{1}{2}(k\cdot dx-x\cdot dk). \tag{65}\] For homogenous quadratic Hamiltonians in this gauge \(\Phi-\mathbf{\xi}_{H}\cdot\mathbf{\mathcal{A}}=0\) and \(\hat{\mathcal{L}}=\hat{\mathcal{L}}_{H}\) Auxiliary quantities (de Gosson, 2005), \[\hat{\mathbf{z}}_{\pm}:=J(\pm\hbar\hat{\mathbf{\Pi}}-\mathbf{\mathcal{A}}),\qquad\mathbf{j}:= \phi^{*}\hat{\mathbf{z}}_{+}\phi \tag{66}\] allow to write the Liouvillian as \[\hbar\hat{\mathcal{L}}_{H}=H-\mathbf{\xi}_{H}\cdot\hat{\mathbf{z}}_{+}, \tag{67}\] and the Hamiltonian functional (Bondar et al. 2019; Gay-Balmaz and Tronci, 2022; 2023), as \[h=\hbar\int\!\phi^{*}\hat{\mathcal{L}}_{H}\phi\mu(dz)=\int\!H(|\phi|^{2}+ \mathrm{div}\mathbf{j})\mu(dz). \tag{68}\] Identification \(h\equiv\langle H\rangle\) leads to \[\rho_{\mathrm{cl}}=|\phi|^{2}+\mathrm{div}\mathbf{j}=|\phi|^{2}- \mathrm{div}\big{(}J\mathbf{\mathcal{A}}|\phi|^{2}\big{)}+\hbar\{\phi^{*},\phi\}. \tag{69}\] The normalisation is not affect by \(j\), but \[\langle z\rangle=\int\!\rho_{\mathrm{cl}}\mu(dz)=\int\!\phi^{*}\hat{\mathbf{z}}_{ -}\phi\mu(dz). \tag{70}\] While the Liouville density is not positive definite, its sign is preserved in time since the Liouville equation is a characteristic equation. An algebraic approach to the Koopmanian mechanics is presented by Morgan (2023). ### Hybrid dynamics The hybrid Hilbert space is constructed as a direct product of the quantum Hilbert space \(\mathcal{H}_{\mathrm{qm}}\), say \(\mathrm{L}^{2}(\mathbb{R}^{3},d^{3}q)\) of a single spinless particle, and the classical Koopman - von Neumann Hilbert space \(\mathcal{H}_{\mathrm{cl}}\). Mathematical details, including the question of measures on these spaces are discussed in de Gosson (2005) and Dammeier and Werner (2023). If the fully quantum Hamiltonian is \(\hat{H}(\hat{x},\hat{k},\hat{q},\hat{p},\hat{s})=\hat{H}_{\mathrm{cl}}+\hat{H}_ {\mathrm{qm}}+\hat{H}_{\mathrm{int}}\), where \(\hat{s}\) stand for the discrete degrees of freedom, a natural extension of the Koopmanian formalism is the hybrid Liouvillian (Bondar et al., 2019) is \[\hat{\mathcal{L}}_{\hat{H}}=\hat{H}-\nabla\hat{H}\cdot\hat{\mathbf{z}}_{+} \tag{71}\] (to simplify the subsequent expressions we absorbed \(\hbar\) into the definition of the hybrid Liouvillian). The C and Q operators commute and the Jacobi identity is satisfied by construction. The Schrodinger equation for the mixed wave function \(\Upsilon(z,x)\) is \[i\hbar\partial_{t}\Upsilon=\hat{\mathcal{L}}_{\hat{H}}\Upsilon. \tag{72}\] There is a variational principle that preserves the energy invariant \[h=\langle\Upsilon|\hat{\mathcal{L}}_{\widehat{H}}|\Upsilon\rangle=\mathrm{tr}\int \Upsilon^{\dagger}\hat{\mathcal{L}}_{\widehat{H}}\Upsilon\mu(dz). \tag{73}\] Equating it with the total density \(h=\mathrm{tr}\int\hat{H}\hat{\rho}\mu(dz)\) identifies the hybrid density operator as \[\hat{\rho}(q,q^{\prime},z)=\Upsilon(q,z)\Upsilon^{*}(q^{\prime},z)+\mathrm{ div}\big{(}\Upsilon(q,z)\hat{\mathbf{z}}_{+}\Upsilon^{*}(q^{\prime},z)\big{)}. \tag{74}\] (compare with Sec. VII). However, this density operator does not possess a closed Hamiltonian equation. Its evolution has to be expressed in rather convoluted form in terms of \(\Upsilon\). It takes a simpler form if the so-called exact decomposition of the wave function (Abedi et al., 2010), \[\Upsilon(z,q,t)=\psi_{z}(q,t)\phi(z,t),\qquad\int|\psi_{z}(q,t)|^{2}dq=1 \tag{75}\] can be obtained. This factorization, upon making classical phases unobservable by a gauge principle, leads to a nonlinear hybrid theory, as shown by Gay-Balmaz and Tronci (2022). Alternatively (Peres and Terno, 2001), we can consider any hybrid interaction terms that allow to fulfill as many of the desiderata of Sec. II.3 as possible. Highlighting explicitly the C, Q and CQ parts, we write this Koopmanian operator as \[\hat{\mathcal{K}}=\hat{H}_{\mathrm{q}\mathrm{m}}+h\hat{\mathcal{L}}_{H_{ \mathrm{cl}}}+\hat{\mathcal{K}}_{\mathrm{int}}. \tag{76}\] This hybrid approach was successfully applied to describe interactions in simple measurement models and it was anticipated that the Requirements I and II impose constraints on the admissible interaction terms (Sherry and Sudarshan, 1978). For example, for the quadratic Hamiltonian of Eq. (29) using the harmonic oscillator gauge we have \[\hat{\mathcal{L}}_{\widehat{H}}=\tfrac{1}{2}\hat{q}^{2}+\tfrac{1}{2}\hat{p}^{ 2}+h(\hat{k}\hat{p}_{x}-\hat{x}\hat{p}_{k})-\lambda\hat{q}\hat{p}_{x}-\tfrac{1 }{2}\lambda\hat{q}\hat{x}. \tag{77}\] However, neither this form of the interaction term nor any \(\hat{\mathcal{K}}_{\mathrm{int}}\) can reproduce the identical classical and quantum equations of motion (Terno, 2006). Indeed, having both \([\hat{p},\hat{\mathcal{K}}_{\mathrm{int}}]=-\lambda\hat{x}\), \([\hat{k},\hat{\mathcal{K}}_{\mathrm{int}}]=-\lambda\hat{q}\), as well as having all C operators to commute with all Q operators, is incompatible with the Jacobi identity for \(\hat{p}\), \(\hat{q}\), and \(\hat{\mathcal{K}}_{\mathrm{int}}\). The gauge \(\Phi=0,\mathbf{\mathcal{A}}=0\) allows to introduce the minimal modifications to the equations for the observables. In this case \(\hat{\mathcal{K}}_{\mathrm{int}}=-\lambda\hat{q}\hat{p}_{x}\), and three of the equations (41),(42) remain unchanged, while \[\dot{\hat{p}}=-\hat{q}-\lambda\hat{p}_{y} \tag{78}\] now has to be supplemented with the equations for the unobservable \(\hat{p}_{x}\) and \(\hat{p}_{y}\). Their dynamics remains decoupled from other variables but now drives the evolution of observables, leading to violation of the energy conservation (Peres and Terno, 2001; Ahmadzadegan et al., 2016). The construction of Eq. (74), on the other hand, conservs energy by construction. However, for non-trivial gauges the combined density operator \(\varrho(z)\) is not positive-definite and its sign is not preserved in time. While the quantum reduced density operator is positive semidefinite, \[\varrho=\int\varrho(z)\mu(dz)=\int\Upsilon(z)\Upsilon^{\dagger}(z)\mu(dz), \tag{79}\] the classical marginal may become negative, \[\rho_{\mathrm{cl}}(z)=\mathrm{tr}\varrho(z)=\mathrm{tr}\left(\Upsilon(z) \Upsilon^{\dagger}(z)+\mathrm{div}\big{(}\Upsilon(z)\hat{\mathbf{z}}_{-}\Upsilon^ {\dagger}(z)\big{)}\right). \tag{80}\] Gay-Balmaz and Tronci (2020) identified an infinite family of hybrid Hamiltonians preserving the initial sign of \(\rho_{\mathrm{cl}}\), thus fullfilling the minimal set of requirement for the hybrids. Once important system a coupled classical oscillator and a quantum two-level system with possibly time-dependent parameters (Gay-Balmaz and Tronci, 2023; Manfredi et al., 2023). A good agreement with the fully quantum treatment is found for a series of study cases involving harmonic oscillators with linear and quadratic time-varying coupling. In all these cases the classical evolution (starting with the appropriately selected configurations that we discuss in Sec. VI) coincides exactly with the oscillator dynamics resulting from the fully quantum description. A mathematically rigorous procedure that was introduced of Dammeir and Werner (2023) allows consistent hybrid dynamics for quasi-free operations (i. e. the resulting states are Gaussian, characterized by the matrix of expectations and variances) that include evolution under quadratic Hamiltonians, and also general types of noise. Describing operations as quantum channels it allows to treat many important quantum-informational tasks with continuous variables, such as preparation, measurement, repeated observation, cloning, teleportation, and dense coding. The hybrid Hilbert space, again the direct product \(\mathcal{H}_{\mathrm{q}\mathrm{m}}\otimes\mathcal{H}_{\mathrm{cl}}\), is built as a representation space for the algebra of quantum and classical observables \(q,p,z\). An important distinction with previous approaches is that the symplectic structure on C is disregarded, and the classical phase space is treated simply as a real vector space without additional structure. ### Statistical ensembles in configuration space The approach of Hall and Reginatto (2005) can be also traced to the wave function-based representation of classical mechanics, with the Madelung hydrodynamical model for the Schrodinger equation (Peres, 1995) as the pattern to follow (Salcedo, 2012). This QC hybrid is described by two real functions configuration space functions, \(\varrho(x,q)\) and \(S(x,q)\). In the purely quantum case these are defined via \[\psi(q)=\sqrt{\varrho(q)}e^{iS(q)/\hbar}, \tag{81}\] satisfy the pair of the Madelung equations (the \(q\) and \(\nabla_{q}\) dependent terms in Eqs. (84) and (85) below). The function \(\varrho\), being the probability density, is non-negative and normalized. When only the C system is present, the two functions define a classical mixed state \[\rho(x,k)=\varrho(x)\delta\big{(}k-\nabla S(x)\big{)}. \tag{82}\] The two functions again satisfy a pair of equations. The first one - the continuity equation -- is identical to the quantum case, and the second one, due to absence of the terms proportional to \(\hbar^{2}\) is the Hamilton-Jacobi equation. Taking observable in C or Q cases as functionals of \(\varrho\) and \(S\) it is possible to introduce the variational Poisson bracket and the Hamiltonian functional that generates the dynamics. The same construction is applied to the interacting QC systems. For example, for \[H=\frac{k^{2}}{2M}+\frac{p^{2}}{2m}+V(x,q) \tag{83}\] The equations that govern the two functions are \[\partial_{t}\varrho=-\frac{1}{M}\nabla_{x}\cdot(\varrho\nabla_{x }S)-\frac{1}{m}\nabla_{q}\cdot(\varrho\nabla_{q}S) \tag{84}\] \[\partial_{t}S=-\frac{1}{2M}\big{(}\nabla_{x}S\big{)}^{2}-\frac{1 }{2m}\big{(}\nabla_{q}S\big{)}^{2}+\frac{\hbar^{2}}{2m}\frac{\nabla_{q}^{2} \varrho^{1/2}}{\varrho^{1/2}}-V. \tag{85}\] The Poisson bracket is defined as \[\{\mathcal{A},\mathcal{B}\}=\int dxdq\left(\frac{\delta\mathcal{A}}{\delta \varrho}\frac{\delta\mathcal{B}}{\delta S}-\frac{\delta\mathcal{A}}{\delta S }\frac{\delta\mathcal{B}}{\delta\varrho}\right). \tag{86}\] The observables are represented by their expectation values. In particular, a phase space function \(f(x,k)\) and an operator \(\hat{A}\) result in \[\mathcal{F}=\int\!dxdq\varrho f\big{(}x,\nabla_{x}S\big{)},\qquad\mathcal{A} =\int\!dx\langle\Upsilon(x)|\hat{A}|\Upsilon(x)\rangle, \tag{87}\] where the QC wave function \(\langle q|\Upsilon(x)\rangle=\sqrt{\varrho(x,q)}e^{iS(x,q)/\hbar}\) satisfies a nonlinear Schrodinger equation \[i\hbar\frac{\partial\Upsilon}{\partial t}=\left(-\frac{\hbar^{2}}{2M}\nabla_{ x}^{2}-\frac{\hbar^{2}}{2m}\nabla_{q}^{2}+V+\frac{\hbar^{2}}{2M}\frac{\nabla_{x}^ {2}|\Upsilon|}{|\Upsilon|}\right)\Upsilon. \tag{88}\] Using the canonical pair \((\Upsilon,i\hbar\Upsilon^{*})\) it is possible to show that the scheme has a Lie bracket that is defined on the set of observables, and it reduces to the Poisson bracket and the commutator for purely C and Q systems, respectively. The Ehrenfest relations generalize to hybrid systems, and in particular the expectation values for the position and momentum observables of linearly coupled classical and quantum oscillators obey the classical equations of motion (Hall, 2008; Hall and Reginatto, 2005). However, beyond the usual issues that are brought by nonlinearity (potentially conflicting with some clauses of Requirement V), the bracket (86) of a general purely quantum observable with a general purely classical observable is not zero. As this result remains valid for \(H_{\rm int}\equiv 0\), it violates Requirement III. ## VI Phase space models The phase-space formulation of quantum mechanics provides an alternative way of analyzing hybrid quantum-classical systems. In this formulation, quantum and classical systems are described using functions on the phase space. Quantum states are described by their Wigner functions, and the exact quantum dynamics is obtained if the Poisson bracket is replaced with the Moyal bracket (Zachos et al., 2005). It is a convenient setting to study the classical limit (Peres, 1995; Landsman, 2017) and its partial version that is used to derive the hybrid schemes (Caro and Salcedo, 1999; Diosi et al., 2000; Amin and Walton, 2021). We first describe this representation and then present the main features of the phase space hybrid dynamics. ### Phase space quantum mechanics A (Weyl-ordered) operator \(\hat{A}\) on \(\mathcal{H}=\mathbb{L}^{2}(\mathbb{R}^{n})\) is represented as a phase space function \(A(q,p)\) on \(\mathcal{P}=\mathbb{R}^{2n}\) via the Wigner transform (Hillary et al., 1984; Zachos et al. 2005; Schlosshauer, 2007), \[A(q,p)=\frac{1}{(2\pi\hbar)^{n}}\int\!dye^{ipy/\hbar}\langle q+\tfrac{1}{2}y |\hat{A}|x-\tfrac{1}{2}y\rangle. \tag{89}\] Under this transformation the operator product on the Hilbert space is mapped into the \(\star\)-product of the phase space functions, \[\star:=\exp\left[\frac{i\hbar}{2}\left(\stackrel{{\leftarrow}}{{ \partial_{x}}}\stackrel{{\rightarrow}}{{\partial_{k}}}-\stackrel{{ \leftarrow}}{{\partial_{k}}}\stackrel{{\rightarrow}}{{ \partial_{x}}}\right)\right]\equiv\exp\left(\frac{i\hbar\mathbb{P}}{2} \right), \tag{90}\] and the commutator maps to the Moyal bracket \[[\hat{A},\hat{B}]\rightarrow[\![A,B]\!]:=\frac{1}{i\hbar}(A\star B-B\star A). \tag{91}\] Expansion of Eq. (90) shows that the Moyal bracket equals to the Poisson bracket plus correction terms, \[[\![A,B]\!]=\{A,B\}+\mathcal{O}(\hbar). \tag{92}\] It is easy to see that for quadratic functions, the Moyal bracket coincides with the Poisson bracket. The Wigner transform of a density operator results in the Wigner quasi-probability distribution, \(W_{\rho}\) (Sec. II.B). Its dynamics is governed by the quantum counterpart of Eq. (15) \[\partial_{t}W_{\rho}(q,p)=-[\![W_{\rho},H]\!]. \tag{93}\] The question of equivalence of quantum and classical descriptions makes sense in the following context. A positive initial Wigner function \(W(x,k,t=0)\) that corresponds to the quantum state \(\hat{\rho}(t=0)\) can be identified with the Liouville function, \(\rho_{\rm cl}(t=0)\gets W(t=0)\). This function is evolved classically by Eq. (4), and then the reverse identification is made: \(W(t)\leftarrow\rho_{\rm cl}(t)\). If this represents a valid quantum state \(\rho_{\rm qm}(t)\) the procedure is consistent. If, furthermore, the phase space expectation values, calculated with \(\rho_{\rm cl}(t)\) or, equivalently, the quantum expectations calculated with \(W_{\rho_{\rm cl}(t)}\) are the same as the expectations that are obtained with the quantum-evolved state \(\rho_{\rm qm}(t)\), the two descriptions are equivalent (Ahmadzadegan et al. 2016). ### Hybrid dynamics The first step in devising a phase space hybrid dynamics is to represent the entire system on the combined phase space \(\mathcal{P}\) with the coordinates \((q,p,x,k)\). Usual methods of reaching the classical limit (Zurek, 2003; Schlosshauer, 2007; Landsman, 2017), such as use of coherent states (Diosi et al., 2000), Moyal brackets (Caro and Salcedo, 1999) or their counterparts for various operator orderings (Amin and Walton, 2021) are adapted to taking the partial classical limit (over the variables \(x,k\)) and result in various hybrid schemes. It is obtained by keeping only terms up to the order \(\hbar\) in the derivatives of \(x\) and \(k\) in the \(\star\)-product. The result for an arbitrary quantization scheme in the C subsystem (and thus a general \(\star\)-product) give the explicit form of the phase space representation of the hybrid bracket Eq. (51), \[\{\![A,B]\!\} :=[\![A,B]\!]_{\rm Q}+\tfrac{1}{2}(A\star_{\rm Q}\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! classical system, and \(\delta\hat{q}=\hat{q}-\langle\hat{q}\rangle\) and \(\delta\hat{p}=\hat{p}-\langle\hat{p}\rangle\) are the operators for deviations from the mean (expectation) values. For a particular initial QC Gaussian state with non-zero QC correlations the evolution of \(W(q,p,x,k)\) leads to violation of the Heisenberg uncertainty relation for \(q\) and \(p\) after some evolution time \(t_{*}\). Hence \(W_{\rm qm}(q,p,t_{*})\) does not represent a valid quantum state \(\hat{\rho}_{\rm qm}(t)\), and the hybrid scheme breaks down. ## VII Stochastic dynamics A view that all (with a possible exception of gravity), classical systems are just effective descriptions of the underlying quantum systems, as well as difficulties of the known schemes that are based on the interaction of only Q and C subsystems, suggest adding to the discussion the mechanism that is responsible for the classicality of C (Diosi and Halliwell, 1998; Diosi et al. 2000). Its key element is decoherence, which from the path integral perspective manifests itself as washing out interference between histories of certain types of variables. Mathematically it is effected by some kind of coarse-graining procedure, such as coupling to a thermal bath or a measuring device, with subsequent tracing out of those degrees of freedom (Zurek, 2003; Schlosshauer, 2007; Busch et al., 2016). Diosi and Halliwell (1998) considered a (quasi-)classical particle \(M\) that is coupled to a quantum harmonic oscillator via \(H_{\rm int}=\lambda qx\). It is assumed that the classical particle continuously measures the quantum one and is coupled to the momentarily measured value \(\bar{q}\). With quantum state taken to be pure and normalized at each time step, the resulting stochastic system of equations consists of \[i\hbar\partial_{t}\psi =\big{(}\hat{H}_{\rm qm}+\lambda xq\big{)}\psi\] \[+\frac{ih}{2\sigma}\left(\frac{1}{2\sigma}(q-\langle q\rangle)^{2 }+(q-\langle q\rangle)\eta(t)\right)\psi, \tag{100}\] for Q and \[M\ddot{x}=-V^{\prime}(x)-\lambda\langle q\rangle-\lambda\sigma\eta(t), \tag{101}\] where \(\langle q\rangle=\langle\psi|\hat{q}|\psi\rangle\), parameter \(\sigma\) represents the effective resolution of the measurement (and needs to scale as \(\lambda^{-1}\), and \(\eta(t)\) is the Gaussian white noise with zero mean and delta correlation function. This system gives intuitively sensible results even when the quantum oscillator starts as a superposition superposition of well-separated localized states. This scheme is non-linear, but it is not a problem for a purposely effective description. Two other drawbacks are the perpetual purity of the quantum state and impossibility to describe separated Q and C systems when the coupling \(\lambda\to 0\). Once the decoherence is invoked there is no reason why not to construct a hybrid dynamics that will be a QC counterpart (Diosi, 2014; Oppenheim, 2018) of the trace preserving completely positive (TPCP) evolution of open quantum systems. First we introduce the QC density operator. We take a cue from the so-called zero discord quantum-classical states of quantum theory (Olivier and Zurek, 2001; Brodutch and Terno, 2010) that have a form \(\varrho=\sum_{z}p(z)|z\rangle_{C}\langle z|\otimes\rho_{\rm Q}(z)\), where \(|z\rangle_{C}\) form an orthonormal basis on \(\mathcal{H}_{\rm C}\), \(\rho_{\rm Q}(z)\) are density matrices on Q and the positive weights \(p(z)\) sum up to one. The hybrid density operator is defined as \[\varrho=:\int\!\mu(dz)\varrho(z):=\int\!\mu(dz)\rho_{\rm cl}(z)\rho_{\rm qm}( z),\qquad\mathrm{Tr}\varrho=1, \tag{102}\] where, as before, \(\rho_{\rm cl}(z)\) is the probability density on the classical phase space \(\mathcal{P}\) and \(\rho_{\rm qm}(z)\) is the operator-values function of \(z\in\mathcal{P}\). The most general TPCP evolution of the hybrid QC state is generalisation of the GKSL equation (Oppenheim, 2018; Oppenheim et al. 2023). It has the form \[\frac{\partial\varrho(z,t)}{\partial t}= -\{\varrho,H\}_{\hbar}+d_{0}^{\alpha\beta}L_{\alpha}\varrho L_{ \beta}^{\dagger}-\tfrac{1}{2}d_{0}^{\alpha\beta}[L_{\beta}^{\dagger}L_{\alpha },\varrho]_{+}\] \[+\sum_{n=1}^{n=2}(-1)^{n}\left(\frac{\partial^{n}}{\partial z_{i_ {1}}\ldots\partial z_{i_{n}}}\right)(d_{n,i_{1}\ldots i_{n}}^{00}\varrho)\] \[+\frac{\partial}{\partial z_{i}}\big{(}d_{1,i}^{0\alpha}L_{ \alpha}^{\dagger}\big{)}+\frac{\partial}{\partial z_{i}}\big{(}d_{1,i}^{\alpha 0}L_{\alpha} \varrho\big{)}, \tag{103}\] where the first line consists of the three terms of the GKSL equations. Here \(d_{0}\) is the matrix of Linbladian couplings, \(d_{1}\) encodes the strenght of CQ interaction, and \(d_{2}^{00}\) represent the unavoidable diffusion in the classical phase spaces. The matrices should satisfy \(2d_{2}^{00}\succeq d_{1}d_{0}^{-1}d_{1}^{\dagger}\), and \((I-d_{0}d_{0}^{-1})d_{1}=0\), where \(d_{0}^{-1}\) is the generalized inverse of the positive semi-definite matrix \(d_{0}\). Taking the system of coupled oscillators of Eq. (29) as an example (with \(d_{1}\equiv\lambda\)), the evolution becomes \[\partial_{t}\varrho=-\{[\![\varrho,H]\!]-\tfrac{1}{2}\kappa\{q,\{\varrho,q\}_ {\hbar}\}_{\hbar}+d_{2}\frac{\partial^{2}\varrho}{\partial k^{2}}+\gamma\frac {\partial k\varrho}{\partial k}, \tag{104}\] where the hybrid bracket of Eq. (38) that is explicitly given by \[\{[\![\varrho,H]\!]\}=\{\varrho,H_{\rm cl}\}+\{\varrho,H_{\rm qm}\}_{\hbar}- \tfrac{1}{2}d_{1}\big{(}q\partial_{k}\varrho+\partial_{k}\varrho\big{)}, \tag{105}\] where the decoherence and diffusion (that is dampened by a friction term with coupling \(\gamma\)) ensure that the QC density matrix remains positive. The three parameters should satisfy \(d_{2}\geqslant d_{1}^{2}/\kappa\). In general, if the dynamics is memoryless, there are two classes of these dynamics, one with finite sized jumps in the classical phase space and one which is continuous (Oppenheim et al., 2023). ###### Acknowledgements. I am grateful to my friends and colleagues Aida Ahmadzadegan, Denys Bondar, Aharon Brodutch, Laios Diosi, Flaminia Giacomini, Viqar Husain, Robert Mann, Jonathan Oppenheim, Asher Peres, and Cesare Tronci for numerous discussions and collaborations. Suggestions and critical comments of Flaminia Giacomini, Lorenzo Salcedo, and Cesare Tronci greatly contributed to this article.
2309.11061
Anomalous thermal transport across the superionic transition in ice
Superionic ices with highly mobile protons within the stable oxygen sub-lattice occupy an important proportion of the phase diagram of ice and widely exist in the interior of icy giants and throughout the universe. Understanding the thermal transport in superionic ice is vital for the thermal evolution of icy planets. However, it is highly challenging due to the extreme thermodynamic conditions and dynamical nature of protons, beyond the capability of the traditional lattice dynamics and empirical potential molecular dynamics approaches. In this work, by utilizing the deep potential molecular dynamics approach, we investigate the thermal conductivity of ice-VII and superionic ice-VII" along the isobar of $p = 30\ \rm{GPa}$. A non-monotonic trend of thermal conductivity with elevated temperature is observed. Through heat flux decomposition and trajectory-based spectra analysis, we show that the thermally-activated proton diffusion in ice-VII and superionic ice-VII" contribute significantly to heat convection, while the broadening in vibrational energy peaks and significant softening of transverse acoustic branches lead to a reduction in heat conduction. The competition between proton diffusion and phonon scattering results in anomalous thermal transport across the superionic transition in ice. This work unravels the important role of proton diffusion in the thermal transport of high-pressure ice. Our approach provides new insights into modeling the thermal transport and atomistic dynamics in superionic materials.
Rong Qiu, Qiyu Zeng, Han Wang, Dongdong Kang, Xiaoxiang Yu, Jiayu Dai
2023-09-20T04:51:26Z
http://arxiv.org/abs/2309.11061v1
# Anomalous thermal transport across the superionic transition in ice ###### Abstract Superionic ices with highly mobile protons within the stable oxygen sub-lattice occupy an important proportion of the phase diagram of ice and widely exist in the interior of icy giants and throughout the universe. Understanding the thermal transport in superionic ice is vital for the thermal evolution of icy planets. However, it is highly challenging due to the extreme thermodynamic conditions and dynamical nature of protons, beyond the capability of the traditional lattice dynamics and empirical potential molecular dynamics approaches. In this work, by utilizing the deep potential molecular dynamics approach, we investigate the thermal conductivity of ice-VII and superionic ice-VII" along the isobar of \(p=30\) GPa. A non-monotonic trend of thermal conductivity with elevated temperature is observed. Through heat flux decomposition and trajectory-based spectra analysis, we show that the thermally-activated proton diffusion in ice-VII and superionic ice-VII" contribute significantly to heat convection, while the broadening in vibrational energy peaks and significant softening of transverse acoustic branches lead to a reduction in heat conduction. The competition between proton diffusion and phonon scattering results in anomalous thermal transport across the superionic transition in ice. This work unravels the important role of proton diffusion in the thermal transport of high-pressure ice. Our approach provides new insights into modeling the thermal transport and atomistic dynamics in superionic materials. + Footnote †: Corresponding author: [email protected] + Footnote †: Corresponding author: [email protected] + Footnote †: Corresponding author: [email protected] As one of the most abundant substances in Earth and the universe, ice is of vital importance from a scientific perspective and attracts wide research interests. Especially, in the interior conditions of icy moons, where pressure ranges from 2 GPa to hundreds of GPa and temperature ranges from 300 K to 4000 K, high-pressure ice phases (VII/VII"/X) is expected to widely exist [1]. These phases present the same body-centered cubic (BCC) oxygen sub-lattice but differ in the dynamics of hydrogen atoms (protons)[2]. For the molecular crystal ice-VII, the orientation of hydrogen-bonding is disordered and continually changing as in hexagonal ice, obeying the 'ice rule' [3]. When temperature grows above thousands of Kelvin, ice-VII transforms into the superionic phase VII" [4]. It has been suggested that the suitable conditions for superionic ice lie deep inside the watery giants Uranus and Neptune and may be common throughout the Universe [1; 5]. VII" is characterized by highly mobile hydrogen ions (protons), behaving like a liquid and moving within the BCC oxygen sub-lattice. The difference in the behavior of protons can result in anomalies in thermodynamic and transport properties of ice. The occurrence and geodynamic behaviors of these high-pressure ice polymorphs (MPa-GPa range) have important effects on the thermal evolution of icy planets. On this issue, thermal conductivity serves a key role for in-depth understanding. However, despite the enormous phase transition regime and proton transfer dynamics explored by previous efforts [1; 6; 7; 8; 9], it seems likely that we still know little about the thermal transport properties of dense ice across superionic transition. Existing experimental efforts had been pursued to measure the thermal conductivity of ice-VII up to 20 GPa [10; 11], but still far from the condition of superionic regime due to the limitation of experimental techniques under extreme conditions. From a theoretical point of view, the dynamical nature of protons prevents the most commonly used tool, the lattice dynamics approach, from tackling these issue. Another way to obtain thermal conductivity is molecular dynamics simulation. However, _ab initio_ method requires an expensive computational cost to reach the long-time trajectories with large simulation size required for estimation of correlation function [12]. Moreover, the diverse local environments that characterize the different relevant phases of water make classical force fields unfit for an accurate simulation of their properties. Until now, the microscopic mechanism determining the thermal conductivity of superionic ice at high pressure remains unclear. Recent advances in machine-learning potential surface allow a full quantum-mechanical, _ab initio_ treatment of the interatomic interactions efficiently. The deep potential water model is reported to predict a phase diagram close to experiments [13], and its following applications have demonstrated its success in estimating the thermal conductivity of water at extreme conditions [14; 15]. Therefore, in this work, we adopt the DP-SCAN water model and conduct a series of deep potential molecular dynamics (DPMD) simulations to obtain the thermal conductivity of VII and VII", as well as diffusion coefficient, spectral energy density (SED), and dynamic struc ture factor (DSF), to understand the heat transport and to unravel the impact of mobile protons across superionic phase transition. _Computational Details_ The DP model was trained with the DeePMD-kit package [16; 17] using diverse ice crystal and liquid phase covering from ambient condition to extreme thermodynamic state (p = 50 GPa, T = 2400 K). The training data were obtained from density functional theory calculations using the strongly constrained and appropriated normed (SCAN) exchange-correlation functional. More details can be found in [13]. With DPMD simulation, the lattice thermal conductivity \(\kappa\) is obtained from the integration of the heat current autocorrelation function (HCACF), known as Green-Kubo formula [18]. We performed a series of DPMD simulations with the LAMMPS package [19]. A large supercell containing 1,296 atoms is used to overcome the size effect (see Fig. S1 in the SI). The timestep was set to 0.5 fs and the Nose-Hoover thermostat [20; 21] was employed in the NVT ensemble. After a thermalization stage of 20 ps, the ensemble is switched into the NVE ensemble to calculate the HCACF during the next 320 ps with the correlation time set to 32 ps. To provide a representative sample for the relevant statistical analysis, each (P, T) case repeats 20 times, with independent initial velocity distribution. We note that such computation complexity can hardly be achieved by the traditional AIMD method. _Non-monotonic behavior of thermal conductivity at elevated temperature._ The isobar of \(p=30\) GPa is chosen to investigate the thermal and proton transport of ice-VII and VII". As the temperature increases from 800 K to 1600 K, our DPMD simulations reproduce the superionic phase transition of ice reported in previous works [9; 13; 7]. The atomic trajectories at temperatures near the phase boundary are shown in Fig.1(a). More atomic trajectories can be seen in Fig. S2 in SI. We can easily identify the different behaviors of protons in the oxygen sub-lattice. At low temperatures, the hydrogen atoms are bonded and only vibrate within the O-H\(\cdots\)O bonds. At 1200 K close to the phase boundary, the hydrogen atoms begin to initiate hopping between different O-H\(\cdots\)O bonds but remain bonded with oxygen atoms. The proton can migrate from an O-H\(\cdots\)O bond to another, leading to a fast change in the orientation of water molecules. At higher temperatures, the protons diffuse freely out of the bcc oxygen sub-lattice. Namely, the system transits into a superionic phase. Correspondingly, \(\kappa\) exhibits a non-monotonic trend, as shown in Fig. 1(b). Firstly, \(\kappa\) decreases from \(6.12\pm 0.14\) Wm\({}^{-1}\)K\({}^{-1}\) at 800 K to a minimum value of \(5.37\pm 0.21\) Wm\({}^{-1}\)K\({}^{-1}\) at 1000 K. Then a significant increase in \(\kappa\) to a three-fold value of \(17.28\pm 0.67\) Wm\({}^{-1}\)K\({}^{-1}\) at T = 1200 K is observed. As the ice transits from VII phase into VII" phase, \(\kappa\) gradually increases and finally reaches a plateau value of \(22.72\pm 1.13\) Wm\({}^{-1}\)K\({}^{-1}\) at 1600 K. _Dominant role of proton diffusion in heat convection._ The anomalous non-monotonic trend of \(\kappa\) is attributed to the significant change in proton transport across the superionic transition. We extract the contributions of heat convection and conduction to \(\kappa\) (\(\kappa_{conv}\) and \(\kappa_{cond}\)) by decomposing the heat current into a heat convection term and a heat conduction term, respectively (see more details in SI). As depicted in Fig. 2(a), \(\kappa_{conv}\) shows a monotonic increasing trend with increasing temperature. At low temperatures, \(\kappa_{conv}\) is close to zero. At temperatures near the phase boundary, \(\kappa_{conv}\) increases sharply. At higher temperatures, \(\kappa_{conv}\) gradually converges to a plateau similar to \(\kappa\). \(\kappa_{conv}\) overwhelms the \(\kappa\) after the superionic transition, because of the fast diffusion of protons. The behavior of \(\kappa_{conv}\) dominates the dramatic increase and saturation of \(\kappa\), highlighting the importance of proton transfer dynamics in understanding the heat transport in ice-VII and superionic VII". Figure 1: (a) Atomic trajectories of ice-VII and VII” at different temperatures during a 20-ps long run. The oxygen and hydrogen atoms are orange and blue respectively. The cyan color is used to highlight the selected hydrogen atoms that undergo transitions from bonded states between two adjacent oxygen atoms to superionic states around different oxygen atoms. (b) Temperature-dependent thermal conductivity \(\kappa\) of ice-VII and VII” along the isobar of P = 30 GPa. The gray vertical dashed line denotes the VII-VII” phase boundary obtained from previous work [13]. We plot the mean square displacements (MSD) of oxygen and hydrogen atoms in Fig. S3 in SI. The diffusion coefficient can be estimated from the slope of MSD from DPMD trajectories. Oxygen atoms have a flat curve of MSD and thus a diffusion coefficient close to zero. In comparison, the MSD curves of proton show diffusion characteristics and different slopes at different temperatures. As shown in Fig. 2(b), the diffusion coefficient of proton \(D_{H}\) increases by two orders of magnitude, as the temperature increases from 1000 K to 1600 K. Besides, the diffusion behavior of proton at elevated temperatures can be well described as an Arrhenius-like process, which is given as \(D(T)=Ae^{-E_{a}/k_{B}T}\), where \(T\) is temperature, \(A\) is a prefactor, \(E_{a}\) is the activation energy of the hopping mechanism leading to particle diffusion, and \(k_{B}\) is Boltzmann constant. By fitting the \(D_{H}\) with the Arrhenius model, the \(E_{a}\) of ice-VII and VIT' is 1.77 eV and 0.36 eV respectively. The much smaller \(E_{a}\) of ice-VII' indicates the weaker confinement and stronger diffusion of protons in superionic phase. The temperature where the diffusion behavior of proton changes is consistent with the phase boundary of superionic transition. Except for the mass diffusion, we further investigate the thermal diffusion based on the dynamic structure factor \(S(k,\omega)\) (see more details in SI). The central Rayleigh peak encodes the thermal diffusion process as the wavenumber is small enough to reach the hydrodynamic regime [22; 23]. At the hydrodynamic limit, the shape of this central peak can be described by a Lorentzian function with peak width relating to the thermal diffusivity \(D_{T}\), which gives \(S(k,\omega)\propto 2D_{T}k^{2}/(\omega^{2}+(D_{T}k^{2})^{2})\). We calculate the \(S(k,\omega)\) of oxygen and hydrogen atoms. By fitting the DPMD results with the hydrodynamic expression, we present the normalized Rayleigh-Brillouin triplets for both H-contributed and O-contributed \(S(k,\omega)\) at a small wavenumber of \(k=0.07\)\(\mathbf{A}^{-1}\). As presented in Fig. 3, as temperature increases, the broadening of central Rayleigh peaks for protons is more significant than that for oxygen atoms, indicating a more significant increase in thermal diffusion of protons compared with that of oxygen atoms. The sharp increase in \(\kappa\) across the superionic transition stems from the dramatically enhanced diffusion of protons and thus sharply increased thermal diffusion of protons. _Transverse mode softening across superionic transition._ We also extract the contributions of heat conduction to \(\kappa\) (\(\kappa_{cond}\)). As presented in Fig. 4(a), \(\kappa_{cond}\) shows a monotonic decrease trend for ice-VII. The \(1/T\) dependence of \(\kappa_{cond}\) is a typical characteristic that reveals the dominant role of three-phonon scatterings [24]. At low temperatures, \(\kappa_{cond}\) is much larger than \(\kappa_{conv}\), leading to a decreasing trend. Therefore, the anomalous non-monotonic trend of \(\kappa\) originates from the competing mechanism of heat conduction and convection. We also note a discontinuity of \(\kappa_{cond}\) near the phase boundary. On one hand, it can be attributed to the density decrease accompanied by the VII-VII" phase transition (see Fig. S5 in SI). On the other hand, a softening of the transverse acoustic modes is observed. Here we calculate the spectra energy density \(C(k,\omega)\) from DPMD Figure 3: Normalized dynamic structure factor \(S(k,\omega)\) of hydrogen and oxygen atoms along the isobar of p = 30 GPa at \(k=0.07\)\(\mathbf{A}^{-1}\). Figure 2: Temperature dependence of (a) thermal conductivity contributed by heat conduction \(\kappa_{conv}\) (b) diffusion coefficient along the isobar of p = 30 GPa. Blue and orange markers denote the results of ice-VII and superionic VII”, respectively. The gray vertical dashed line denotes the VII-VII” phase boundary obtained from previous work [13]. trajectories (see details in SI). The \(C(\mathbf{k},\omega)\) provides information on collective vibrational modes (group velocity, lifetime) inside the complex ice polymorph. As shown in Fig.4(b), we present the \(C(k,\omega)\) in the low-frequency regime (\(\nu\leq 20\) THz), which contains the longitudinal and transverse acoustic branches that make dominant contributions to \(\kappa_{cond}\). As the wavenumber approaches the center of the first Brillouin zone, the dispersion relationship exhibits a linear behavior, and the sound velocity can be extracted. For ice-VII, the sound velocities of both longitudinal and transverse acoustic branches decrease slightly with increasing temperature. For ice-VII\({}^{\text{\textregistered}}\), the sound velocity of longitudinal acoustic branches maintains a slight decreasing trend while the sound velocity of transverse acoustic branches shows a sudden larger decrease across the superionic transition. A significant softening of the transverse acoustic modes across the superionic transition is observed with decreased group velocity from 0.96 km/s to 0.74 km/s. Moreover, as temperature increases, the vibrational energy peaks of both longitudinal and transverse acoustic branches exhibit broadening, corresponding to the reduction in phonon lifetimes. Overall, by summarizing our results, we can understand the anomalous behavior of \(\kappa\) in ice-VII across the superionic transition. At moderate temperatures, the propagation of lattice vibration mode dominates the \(\kappa\) and exhibits a typical \(1/T\) dependence due to the three-phonon scattering process. At T = 1000 K, the onset of hydrogen diffusion between O-H-.O pairs leads to an exponential increase by two orders of magnitude in the diffusion coefficient of protons. These protons hop within the oxygen-formed BCC sub-lattice and carry heat, creating a non-negligible contribution via heat convection. As temperature increases above the superionic transition threshold (T = 1250 K), the first-order phase transition occurs. The significantly enhanced mobility of hydrogen, combined with softening in transverse acoustic branches, leads to a saturated value above T = 1300 K. Under such conditions, the diffusing protons can experience a stronger thermal diffusion process as compared with oxygen atoms. In conclusion, by utilizing the newly developed DPSCAN water model, we investigate the microscopic mechanism behind the anomalous thermal and proton transport of high-pressure ice across the superionic transition. We explain the anomalous increasing trend of \(\kappa\) with elevated temperature and illustrate the important role of proton diffusion in superionic ice. To overcome the limitation of the traditional lattice dynamics approach which requires high-order force constants to correct the quasi-harmonic approximation, here we extract all the mass transport, heat transport, and collective dynamics from long-time large-scale molecular dynamics trajectories, without any assumptions or approximations made. These approaches combined with _ab initio_ accurate deep neural network potential energy surface model, can be applied to various complex materials, including proton-disorder ice polymorphs, proton-diffusion superionic crystal, and amorphous materials. ## Acknowledgment This work was supported by the National Key R&D Program of China under Grant No. 2017YFA0403200, the NSAF under Grant No. U1830206, the Science and Technology Innovation Program of Hunan Province under Grant No. 2021RC4026. Figure 4: Temperature dependence of (a) thermal conductivity contributed by heat conduction \(\kappa_{cond}\) and (b) normalized spectra energy density \(C(k,\omega)\), where the direction of wavevector \(\mathbf{k}=(n_{x}\Delta k_{x},0,0)\) is set to \(x\) direction with the wavenumber resolution of \(\Delta k_{x}=2\pi/L_{x}\sim 0.07\) Å\({}^{-1}\). The red and blue dotted line denotes the linear dispersion relationship for longitudinal and transverse acoustic branches, respectively.
2309.07603
Warped product Quasi Bi-slant Submanifolds of Kaehler Manifolds
In this paper, we introduce the notion of warped product quasi bi-slant submanifolds in Kaehler manifolds. We have shown that every warped product quasi bi-slant submanifold in a Kaehler manifold is either a Riemannian product or a warped product quasi hemi slant submanifold. Furthermore, we provide examples for both cases.
Mehraj Ahmad Lone, Prince Majeed
2023-09-14T11:09:37Z
http://arxiv.org/abs/2309.07603v1
# Warped product Quasi Bi-slant Submanifolds of Kaehler Manifolds ###### Abstract In this paper, we introduce the notion of warped product quasi bi-slant submanifolds in Kaehler manifolds. We have shown that every warped product quasi bi-slant submanifold in a Kaehler manifold is either a Riemannian product or a warped product quasi hemi slant submanifold. Furthermore, we provide examples for both cases. H emi-slant submanifolds, Quasi bi-slant submanifolds, Warped products. ## 1 Introduction Chen[5] introduced the notion of slant submanifolds, the initial findings on slant submanifolds were collected in his book [6]. Numerous geometer groups continue to study and conduct research on this idea of submanifolds. Recently, the related literature of slant submanifolds has been compiled in the form of two books by Chen, Shahid and Solamy (see [15, 16]). During last decade, many generalizations and extensions of slant submanifolds have been introduced, like: semi-slant, pointwise slant, hemi-slant, pointwise hemislant and many more. The related literature of these kind of generalizations can be be found in (see, [11, 17, 18, 20, 22]). A more generic class of submanifolds in the form of bi-slant submanifolds was introduced by Cabrerizo and Cariazo [4]. This class of submanifolds acts as a natural generalization of CR, semi-slant, slant, hemi-slant submanifolds [18, 20, 23]. Further the extended notion of pointwise bi-slant submanifolds of Kaehler manifolds can be found in [14]. Etayo[17] introduced the idea of pointwise slant submanifolds as an extension of slant submanifolds and gave them the label quasi-slant submanifolds. Prasad, Shukla, and Haseeb [24] recently proposed the notion of quasi hemi-slant submanifolds of Kaehler manifolds. This notion of quasi hemi-slant submanifolds was generalised by Prasad, Akyol, Verma, and Kumar [25] to a more generic class of submanifolds in the form of quasi bi-slant submanifolds of Kaehler manifolds. They established the prerequisites for the integrability of the distributions used in the definition of such submanifolds. Bishop and O'Neill in \(1960s\) introduced the concept of warped product manifolds. These manifolds find their applications both in physics as well as in Mathematics. Since then the study of warped product submanifolds has been investigated by many geometers (see, [1, 9, 10, 12]). In particular, Chen started looking these warped products as submanifolds of different kinds of manifolds (see, [7, 8]). In Kaehlerian settings, he proved besides CR- products the non-existence of warped products of the form \(N^{\perp}\times_{f}N^{T}\), where \(N^{\perp}\), \(N^{T}\) is a totally real and holomorphic submanifold, respectively. Now from the past two decades this area of research is an active area of research among many of the geometers and theoretical physicists. For the overall development of the subject we refer[13]. Now while importing the survey of warped products to slant cases, Sahin in [27] proved the non-existence of semi-slant warped products in any Kaehler manifold. Then in [29] he extended the study to pointwise semi-slant warped products of Kaeherian manifolds. Uddin, Chen and Solamy [31] studied warped product bi-slant submanifolds in Kaehler manifolds. In this paper we have studied the notion of warped product quasi bi-slant submanifolds in Kaehler manifolds, we proved that every warped product quasi bi-slant submanifold in a Kaehler manifold is either a Riemannian product or a warped product quasi hemi slant submanifold. Moreover, we provide the examples of both the cases. ## 2 Preliminaries Let \((\bar{M},J,g)\) be an almost Hermitian manifold with an almost complex structure \(J\) and a Riemannian metric \(g\) such that \[J^{2}=-I, \tag{2.1}\] \[g(JX,JY)=g(X,Y) \tag{2.2}\] for any \(X,Y\in\Gamma(T\bar{M})\), where \(I\) is the identity map and \(\Gamma(T\bar{M})\) denotes the set of all vector fields of \(\bar{M}\). Let \(\bar{\nabla}\) denotes the Levi-Civita connection on \(\bar{M}\) with respect to the Riemannian metric \(g\). If the almost complex structure \(J\) satifies \[(\bar{\nabla}_{X}J)Y=0, \tag{2.3}\] for any vector \(X,Y\in\Gamma(T\bar{M})\), the \(\bar{M}\) is called a Kaehler manifold. Let \(M\) be a Riemannian manifold isometrically immersed in \(\bar{M}\) and we denote by the symbol \(g\) the Riemannian metric induced on \(M\). Let \(\Gamma(TM)\) denote the Lie algebra of vector fields in \(M\) and \(\Gamma(T^{\perp}M)\), the set of all vector fields normal to \(M\). If \(\nabla\) be the induced Levi-Civita connection on \(M\), the Gauss and Weingarten formulas are respectively given by \[\bar{\nabla}_{X}Y=\nabla_{X}Y+\sigma(X,Y), \tag{2.4}\] and \[\bar{\nabla}_{X}N=-A_{N}X+\nabla_{X}^{\perp}N, \tag{2.5}\] for any \(X,Y\in\Gamma(TM)\) and \(N\in\Gamma(T^{\perp}M)\), where \(\nabla^{\perp}\) is the normal connection on \(T^{\perp}M\) and \(A\) the shape operator. The shape operator and the second fundamental form of \(M\) are related by \[g(A_{N}X,Y)=g(\sigma(X,Y),N), \tag{2.6}\] for any \(X,Y\in\Gamma(TM)\) and \(N\in\Gamma(T^{\perp}M)\), and \(g\) denotes the induced metric on \(M\) as well as the metric on \(\bar{M}\). For a tangent vector field \(X\) and a normal vector field \(N\) of \(M\), we can write \[JX=\phi X+\omega X, \tag{2.7}\] where \(\phi X\) and \(\omega X\) are the tangential and normal components of \(JX\) on \(M\) respectively. Similarly for \(N\in\Gamma(T^{\perp}N)\), we have \[JN=BN+CN, \tag{2.8}\] where \(BN\) and \(CN\) are tangential and normal components of \(JN\) on \(M\) respectively. Moreover, from (2.2), (2.7) and (2.8), we have \[g(TX,Y)=g(X,TY), \tag{2.9}\] for any \(X,Y\in\Gamma(TM)\). We can now specify the following classes of submanifolds of Hermitian manifolds for later use: (1) A submanifold \(M\) of an almost Hermitian manifold \(\bar{M}\) is said to be slant (see [5]), if for each non-zero vector \(X\) tangent to \(M\), the angle \(\theta(X)\) between \(JX\) and \(T_{p}M\) is a constant, i.e., it does not depend on the choice of \(p\in M\) and \(X\in T_{p}M\). In this case, the angle \(\theta\) is called the slant angle of the submanifold. A slant submanifold \(M\) is called proper slant submanifold if \(\theta\neq 0,\frac{\pi}{2}\). (2) A submanifold \(M\) of an almost Hermitian manifold \(\bar{M}\) is said to be invariant(holomorphic or complex) submanifold (see [5]), if \(J(T_{p}M)\subseteq T_{p}(M)\) for every point \(p\in M\). (3) A submanifold \(M\) of an almost Hermitian manifold \(\bar{M}\) is said to be anti-invariant (totally real) submanifold (see [7]), if \(J(T_{p}M)\subseteq T_{p}^{\perp}(M)\) for every point \(p\in M\). (4) A submanifold \(M\) of an almost Hermitian manifold \(\bar{M}\) is said to be semi-invariant (see [3]), if there exist two orthogonal complementary distributions \(D\) and \(D^{\perp}\) on M such that \[TM=D\oplus D^{\perp},\] where \(D\) is invariant and \(D^{\perp}\) is anti-invariant. (5) A submanifold \(M\) of an almost Hermitian manifold \(\bar{M}\) is said to be semi-slant [23], if there exist two orthogonal complementary distributions \(D\) and \(D_{\theta}\) on \(M\) such that \[TM=D\oplus D_{\theta},\] where \(D\) is invariant and \(D_{\theta}\) is slant with slant angle \(\theta\). In this case, the angle \(\theta\) is called semi-slant angle. (6) A submanifold \(M\) of an almost Hermitian manifold \(\bar{M}\) is said to be hemis-slant (see, [20, 22]), if there exist two orthogonal complementary distributions \(D_{\theta}\) and \(D^{\perp}\) on \(M\) such that \[TM=D_{\theta}\oplus D^{\perp},\] where \(D_{\theta}\) is slant with slant angle \(\theta\) and \(D^{\perp}\) is anti-invariant. In this case, the angle \(\theta\) is called hemi-slant angle. **Definition 2.1**.: Let \(M\) be a submanifold of an almost Hermitian manifold \(\bar{M}\). Then, we say \(M\) is a bi-slant submanifold of \(\bar{M}\) if there exists a pair of orthogonal distributions \(D_{1}\) and \(D_{2}\) of \(M\), at a point \(p\in M\) such that (a) \(TM=D_{1}\oplus D_{2}\); (b) \(JD_{1}\perp D_{2}\) and \(JD_{2}\perp D_{1}\); (c) The distributions \(D_{1},D_{2}\) are pointwise slant with slant functions \(\theta_{1},\theta_{2}\), respectively. The pair \(\{\theta_{1},\theta_{2}\}\) of slant functions is called the bi-slant function. A pointwise bi-slant submanifold \(M\) is called proper if its bi-slant function satisfies \(\theta_{1},\theta_{2}\neq 0,\frac{\pi}{2}\) and both \(\theta_{1},\theta_{2}\) are not constant on \(M\). ## 3 Quasi bi-slant submanifolds of Kaehler manifolds In this section, we define and study quasi bi-slant submanifolds of Kaehler manifolds. **Definition 3.1**.: A submanifold \(M\) of an almost Hermitian manifold \(\bar{M}\) is called a quasi bi-slant submanifold if there exist distributions \(D\), \(D_{1}\) and \(D_{2}\) such that: (a) \(TM\) admits the orthogonal direct decomposition as \[TM=D\oplus D_{1}\oplus D_{2};\] (b) \(J(D)=D\) i.e., \(D\) is invariant; (c) \(J(D_{1})\perp D_{2}\); (d) For any non-zero vector field \(X\in(D_{1})_{x}\); \(x\in M\); the angle \(\theta_{1}\) between \(JX\) and \((D_{1})_{x}\) is constant and independent of the choice of point \(x\) and \(X\) in \((D_{1})_{x}\); (e) For any non-zero vector field \(Z\in(D_{2})_{y}\); \(y\in M\); the angle \(\theta_{2}\) between \(JZ\) and \((D_{2})_{y}\) is constant and independent of the choice of point \(y\) and \(Z\) in \((D_{2})_{y}\); The angles \(\theta_{1}\) and \(\theta_{1}\) are called slant angles of quasi bi-slant submanifold. _Remark 3.2_.: We can generalize the above definition by taking \(TM=D\oplus D_{\theta_{1}}\oplus D_{\theta_{2}}...\oplus D_{\theta_{n}}\). Hence we can define multi-slant submanifolds, quasi multi-slant submanifolds etc. Let \(M\) be a quasi bi-slant submanifold of an almost Hermitian manifold \(\bar{M}\). We denote the projections of \(X\in\Gamma(TM)\) on the distributions \(D\), \(D_{1}\) and \(D_{2}\) by \(P\), \(Q\) and \(R\), respectively. Then we can write, for any \(X\in\Gamma(TM)\) \[X=PX+QX+RX, \tag{3.1}\] we, can write \[JX=\phi X+\omega X, \tag{3.2}\] where \(\phi X\) and \(\omega X\) are tangential and normal components of \(JX\) on \(M\), respectively. Using (3.1) and (3.2), we obtain \[JX = JPX+JQX+JRX \tag{3.3}\] \[= \phi PX+\omega PX+\phi QX+\omega QX+\phi RX+\omega RX.\] Since \(JD=D\), we have \(\omega PX=0\). Therefore, we get \[JX=\phi PX+\phi QX+\omega QX+\phi RX+\omega RX. \tag{3.4}\] This means, for any \(X\in\Gamma(TM)\), we have \[\phi X=\phi PX+\phi QX+\phi RX\ \ \mbox{and}\ \ \omega X=\omega QX+\omega RX.\] Thus, we have the following decomposition \[J(TM)\subset D\oplus\phi D_{1}\oplus\omega D_{1}\oplus\phi D_{2}\oplus\omega D _{2}.\] Since \(\omega D_{1}\in(T^{\perp}M)\) and \(\omega D_{2}\in(T^{\perp}M)\), we have \[T^{\perp}M=\omega D_{1}\oplus\omega D_{2}\oplus\mu, \tag{3.5}\] where \(\mu\) is the orthogonal complement of \(\omega D_{1}\oplus\omega D_{2}\) in \((T^{\perp}M)\) and it is invariant with respect to \(J\). For any \(Z\in\Gamma(T^{\perp}M)\), we put \[JZ=BZ+CZ,\] where \(BZ\in\Gamma(TM)\) and \(CZ\in\Gamma(T^{\perp}M)\). **Lemma 3.3**.: [25] _Let \(M\) be a quasi bi-slant submanifold of an almost Hermitian manifold \(\bar{M}\), Then_ _(i) \(\phi^{2}X=-(\cos^{2}\theta_{1})X\),_ _(ii) \(g(\phi X,\phi Y)=(\cos^{2}\theta_{1})g(X,Y)\),_ _(iii) \(g(\omega X,\omega Y)=(\sin^{2}\theta_{1})g(X,Y)\)_ _for any \(X,Y\in\Gamma(D1)\), where \(\theta_{1}\) denotes the slant angle of \(D_{1}\)._ **Lemma 3.4**.: [25] _Let \(M\) be a quasi bi-slant submanifold of an almost Hermitian manifold \(\bar{M}\), Then_ _(i) \(\phi^{2}Z=-(\cos^{2}\theta_{2})Z\),_ _(ii) \(g(\phi Z,\phi W)=(\cos^{2}\theta_{2})g(Z,W)\),_ _(iii) \(g(\omega Z,\omega W)=(\sin^{2}\theta_{2})g(Z,W)\)_ _for any \(Z,W\in\Gamma(D2)\), where \(\theta_{2}\) denotes the slant angle of \(D_{2}\)._ ## 4 Some Results on quasi bi-slant submanifolds For a proper quasi bi-slant submanifold \(M\) of a kaehler manifold \(\bar{M}\), the normal bundle of \(M\) is decomposed as \[T^{\perp}M=\omega D_{1}\oplus\omega D_{2}\oplus\mu, \tag{4.1}\] where \(\mu\) is the orthogonal complement of \(\omega D_{1}\oplus\omega D_{2}\) in \((T^{\perp}M)\) and it is invariant with respect to \(J\). The following results for proper quasi bi-slant submanifolds is given as: **Proposition 4.1**.: _Let \(M\) be a proper quasi bi-slant submanifold of a Kaehler manifold \(\bar{M}\). Then, we have_ \[g(\nabla_{X}Y,\phi Z) = (\cos^{2}\theta_{1})g(\nabla_{X}QY,\phi Z)+g(A_{\omega\phi QY} \phi Z,X)-(\sin^{2}\theta_{1})g(A_{\omega Z}X,QY) \tag{4.2}\] \[+(\cos^{2}\theta_{2})g(\nabla_{X}RY,\phi Z)+g(A_{\omega\phi RY} \phi Z,X)-(\sin^{2}\theta_{2})g(A_{\omega Z}X,RY)\] \[+g(\nabla_{X}^{\perp}\omega Z,\omega\phi QY)+g(\nabla_{X}^{\perp }\omega Z,\omega\phi RY)+g(A_{\omega QY}Z,X)\] \[+g(A_{\omega RY}Z,X),\] _for any \(X\in\Gamma(D_{1})\) and \(Y,Z\in\Gamma(D_{1}\oplus D_{2})\), where \(\theta_{1}\) and \(\theta_{2}\) are slant angles of slant distribution \(D_{1}\) and \(D_{2}\), respectively._ Proof.: For any \(X\in\Gamma(D_{1})\) and \(Y,Z\in\Gamma(D_{1}\oplus D_{2})\), we have \[g(\nabla_{X}Y,\phi Z)=g(\bar{\nabla}_{X}(QY+RY),JZ)-g(\bar{\nabla}_{X}(QY+RY),\omega Z).\] Using (2.1), (2.2), (2.4) and (3.2), we have \[g(\nabla_{X}Y,\phi Z) = -g(J\bar{\nabla}_{X}QY,Z)-g(J\bar{\nabla}_{X}RY,Z)-g(\sigma(X,QY),\omega Z)\] \[-g(\sigma(X,RY),\omega Z).\] Then by applying \(\bar{\nabla}J=0\), and using (2.6) and (3.2), we obtain \[g(\nabla_{X}Y,\phi Z) = -g(\bar{\nabla}_{X}JQY,Z)-g(\bar{\nabla}_{X}JRY,Z)-g(\sigma(X,QY),\omega Z)\] \[-g(\sigma(X,RY),\omega Z)\] \[= -g(\bar{\nabla}_{X}\phi QY,Z)-g(\bar{\nabla}_{X}\omega QY,Z)-g( \bar{\nabla}_{X}\phi RY,Z)\] \[-g(\bar{\nabla}_{X}\omega RY,Z)-g(A_{\omega Z}X,QY)-g(A_{\omega Z }X,RY).\] On simplifying above equation and using (2.5), we arrive at \[g(\nabla_{X}Y,\phi Z) = -g(\bar{\nabla}_{X}\phi QY,Z)+g(A_{\omega QY}X,Z)-g(\bar{\nabla}_ {X}\phi RY,Z)\] \[+g(A_{\omega RY}X,Z)-g(A_{\omega Z}X,QY)-g(A_{\omega Z}X,RY).\] Using (2.2), the above equation can be re-written as \[g(\nabla_{X}Y,\phi Z) = -g(J\bar{\nabla}_{X}\phi QY,JZ)+g(A_{\omega QY}X,Z)-g(J\bar{ \nabla}_{X}\phi RY,JZ)\] \[+g(A_{\omega RY}X,Z)-g(A_{\omega Z}X,QY)-g(A_{\omega Z}X,RY).\] Since, the shape operator \(A\) is self-adjoint, it follows from (2.3), (2.6) and (3.2), we get \[g(\nabla_{X}Y,\phi Z) = -g(\bar{\nabla}_{X}J\phi QY,\phi Z)-g(\bar{\nabla}_{X}J\phi QY, \omega Z)-g(\bar{\nabla}_{X}J\phi RY,\phi Z)\] \[-g(\bar{\nabla}_{X}J\phi RY,\omega Z)+g(A_{\omega QY}Z,X)+g(A_{ \omega RY}Z,X)\] \[-g(A_{\omega Z}X,QY)-g(A_{\omega Z}X,RY)\] \[= -g(\bar{\nabla}\chi\phi^{2}QY,\phi Z)-g(\bar{\nabla}\chi\omega \phi QY,\phi Z)-g(\bar{\nabla}_{X}\phi^{2}RY,\phi Z)\] \[-g(\bar{\nabla}\chi\omega\phi RY,\phi Z)-g(\bar{\nabla}\chi\phi^{ 2}QY,\omega Z)-g(\bar{\nabla}\chi\omega\phi QY,\omega Z)\] \[-g(\bar{\nabla}\chi\phi^{2}RY,\omega Z)-g(\bar{\nabla}\chi\omega \phi RY,\omega Z)+g(A_{\omega QY}Z,X)\] \[+g(A_{\omega RY}Z,X)-g(A_{\omega Z}X,QY)-g(A_{\omega Z}X,RY).\] Now, using (2.5), Lemma 3.3 and Lemma 3.4, we obtain \[g(\nabla_{X}Y,\phi Z) = (\cos^{2}\theta_{1})g(\nabla_{X}QY,\phi Z)+g(A_{\omega\phi QY}X, \phi Z)+(\cos^{2}\theta_{2})g(\nabla_{X}RY,\phi Z)\] \[+g(A_{\omega\phi RY}X,\phi Z)+(\cos^{2}\theta_{1})g(\bar{\nabla}_ {X}QY,\omega Z)+g(\omega\phi QY,\bar{\nabla}_{X}\omega Z)\] \[(\cos^{2}\theta_{2})g(\bar{\nabla}_{X}RY,\omega Z)+g(\phi\phi RY, \bar{\nabla}_{X}\omega Z)+g(A_{\omega QY}Z,X)\] \[+g(A_{\omega RY}Z,X)-g(A_{\omega Z}X,QY)-g(A_{\omega Z}X,RY).\] Again using (2.3), (2.5) and (2.6), we arrive at \[g(\nabla_{X}Y,\phi Z) = (\cos^{2}\theta_{1})g(\nabla_{X}QY,\phi Z)+g(A_{\omega\phi QY} \phi Z,X)+(\cos^{2}\theta_{2})g(\nabla_{X}RY,\phi Z)\] \[+g(A_{\omega\phi RY}\phi Z,X)+(\cos^{2}\theta_{1})g(A_{\omega Z}X,QY)+g(\nabla_{X}^{\perp}\omega Z,\omega\phi QY)\] \[(\cos^{2}\theta_{2})g(A_{\omega Z}X,RY)+g(\nabla_{X}^{\perp} \omega Z,\omega\phi RY)+g(A_{\omega QY}Z,X)\] \[+g(A_{\omega RY}Z,X)-g(A_{\omega Z}X,QY)-g(A_{\omega Z}X,RY).\] Now, from above relation, the desired result follows. Hence, the proof is complete. **Proposition 4.2**.: _Let \(M\) be a proper quasi bi-slant submanifold of a Kaehler manifold \(\bar{M}\). Then, we have_ \[g([Y,Z],\phi X) = g(A_{\omega\phi Z}\phi X,Y)-g(A_{\omega\phi Y}\phi X,Z)+g(\nabla _{Y}^{\perp}\omega X,\omega\phi Z)\] \[-g(\nabla_{Z}^{\perp}\omega X,\omega\phi Y)+g(A_{\omega Z}X,Y)-g( A_{\omega Y}X,Z)\] _for any \(X\in\Gamma(D_{1})\) and \(Y,Z\in\Gamma(D_{1}\oplus D_{2})\), where \(\theta_{1}\) and \(\theta_{2}\) are slant angles of slant distribution \(D_{1}\) and \(D_{2}\), respectively._ Proof.: In a similar fashion, as Proposition 4.1, we can derive \[g(\nabla_{Z}Y,\phi X) = (\cos^{2}\theta_{1})g(\nabla_{QZ}QY,\phi X)+g(A_{\omega\phi QY} \phi X,QZ)-(\sin^{2}\theta_{1})g(A_{\omega X}QZ,QY) \tag{4.4}\] \[+(\cos^{2}\theta_{2})g(\nabla_{RZ}RY,\phi X)+g(A_{\omega\phi RY} \phi X,RZ)-(\sin^{2}\theta_{2})g(A_{\omega X}RZ,RY)\] \[+g(\nabla_{Z}^{\perp}\omega X,\omega\phi QY)+g(\nabla_{Z}^{\perp} \omega X,\omega\phi RY)+g(A_{\omega QY}X,Z)\] \[+g(A_{\omega RY}X,Z),\] for \(X\in\Gamma(D)\) and \(Y,Z\in\Gamma(D_{1}\oplus D_{2})\). Interchanging \(Y\) and \(Z\) in (4.4) yields \[g(\nabla_{Y}Z,\phi X) = (\cos^{2}\theta_{1})g(\nabla_{QY}QZ,\phi X)+g(A_{\omega\phi QZ} \phi X,QY)-(\sin^{2}\theta_{1})g(A_{\omega X}QY,QZ) \tag{4.5}\] \[+(\cos^{2}\theta_{2})g(\nabla_{RY}RZ,\phi X)+g(A_{\omega\phi RZ} \phi X,RY)-(\sin^{2}\theta_{2})g(A_{\omega X}RY,RZ)\] \[+g(\nabla_{Y}^{\perp}\omega X,\omega\phi QZ)+g(\nabla_{Y}^{\perp} \omega X,\omega\phi RZ)+g(A_{\omega QZ}X,Y)\] \[+g(A_{\omega RZ}X,Y).\] Then after using symmetry of shape operator, (3.1) and subtracting (4.4) from (4.5), we obtain (4.3). Hence the proof is complete. ## 5 Warped product quasi bi-slant submanifolds of Kaehler manifold Let \((M_{1},g_{1})\) and \((M_{2},g_{2})\) be two Riemannian manifolds and \(f>0\), be a positive differentiable function on \(M_{1}\). Consider the product manifold \(M_{1}\times M_{2}\) with its canonical projections \(\pi:M_{1}\times M_{2}\to M_{1}\) and \(\rho:M_{1}\times M_{2}\to M_{2}\). The warped product \(M=M_{1}\times_{f}M_{2}\) is the product manifold \(M_{1}\times M_{2}\) equipped with the Riemannian metric \(g\) such that \[g(X,Y)=g_{1}(\pi_{*}(X),\pi_{*}(Y))+(f\circ\pi)^{2}g_{2}(\rho_{*}(X),\rho_{*}(Y))\] for any tangent vector \(X,Y\in TM\), where \(*\) is the symbol for the tangent maps. It was proved in [2] that for any \(X\in TM_{1}\) and \(Z\in TM_{2}\), the following holds \[\nabla_{X}Z=\nabla_{Z}X=(Xlnf)Z \tag{5.1}\] where \(\nabla\) denotes the Levi-Civita connection of \(g\) on \(M\). A warped product manifold \(M=M_{1}\times_{f}M_{2}\) is said to be trivial if the warping function \(f\) is constant. If \(M=M_{1}\times_{f}M_{2}\) is a warped product manifold then \(M_{1}\) is totally geodesic and \(M_{2}\) is a totally umbilical (see [2, 8]). From now onwards, we assume the ambient manifold \(\bar{M}\) is Kaehler manifold and \(M\) is quasi bi-slant submanifold in \(\bar{M}\). Now we give the following useful lemma for later use. **Lemma 5.1**.: _Let \(M=M_{1}\times_{f}M_{2}\), where \(M_{2}=M_{\theta_{1}}\times M_{\theta_{2}}\) be a warped product quasi bi-slant submanifold of a Kaehler manifold \(\bar{M}\). Then, we have_ \[g(\sigma(X,Y),\omega Z)=g(\sigma(X,Z),\omega QY)+g(\sigma(X,Z),\omega RY), \tag{5.2}\] _for any \(X,Z\in\Gamma(M_{1})\) and \(Y\in\Gamma(M_{2})\)._ Proof.: For any \(X,Y\in\Gamma(M_{1})\) and \(Z\in\Gamma(M_{2})\), we have \[g(\sigma(X,Y),\omega Z) = g(\bar{\nabla}_{X}Y,\omega Z)\] \[= g(\bar{\nabla}_{X}Y,JZ)-g(\bar{\nabla}_{X}Y,\phi Z).\] Using (2.2), (2.3) and the orthogonality of vector fields given in condition \((c)\) of Definition 3.1, we find \[g(\sigma(X,Y),\omega Z)=-g(\bar{\nabla}_{X}JY,Z)+g(\bar{\nabla}_{X}\phi Z,Y).\] Using (2.4), (3.1), (3.2), (5.1) and orthogonality of vector fields, we obtain \[g(\sigma(X,Y),\omega Z) = -g(\bar{\nabla}_{X}\phi QY,Z)-g(\bar{\nabla}_{X}\omega QY,Z)-g( \bar{\nabla}_{X}\phi RY,Z)\] \[-g(\bar{\nabla}_{X}\omega RY,Z)-(Xlnf)g(\phi Z,Y).\] \[= -g(\bar{\nabla}_{X}\phi Y,Z)-g(\bar{\nabla}_{X}\omega QY,Z)-g( \bar{\nabla}_{X}\omega RY,Z).\] Thus, using (2.5) and (5.1), we arrive at \[g(\sigma(X,Y),\omega Z) = g(\nabla_{X}Z,\phi Y)+g(A_{\omega QY}X,Z)+g(A_{\omega RY}X,Z)\] \[= (Xlnf)g(Z,\phi Y)+g(\sigma(X,Z),\omega QY)+g(\sigma(X,Z),\omega RY).\] Using the orthogonality of vector fields, we get \[g(\sigma(X,Y),\omega Z)=g(\sigma(X,Z),\omega QY)+g(\sigma(X,Z),\omega RY).\] Hence, the proof of lemma follows. **Lemma 5.2**.: _Let \(M=M_{1}\times_{f}M_{2}\), where \(M_{2}=M_{\theta_{1}}\times M_{\theta_{2}}\) be a warped product quasi bi-slant submanifold of a Kaehler manifold \(\bar{M}\). Then, we have_ \[g(\sigma(X,Z),\omega W)=g(\sigma(X,W),\omega QZ)+g(\sigma(X,W),\omega RZ), \tag{5.3}\] _for any \(X\in\Gamma(M_{1})\) and \(Z,W\in\Gamma(M_{2})\)._ Proof.: For any \(X\in\Gamma(M_{1})\) and \(Z,W\in\Gamma(M_{2})\), we have \[g(\sigma(X,Z),\omega W) = g(\bar{\nabla}_{X}Z,JW)-g(\nabla_{X}Z,\phi W)\] \[= -g(\bar{\nabla}_{X}JZ,W)-g(\nabla_{X}Z,\phi W).\] Using (2.4), (3.1) and (3.2), we obtain \[g(\sigma(X,Z),\omega W) = -g(\bar{\nabla}_{X}\phi QZ,W)-g(\bar{\nabla}_{X}\omega QZ,W)-g( \bar{\nabla}_{X}\phi RZ,W)\] \[-g(\bar{\nabla}_{X}\omega RZ,W)-g(\nabla_{X}Z,\phi W).\] On further simplification, we arrive at \[g(\sigma(X,Z),\omega W) = -(\phi QZInf)g(X,W)+g(A_{\omega QZ}X,W)-(\phi RZInf)g(X,W)\] \[+g(A_{\omega RZ}X,W)-(Zlnf)g(X,W).\] Using orthogonality of vector fields, we get \[g(\sigma(X,Z),\omega W)=g(A_{\omega QZ}X,W)+g(A_{\omega RZ}X,W).\] Using (2.6), we have \[g(\sigma(X,Z),\omega W)=g(\sigma(X,W),\omega QZ)+g(\sigma(X,W),\omega RZ).\] Hence, the proof of lemma is complete. ## 6 Main Result **Theorem 6.1**.: _Let \(M=M_{1}\times_{f}M_{2}\) where \(M_{2}=M_{\theta_{1}}\times M_{\theta_{2}}\) be a warped product quasi bi-slant submanifold with bi-slant angles \(\{\theta_{1},\theta_{2}\}\) in a Kaehler manifold \(\bar{M}\). Then one of the following two cases must occur:_ _(1) The warping function \(f\) is constant i.e., \(M\) is Riemannian product;_ _(2) \(\theta_{2}=\frac{\pi}{2}\), i.e., \(M\) is a warped product quasi hemi-slant submanifold such that \(M_{\theta_{2}}\) is a totally real submanifold \(M_{\perp}\) of \(\bar{M}\)._ Proof.: Let \(M=M_{1}\times_{f}M_{2}\) where \(M_{2}=M_{\theta_{1}}\times M_{\theta_{2}}\) be a warped product quasi bi-slant submanifold of a Kaehler manifold \(\bar{M}\) with bi-slant angles \(\{\theta_{1},\theta_{2}\}\). Then, we have \[g(\sigma(X,Z),\omega W)=g(\bar{\nabla}_{Z}X,JW)-g(\nabla_{Z}X,\phi W), \tag{6.1}\] for any \(X\in\Gamma(M_{1})\) and \(Z,W\in\Gamma(M_{2})\). Thus, by using (2.1)-(2.4) and (5.1), we obtain \[g(\sigma(X,Z),\omega W) = -g(\bar{\nabla}_{Z}JX,W)-(Xlnf)g(Z,\phi W)\] \[= -g(\bar{\nabla}_{Z}\phi PX,W)-g(\bar{\nabla}_{Z}\phi QX,W)-g(\bar {\nabla}_{Z}\omega QX,W)\] \[-g(\bar{\nabla}_{Z}\phi RX,W)-g(\bar{\nabla}_{Z}\omega RX,W)-(Xlnf )g(Z,\phi W).\] Thus, it follows from (2.5) and (5.1) \[g(\sigma(X,Z),\omega W) = -(\phi PXlnf)g(Z,W)-(\phi QXlnf)g(Z,W)\] \[+g(A_{\omega QX}Z,W)-(\phi RXlnf)g(Z,W)\] \[+g(A_{\omega RX}Z,W)-(Xlnf)g(Z,\phi W).\] Using (2.6), we arrive at \[g(\sigma(X,Z),\omega W) = -(\phi PXlnf)g(Z,W)-(\phi QXlnf)g(Z,W) \tag{6.2}\] \[+g(\sigma(Z,W),\omega QX)-(\phi RXlnf)g(Z,W)\] \[+g(\sigma(Z,W),\omega RX)-(Xlnf)g(Z,\phi W).\] Interchanging \(Z\) by \(W\) in (6.2) and using (2.2), we get \[g(\sigma(X,W),\omega Z) = -(\phi PXlnf)g(Z,W)-(\phi QXlnf)g(Z,W) \tag{6.3}\] \[+g(\sigma(Z,W),\omega QX)-(\phi RXlnf)g(Z,W)\] \[+g(\sigma(Z,W),\omega RX)+(Xlnf)g(Z,\phi W).\] Subtracting (6.2) from (6.3) and by applying Lemma 5.2, we arrive at \[(Xlnf)g(Z,\phi W)=0. \tag{6.4}\] Again, after interchanging \(W\) by \(\phi W\) in (6.4) and using Lemma 3.4, we get \[(\cos^{2}\theta_{2})(Xlnf)g(Z,W)=0.\] Therefore, either \(f\) is constant or \(\cos\theta_{2}=0\) holds. Consequently, either \(M\) is a Riemannian product manifold or \(\theta_{2}=\frac{\pi}{2}\). In the second case, \(M\) is a warped product quasi hemi-slant submanifold which hasbeen studied in [24]. ## 7 Some examples on warped product quasi bi-slant submanifolds of Kaehler manifold. _Example 1_.: Let \(\mathbb{E}^{2n}\) be the Euclidean \(2n\)-space with the standard metric and let \(\mathbb{C}^{n}\) denotes the complex Euclidean \(n\)-space \((\mathbb{E}^{2n},J)\) equipped with the canonical complex structure \(J\) defined as \[J\bigg{(}\frac{\partial}{\partial x_{i}}\bigg{)}=\frac{\partial}{\partial y_{ i}},\hskip 14.226378ptJ\bigg{(}\frac{\partial}{\partial y_{j}}\bigg{)}=- \frac{\partial}{\partial x_{j}},1\leq i,j\leq n.\] Consider a submanifold \(M\) of \(\mathbb{C}^{6}\) defined by \[\chi(u,v,w,r,s,t) = (u\cos\theta_{1},v\cos\theta_{1},u\sin\theta_{1},v\sin\theta_{1}, w\cos\theta_{2},r\cos\theta_{2},\] \[w\sin\theta_{2},r\sin\theta_{2},-u-w+v+r,u+w+v+r,s,t).\] It is easy to see that the tangent bundle \(TM\) of \(M\) is spanned by the following vectors \[Z_{1}=\cos\theta_{1}\frac{\partial}{\partial x_{1}}+\sin\theta_{1}\frac{\partial }{\partial x_{2}}-\frac{\partial}{\partial x_{5}}+\frac{\partial}{\partial y_{5 }},\] \[Z_{2}=\cos\theta_{1}\frac{\partial}{\partial y_{1}}+\sin\theta_{1}\frac{\partial }{\partial y_{2}}+\frac{\partial}{\partial x_{5}}+\frac{\partial}{\partial y_ {5}},\] \[Z_{3}=\cos\theta_{2}\frac{\partial}{\partial x_{3}}+\sin\theta_{2}\frac{ \partial}{\partial x_{4}}-\frac{\partial}{\partial x_{5}}+\frac{\partial}{ \partial y_{5}},\] \[Z_{4}=\cos\theta_{2}\frac{\partial}{\partial y_{3}}+\sin\theta_{2}\frac{ \partial}{\partial y_{4}}+\frac{\partial}{\partial x_{5}}+\frac{\partial}{ \partial y_{5}},\] \[Z_{5}=\frac{\partial}{\partial x_{6}},\hskip 14.226378ptZ_{6}=\frac{\partial}{ \partial y_{6}}.\] Then, clearly we obtain \[JZ_{1}=\cos\theta_{1}\frac{\partial}{\partial y_{1}}+\sin\theta_{1}\frac{ \partial}{\partial y_{2}}-\frac{\partial}{\partial y_{5}}-\frac{\partial}{ \partial x_{5}},\] \[JZ_{2}=-\cos\theta_{1}\frac{\partial}{\partial x_{1}}-\sin\theta_{1}\frac{ \partial}{\partial x_{2}}+\frac{\partial}{\partial y_{5}}-\frac{\partial}{ \partial x_{5}},\] \[JZ_{3}=\cos\theta_{2}\frac{\partial}{\partial y_{3}}+\sin\theta_{2}\frac{ \partial}{\partial y_{4}}-\frac{\partial}{\partial y_{5}}-\frac{\partial}{ \partial x_{5}},\] \[JZ_{4}=-\cos\theta_{2}\frac{\partial}{\partial x_{3}}-\sin\theta_{2}\frac{ \partial}{\partial x_{4}}+\frac{\partial}{\partial y_{5}}-\frac{\partial}{ \partial x_{5}},\] \[JZ_{5}=\frac{\partial}{\partial y_{6}},\hskip 14.226378ptJZ_{6}=-\frac{\partial}{ \partial x_{6}}.\] Then, we find that \(D=span\{Z_{5},Z_{6}\}\) is an invariant distribution, \(D_{1}=span\{Z_{1},Z_{2}\}\) is a proper slant distribution with slant angle \(\theta_{1}=\cos^{-1}(\frac{1}{3})\) and \(D_{2}=span\{Z_{3},Z_{4}\}\) is again a proper slant distribution with slant angle \(\theta_{2}=\cos^{-1}(\frac{1}{3}).\) Hence the submanifold \(M\) defined by \(\chi\) is a proper quasi bi-slant submanifold of \(\mathbb{C}^{6}.\) It is easy to verify that \(D\) and \(D_{1}\oplus D_{2}\) are integrable. If we denote the integrable manifolds of \(D\), \(D_{1}\) and \(D_{2}\) by \(M_{T}\), \(M_{\theta_{1}}\) and \(M_{\theta_{2}}\), respectively. Then the metric tensor \(g\) of product manifold \(M\) is given by \[ds^{2} = ds^{2}+dt^{2}+3(du^{2}+dv^{2})+3(dw^{2}+dr^{2})\] \[= g_{M_{T}}+3g_{M_{2}},\] such that, \[g_{M_{T}}=ds^{2}+dt^{2}\hskip 14.226378ptand\hskip 14.226378ptg_{M_{2}}=3(du^{2}+ dv^{2})+dw^{2}+dr^{2},\] where \(M_{2}=M_{\theta_{1}}\times M_{\theta_{2}}.\) In this case the warping function \(f=\sqrt{3}\) a constant, and hence \(M\) is simply a Riemannian product. _Example 2_.: Consider a submanifold \(M\) of \(\mathbb{C}^{5}\) defined by \[\chi(u,v,w,s,t)=(v\cos u,w\cos u,v\sin u,w\sin u,-v+w,v+w,0,0,s,t),\] with almost complex structure \(J\) defined by \[J\biggl{(}\frac{\partial}{\partial x_{i}}\biggr{)}=\frac{\partial}{\partial y _{i}},\hskip 14.226378ptJ\biggl{(}\frac{\partial}{\partial y_{j}}\biggr{)}=- \frac{\partial}{\partial x_{j}},1\leq i,j\leq 5.\] It is easy to see that its tangent space \(TM\) of \(M\) is spanned by the following vectors \[v_{1}=-v\sin u\frac{\partial}{\partial x_{1}}+v\cos u\frac{\partial}{\partial x _{2}}-w\sin u\frac{\partial}{\partial y_{1}}+w\cos u\frac{\partial}{\partial y _{2}},\] \[v_{2}=\cos u\frac{\partial}{\partial x_{1}}+\sin u\frac{\partial}{\partial x _{2}}-\frac{\partial}{\partial x_{3}}+\frac{\partial}{\partial y_{3}},\] \[v_{3}=\frac{\partial}{\partial x_{3}}+\cos u\frac{\partial}{\partial y_{1}}+ \sin u\frac{\partial}{\partial y_{2}}+\frac{\partial}{\partial y_{3}},\] \[v_{4}=\frac{\partial}{\partial x_{5}},\hskip 14.226378ptv_{6}=\frac{\partial}{ \partial y_{5}}.\] Then, we have \[Jv_{1}=-v\sin u\frac{\partial}{\partial y_{1}}+v\cos u\frac{\partial}{\partial y _{2}}+w\sin u\frac{\partial}{\partial x_{1}}-w\cos u\frac{\partial}{\partial x _{2}},\] \[Jv_{2}=\cos u\frac{\partial}{\partial y_{1}}+\sin u\frac{\partial}{\partial y _{2}}-\frac{\partial}{\partial y_{3}}-\frac{\partial}{\partial x_{3}},\] \[Jv_{3}=\frac{\partial}{\partial y_{3}}-\cos u\frac{\partial}{\partial x_{1}}- \sin u\frac{\partial}{\partial x_{2}}-\frac{\partial}{\partial x_{3}}.\] \[Jv_{4}=\frac{\partial}{\partial y_{5}},\hskip 14.226378ptJv_{5}=-\frac{\partial}{ \partial x_{5}}.\] Let us put \(D=span\{v_{4},v_{5}\}\) is an invariant distribution, \(D_{1}=span\{v_{2},v_{3}\}\) a proper slant distribution with slant angle \(\theta_{1}=\cos^{-1}\left(\frac{1}{3}\right)\) and \(D_{2}=span\{v_{1}\}\) an anti-invariant distribution with slant angle \(\theta_{2}=\frac{\pi}{2}\). Hence the submanifold \(M\) defined by \(\chi\) is a quasi bi-slant submanifold of \(\mathbb{C}^{5}\). It is easy to verify that \(D\) and \(D_{1}\oplus D_{2}\) are integrable. If we denote the integrable manifolds of \(D\), \(D_{1}\) and \(D_{2}\) by \(M_{T}\), \(M_{\theta_{1}}\) and \(M_{\perp}\), respectively. Then the metric tensor \(g\) of product manifold \(M\) is given by \[ds^{2}=ds^{2}+dt^{2}+3(dv^{2}+dw^{2})+(v^{2}+w^{2})du^{2},\] such that \[g_{M_{T}}=ds^{2}+dt^{2}\hskip 14.226378ptand\hskip 14.226378ptg_{M_{2}}=3(dv^{2} +dw^{2})+du^{2},\] where \(M_{2}=M_{\theta_{1}}\times M_{\perp}\). In this case the warping function \(f=\sqrt{v^{2}+w^{2}}\) and hence \(M\) is a case of warped product quasi hemi-slant submanifold. So we have discussed both the case of Theorem 6.1. **Data Availability Statement**: The authors declare that this research is purely theoretical and does not associate with any datas. **Conflicts of Interest**: The authors declare that they have no conflict of interest, regarding the publication of this paper.
2309.08530
From Hubble to Bubble
The detection of a stochastic Gravitational Wave (GW) background sourced by a cosmological phase transition would allow us to see the early Universe from a completely new perspective, illuminating aspects of Beyond the Standard Model (BSM) physics and inflationary cosmology. In this study, we investigate whether the evolution of the scalar potential of a minimal SM extension after inflation can lead to a strong first-order phase transition. In particular, we focus on a BSM spectator scalar field that is non-minimally coupled to gravity and has a dynamical double-well potential. As inflation ends, the potential barrier diminishes due to the evolution of the curvature scalar. Therefore, a phase transition can proceed through the nucleation of true-vacuum bubbles that collide as they fill the Universe and produce GWs. We consider high and low scales of inflation, while also taking into account a kination period between inflation and the onset of radiation domination. With this prescription, we showcase a proof-of-concept study of a new triggering mechanism for BSM phase transitions in the early Universe, whose GW signatures could potentially be probed with future detectors.
Maciej Kierkla, Giorgio Laverda, Marek Lewicki, Andreas Mantziris, Matteo Piani, Javier Rubio, Mateusz Zych
2023-09-15T16:46:41Z
http://arxiv.org/abs/2309.08530v1
# From Hubble to Bubble ###### Abstract The detection of a stochastic Gravitational Wave (GW) background sourced by a cosmological phase transition would allow us to see the early Universe from a completely new perspective, illuminating aspects of Beyond the Standard Model (BSM) physics and inflationary cosmology. In this study, we investigate whether the evolution of the scalar potential of a minimal SM extension after inflation can lead to a strong first-order phase transition. In particular, we focus on a BSM spectator scalar field that is non-minimally coupled to gravity and has a dynamical double-well potential. As inflation ends, the potential barrier diminishes due to the evolution of the curvature scalar. Therefore, a phase transition can proceed through the nucleation of true-vacuum bubbles that collide as they fill the Universe and produce GWs. We consider high and low scales of inflation, while also taking into account a kination period between inflation and the onset of radiation domination. With this prescription, we showcase a proof-of-concept study of a new triggering mechanism for BSM phase transitions in the early Universe, whose GW signatures could potentially be probed with future detectors. ## 1 Introduction There exists a long-lasting link between first-order phase transitions and the very early history of the Universe. This relationship stretches from the first works on inflation of the early 80s [1; 2; 3] up to the modern interpretation as beyond-the-standard-model phenomenology. Indeed, nowadays the most compelling settings for such transitions are realised at the quark-hadron transition scale [4; 5] and the electroweak scale (EW) [6; 7]. Given the thermalised state of the Universe at both energy scales, the evolution of true-vacuum bubbles proceeds with a non-trivial interplay between the scalar sector and the primordial plasma [8]. The end results can vary and constitute the main motivation to investigate first-order phase transitions: from the generation of baryon asymmetry [9; 10; 11; 12; 13] through the production of a stochastic background of gravitational waves [6; 7; 14; 15] to the seeding of primordial magnetic fields [16; 17; 18; 19; 20] and primordial black holes [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. The standard picture of a thermal phase transition [2; 33] entails the nucleation of bubbles, their collision, the interaction between bubbles and plasma, and the final percolation in a thermalised medium. The potential barrier responsible for the separation between vacua is typically generated by thermal contributions to the scalar effective potential and tunnelling becomes efficient as the Universe cools down to the energy scale of the process. In this work, we present an innovative and flexible mechanism beyond the traditional thermal picture to trigger a First-Order Phase Transition (FOPT), where the formation of true-vacuum bubbles is simply induced by the evolution of the Hubble rate in the primordial Universe. The field content of the model is kept to a minimum by considering only the inflaton and a subdominant spectator scalar field. The inflaton is responsible for the overall background cosmological dynamics during and immediately after inflation. The spectator field is a prototypical component of a beyond-the-standard-model sector, and it is characterised by a direct coupling to the background curvature. At the heart of the mechanism is precisely the non-minimal interaction between quantum fields and the geometry of spacetime at high energies, which constitutes a simple cosmic clock that sets off the phase transition soon after inflation (see for instance Refs. [34; 35; 36]). Indeed, at the end of the slow-roll phase, the overall cosmological equation of state (e.o.s.) evolves from \(w=-1\) to \(w>-1\) since the inflationary kinetic energy density becomes dominant. The spectator field experiences the change of e.o.s. as a variation of its time-dependent effective mass. A first-order vacuum phase transition occurs if the decreasing effective mass becomes comparable to the negative cubic self-coupling of the spectator field. Bubbles of true vacuum form via tunnelling through the potential barrier, and in turn, gravitational waves are produced by bubble-wall collisions. This novel mechanism is minimal and natural, as it relies only on the presence of a Hubble-dependent effective mass of the spectator field. Its simplicity allows the mechanism to be easily included in the phenomenology of the early Universe independently of the specific choice of inflationary scenario. This feature makes it a flexible tool for studying the production of stochastic backgrounds of gravitational waves in a wide range of frequencies, where the spectrum's peak frequency is determined by the energy scale at the time of the transition. For simplicity, in this work we focus on the prototypical case of a spectator field non-minimally coupled to the Ricci curvature, a setup analogous to Hubble-induced second-order phase transitions [37; 38; 34] and related scenarios [39; 40; 41; 42; 43; 44; 45; 46]. The spectator potential contains the usual renormalisable operators up to fourth order, a minimal choice that still endows all the necessary features for the desired phenomenology. The background dynamics are defined by a quintessential inflation-like scenario with an epoch of kinetic domination (\(w=1\)) following the end of the slow-roll phase (see Ref. [47] for a review). This choice comes with the benefits of realising one single first-order phase transition and of amplifying the gravitational waves signal throughout kination. We assume a specific sigmoid function for the evolution of the equation-of-state parameter \(w(t)\) with one free parameter determining the speed of the shift from \(w=-1\) to \(w=1\). The nucleation rate and the energy released as bubbles can be computed from the O(4)-symmetric Euclidean action for the critical bubble [48] and indicate that the transition happens rapidly, even before the Universe fully enters the kination epoch. We then estimate the typical size of the bubbles at percolation time, which ultimately allows us to study the characteristics of the gravitational wave spectrum produced by wall collisions. Its peak frequency and amplitude are completely determined by a few parameters: the Hubble scale at the end of inflation, the speed of the shift from inflation to kination, the typical bubble size at percolation and the strength of the transition proportional to the energy difference between the vacua. Finally, we discuss the implications of such curvature-induced phase transitions for the upcoming gravitational wave observatories. This work is organised as follows. Section 2 introduces the evolution of the cosmological background and the non-minimal interaction of the BSM scalar sector with gravity. There, the spectator potential is defined alongside a parameterisation for the inflation-kination transition. Section 3 focuses on the nucleation process and the production of gravitational waves through bubble-wall collisions. Section 4 presents the results of scanning the available parameter space and shows the shape of the estimated gravitational-waves spectra at present time. In Section 5 we discuss our findings and summarise the results. Cosmological evolution of the scalar potential First-order phase transitions are a common feature of many theories beyond the Standard Model [8]. In the context of cosmological phase transitions, the evolution of a scalar field is typically characterised by a change from an initial high-temperature phase in which the symmetry is restored to the global minimum of scalar potential. In the most common case, thermally driven transitions are considered; however, a FOPT can also proceed through quantum tunnelling, where the impact of temperature corrections is negligible as long as the scalar potential has all the necessary properties. To illustrate this type of transition we consider a minimal extension of the Standard Model, featuring an extra scalar singlet \(\chi\), endowed with a renormalisable potential and a non-minimal coupling to gravity. In the action \[S=\int d^{4}x\sqrt{-g}\left[\frac{M_{P}^{2}-\xi\chi^{2}}{2}\mathcal{R}-\frac{1 }{2}\partial_{\mu}\chi\partial^{\mu}\chi-\frac{m^{2}}{2}\chi^{2}+\frac{\sigma} {3}\chi^{3}-\frac{\lambda}{4}\chi^{4}\right]\,, \tag{1}\] \(M_{P}=2.435\times 10^{18}\) GeV is the reduced Planck mass, \(\xi\) the non-minimal coupling1, \(\mathcal{R}\) the Ricci scalar, \(m\) the mass of the field in flat spacetime, and \(\sigma\) and \(\lambda\) the cubic and quartic self-couplings, respectively. All of the model parameters are taken to be real and positive. The motivation for the inclusion of the non-minimal coupling to curvature is two-fold. Firstly, its presence is necessary to ensure the renormalisability of the energy-momentum tensor in curved space-time [49]. Secondly, such a term is fully compliant with the symmetries of the action, and it will arise from radiative corrections, even if it is set to zero at a specific scale. For a flat FLRW metric, the Ricci scalar is given by Footnote 1: As this action does not feature any \(Z_{2}\) symmetry, one could introduce a second non-minimal coupling \(g\phi\mathcal{R}\) for completeness. Since this operator affects only the position of the vacua, it will be neglected in what follows. \[\mathcal{R}=3(1-3w)H^{2}\,, \tag{2}\] where \(w=p/\rho\) is the e.o.s. parameter relating the pressure and energy density of the cosmic fluid, according to the dominating component of the energy-momentum tensor. Assuming that, in the initial stage of the cosmic evolution, the field \(\chi\) is subdominant and does not affect the dynamics of \(w\), we can treat the non-minimal coupling as a contribution to the effective mass. Thus, the effective potential is given by \[V_{\rm eff}=\frac{M^{2}}{2}\chi^{2}-\frac{\sigma}{3}\chi^{3}+\frac{\lambda}{4 }\chi^{4}\,,\hskip 28.452756ptM^{2}=m^{2}+\xi\mathcal{R}\,. \tag{3}\] It is clear from Eq. (2) that an increasing e.o.s. parameter corresponds to a decreasing effective mass, which becomes negative for fluids stiffer than radiation \(w>1/3\). In the regime \(0<M^{2}<2\sigma^{2}/(9\lambda)\), the effective potential in Eq. (3) has a false vacuum in \(\chi=0\) and a true vacuum at \[\chi_{\rm tv}=\frac{\sigma}{2\lambda}\left(1+\sqrt{1-\frac{4M^{2}\lambda}{ \sigma^{2}}}\right)\,. \tag{4}\] The existence of two different vacua sets the stage for a cosmological first-order phase transition in the post-inflationary epoch. During inflation, the inflaton field is slowly rolling down an almost flat potential with negligible kinetic energy. In this regime, the e.o.s parameter is \(w_{\rm inf}=-1\), the Hubble parameter is approximately constant, and the Ricci scalar gives a positive contribution to the effective mass so that \(M^{2}\) is maximal. As inflation ends, the inflaton leaves the plateau region of its potential and starts to roll down towards its minimum, until all potential energy is converted into kinetic and \(w_{\rm kin}=1\). In this context, our study is performed under the following simplifying assumptions: 1. The field \(\chi\) remains energetically subdominant until the phase transition takes place. 2. The mass of the field in flat spacetime is much smaller than the gravitational contribution and thus, we can safely set \(m=0\). 3. A period of kinetic domination (kination) follows the end of inflation. The first condition ensures that the field does not play any role in the initial dynamics of the e.o.s. parameter \(w(t)\), allowing for a model-independent parameterisation of its evolution. The second condition ensures that the potential always enters a broken phase until it eventually turns tachyonic around the origin (\(\chi=0\)) when \(w=1/3\). In principle, one could consider a non-negligible value for \(m\) but this would lead to a more complicated analysis due to the non-trivial interplay between \(m\), \(\xi\) and \(H\). The third condition, albeit not strictly necessary, offers some conceptual advantages. For typical models of single-field inflation, the inflaton starts oscillating around its minimum at the end of the slow-roll phase and the e.o.s parameter will periodically change value between \(-1\leq w\leq 1\). In this case, the potential of the spectator field would also periodically shift between phases of broken and restored symmetry. In doing so, a series of tunnelling and rolling events might ensue, but the study of their non-trivial dynamics goes beyond the scope of this work. Moreover, in the absence of any heating mechanism, the radiation produced during the phase transition will grow over the kination-dominated background, and therefore it will naturally fix the duration of the heating stage. In the following analysis, we will assume that the Universe reaches thermal equilibrium as soon as it enters the radiation domination (RD) epoch. Parameterising the evolution of the global e.o.s. parameter as \[w(t)=\tanh\left(\beta_{w}(t-t_{0})\right), \tag{5}\] allows us to perform explicit analytical computations, where \(t_{0}\) is the time at which \(w=0\) and the free parameter \(\beta_{w}>0\) controls the speed of the transition between boundary values. The corresponding Hubble rate is given by \[H(t)=\left[\frac{3}{2}\left(t+\frac{1}{\beta_{w}}\ln\left[\cosh\left(\beta_{w} (t-t_{0})\right)\right]+c\right)\right]^{-1}\,, \tag{6}\] where \(c=\frac{2}{3H_{\rm inf}}+\frac{\ln 2}{\beta_{w}}-t_{0}\) is an integration constant and \(H_{\rm inf}=H(t\to-\infty)\) is the scale of inflation. For convenience, we rewrite this expression in terms of Hubble time as \[\frac{H(t)}{H_{\rm inf}}=\frac{2}{3}\left(H_{\rm inf}(t-t_{0})+\left(\frac{ \beta_{w}}{H_{\rm inf}}\right)^{-1}\ln\left[2\cosh\left(\left(\frac{\beta_{w }}{H_{\rm inf}}\right)H_{\rm inf}(t-t_{0})\right)\right]+\frac{2}{3}\right)^{- 1}\,. \tag{7}\] Figure 1 displays the time evolution of \(H(t)\), \(w(t)\) and \(R(t)\) according to the chosen parameterisation in Eq. (5). Bubble nucleation and gravitational wave production from collisions In order to calculate the necessary parameters describing the vacuum transition one has to be able to fully describe the evolution of scalar potential and then calculate the tunnelling probability. For the scope of this work, we shall adopt a semi-analytical method described in [50]. The scalar potential in Eq. (3) can be expressed in the reduced dimensionless form as \[\widetilde{V}(\tilde{\varphi},t)=\frac{1}{4}\tilde{\varphi}^{4}-\tilde{\varphi} ^{3}+\frac{\delta(t)}{2}\tilde{\varphi}^{2}\,, \tag{10}\] where \[\tilde{\varphi}=\frac{3\lambda}{\sigma}\chi\qquad\text{and}\qquad\delta(t)= \frac{9M(t)^{2}\lambda}{\sigma^{2}}\,. \tag{11}\] The parameter \(\delta\) fully controls the evolution of the potential and therefore, it determines when the transition takes place. Its value varies from 2 to 0, as it follows the evolution of the effective mass term from degenerate minima to a vanishing barrier, respectively. In this formalism, the position of the true vacuum minimum (4) reads \[\chi_{\text{tv}}=\frac{\sigma}{2\lambda}\left(1+\sqrt{1-\frac{4\delta}{9}} \right)\,. \tag{12}\] Therefore, the energy difference \(\Delta V\) between the minima can be expressed as \[\Delta V=V(0)-V(\chi_{\text{tv}})=\frac{\sigma^{4}}{96\lambda^{3}}\left(1+ \sqrt{1-\frac{4\delta}{9}}\right)^{2}\left(1-\frac{2\delta}{3}+\sqrt{1-\frac{4 \delta}{9}}\right). \tag{13}\] As we shall see in the following section, the dynamics of the phase transition depend mainly on \(\sigma\) and \(\lambda\), while the strength of the gravitational wave signal depends on the details of the inflationary model and on the non-minimal coupling to gravity, i.e on \(\beta_{w}\) and \(\xi\). Figure 1: _The evolution of the e.o.s. parameter, the Hubble rate and the Ricci scalar from inflation to kination. The rate of change is set to \(\beta_{w}/H_{\text{inf}}=1\). The shaded area denotes the period in which a FOPT can naturally occur._ The probability of quantum tunneling in vacuum is typically measured via the bubble nucleation rate [51, 52] \[\Gamma=\chi_{\rm tv}^{4}\left(\frac{S_{E}}{2\pi}\right)^{2}\exp(-S_{E})\,, \tag{10}\] where \(\chi_{\rm tv}\) is the vacuum expectation value that \(\chi\) acquires after tunnelling and \(S_{E}\) represents the Euclidean action associated with the \(O(4)\)-symmetric bounce solution \[S_{E}=2\pi^{2}\int_{0}^{\infty}\varrho^{3}{\rm d}\varrho\left[\left(\frac{1}{2 }\frac{{\rm d}\chi}{{\rm d}\varrho}\right)^{2}+V\right]\,. \tag{11}\] We avoid using well-known expressions that contain the radius of the critical bubble [2, 33, 48, 53, 54] as the initial radius is not well defined in the limit of vanishing barrier, where the assumption about the thin-wall profile is not valid. Using an analytical formula derived from a numerical fit [50, 55], the Euclidean action of the critical bubble can be expressed as \[S_{E}=\frac{4\pi^{2}}{3\lambda}\frac{\alpha_{1}\delta+\alpha_{2}\delta^{2}+ \alpha_{3}\delta^{3}}{(2-\delta)^{3}}\,, \tag{12}\] with \(\alpha_{1}=13.832\), \(\alpha_{2}=-10.819\), \(\alpha_{3}=2.0765\), and \(\delta\) the potential-dependent coefficient defined in (11). We have numerically verified the validity of this approximation at the limit of the vanishing barrier. The nucleation time \(t_{n}\) is defined by the condition that, on average, one bubble per horizon is nucleated [54], namely \[\int_{t_{c}}^{t_{n}}dt\frac{\Gamma(t)}{H(t)^{3}}=1\,, \tag{13}\] where \(t_{c}\) represents the moment when the two minima are degenerate. Assuming a constant Hubble rate during nucleation, the condition (13) reduces to \(\Gamma(t_{n})=H(t_{n})^{4}\), which will be used in this work. The moment \(t_{*}\) when the transition completes is usually estimated as the moment of percolation [54, 56, 57, 58]. However, for relatively fast transitions \(t_{*}\simeq t_{n}\) is a good approximation, which we adopt for simplicity. The evolution of the decay rate and the scalar potential for benchmark values of the couplings are shown in Figure 2. An important parameter of the phase transition is its strength \(\alpha\), which scales according to the amount of vacuum energy released with respect to the energy of the cosmological background [55, 8]. During the period of kination, the strength of the transition is given by \[\alpha\equiv\frac{\rho_{V}}{\rho_{\rm kin}}=\frac{\Delta V}{3M_{P}^{2}H_{\rm inf }^{2}}\approx\frac{\sigma^{4}\left[1+\mathcal{O}(\delta)\right]}{36\lambda^{3 }M_{P}^{2}H_{\rm inf}^{2}}\,. \tag{14}\] Another crucial parameter for describing the phase transition, and the resulting GW spectrum, is its time or length scale, which is associated with the inverse duration \(\beta\) of the transition, \[\beta=-\frac{d}{dt}S_{E}\Big{|}_{t=t_{n}}\,. \tag{15}\] In order to derive the value of the \(\beta\) parameter, one needs to specify the actual realisation of the temporal dependence of the \(\delta\) parameter. We define \(\xi_{\rm min}\) in such a way that it is the smallest possible non-minimal coupling ensuring that during inflation the true vacuum is located at \(\phi=0\), and the secondary minimum, if present, is at most degenerate with the former \[\xi\geq\xi_{\rm min}=\frac{\sigma^{2}}{54\lambda H_{\rm inf}^{2}}\,. \tag{23}\] Taking into account that the time-dependent mass is given by \(M(t)=\xi\mathcal{R}(t)\) together with the parameterisation for the e.o.s. in Eq. (5), we can derive an analytic expression for \(\beta_{*}\) in terms of the value of \(\delta\) at the time of nucleation (see Figure 3), \[\beta_{*} =\frac{108\pi^{2}\beta_{w}H_{*}^{2}\xi}{\sigma^{2}\cosh^{2}( \beta_{w}(t_{*}-t_{0}))(2-\delta_{*})^{3}}\left[(\alpha_{1}+2\alpha_{2}\delta_ {*}+3\alpha_{3}\delta_{*}^{2})+\frac{3\left(\alpha_{1}\delta_{*}+\alpha_{2} \delta_{*}^{2}+\alpha_{3}\delta_{*}^{3}\right)}{2-\delta_{*}}\right]=\] \[=\frac{108\pi^{2}\beta_{w}H_{*}^{2}(2\alpha_{1}(1+\delta_{*})+ \delta_{*}(6\alpha_{3}\delta_{*}+\alpha_{2}(4+\delta_{*})))\xi\left(1-\left( \frac{1}{3}-\frac{\delta_{*}\sigma^{2}}{81\lambda\xi H_{*}^{2}}\right)^{2} \right)}{(-2+\delta_{*})^{4}\sigma^{2}}\geq\] \[\geq\frac{8\pi^{2}\beta_{w}(\delta_{*}+1)(2\alpha_{1}(\delta_{*}+1 )+\delta_{*}(\alpha_{2}(\delta_{*}+4)+6\alpha_{3}\delta_{*}))}{9(2-\delta_{*}) ^{3}\lambda}\,,\] where we have removed the explicit time dependence in the second line by inverting the relation for \(t(M)\) from Eqns. (2) and (5), \[t_{*}-t_{0}=\beta_{w}^{-1}{\rm arctanh}\left(\frac{1}{3}-\frac{\delta_{*} \sigma^{2}}{81\lambda\xi H_{*}^{2}1}\right), \tag{24}\] and using hyperbolic trigonometric relations. The lower bound in the third line is obtained by fixing \(\xi\) to the minimal value from Eq. (23). The only viable source of gravitational wave signal generated during the non-thermal phase transitions are collisions between true vacuum bubbles. The spectrum of the signal produced at the time of the transition is Figure 2: _Left panel: Nucleation rate normalised to Hubble volume at the end of inflation for \(\sigma/H_{\rm inf}=100\) and different values of \(\lambda\). The grey points represent the numerical values obtained from the computation of the bubble profiles, while the colourful lines are the fits from the analytical approximation (7). Right panel: Temporal evolution of the scalar potential for the benchmark values \(\sigma/H_{\rm inf}=100\), \(\lambda=0.1\), and \(\delta_{*}=0.12\)._ described by [59] \[\Omega_{*}(f)=\bigg{(}\frac{\beta_{*}}{H_{*}}\bigg{)}^{-2}\left(\frac{\rho_{V}}{ \rho_{\rm total}}\right)^{2}\!\!S(f)\,, \tag{23}\] where the spectral shape \(S(f)\) is defined using a numerically-derived broken power law [23; 60] \[S(f)=25.10\left[2.41\left(\frac{f}{f_{*}}\right)^{-0.56}+2.34\left(\frac{f}{f_ {*}}\right)^{0.57}\right]^{-4.2}\,, \tag{24}\] with \(f_{*}=0.13\beta_{*}\) defined as the peak frequency at the time of production. The energy budget factor in Eq. (23) can be calculated using the transition strength parameter \(\alpha\) (see Eq.(19)), \[\frac{\rho_{V}}{\rho_{\rm total}}=\frac{\rho_{V}}{\rho_{V}+\rho_{\rm kin}}= \frac{\alpha}{\alpha+1}\,. \tag{25}\] Then, in order to obtain the present-day spectrum we have to redshift the signal, i.e. re-scale the amplitude and peak frequency. This must be done with caution since the transition takes place during the kination domination period. Splitting the redshift factor into a kination part and the regular radiation domination contribution results in [59] \[\Omega_{\rm GW,0} =\bigg{(}\frac{a_{*}}{a_{0}}\bigg{)}^{4}\bigg{(}\frac{H_{*}}{H_{ 0}}\bigg{)}^{2}\Omega_{*}=\bigg{(}\frac{a_{*}}{a_{\rm RD}}\bigg{)}^{4}\bigg{(} \frac{a_{\rm RD}}{a_{0}}\bigg{)}^{4}\bigg{(}\frac{H_{*}}{H_{\rm RD}}\bigg{)}^{2 }\bigg{(}\frac{H_{\rm RD}}{H_{0}}\bigg{)}^{2}\Omega_{*}\] \[=\frac{1.67\cross 10^{-5}\ {\rm Hz}}{h^{2}}\left(\frac{\alpha}{ \alpha+1}\right)^{2}\!\!\bigg{(}\frac{H_{*}}{H_{\rm RD}}\bigg{)}^{2}\frac{3w-1 }{3w+3}\left(\frac{\beta_{*}}{H_{*}}\right)^{-2}S(f)\,, \tag{26}\] \[f_{\rm peak,0} =\frac{a_{*}}{a_{0}}f_{*}=\frac{a_{\rm RD}H_{\rm RD}}{a_{0}} \frac{a_{*}H_{*}}{a_{\rm RD}H_{\rm RD}}\frac{f_{*}}{H_{*}}\] \[=1.65\cross 10^{-5}\ {\rm Hz}\ \bigg{(}\frac{T_{\rm RD}}{100\ { \rm GeV}}\bigg{)}\bigg{(}\frac{f_{*}}{H_{*}}\bigg{)}\bigg{(}\frac{H_{*}}{H_{\rm RD }}\bigg{)}^{\frac{3w-1}{3w+3}}\,. \tag{27}\] The temperature at the onset of radiation domination can be calculated straight from the Hubble parameter, under the assumption of instantaneous thermalisation, as \[T_{RD}=\left(3M_{p}^{2}\xi_{g}^{2}H_{\rm RD}^{2}\right)^{\frac{1}{4}}, \tag{28}\] where \(\xi_{g}=\sqrt{30/\pi^{2}g_{*}}\), and \(1\leq g_{*}\lesssim 106.75\) is the number of effective degrees of freedom at \(t_{\rm RD}\). As we do not specify the decay rate of the spectator field here, we will use the upper limit coming from the SM, assuming that \(\chi\) has already decayed into the SM plasma before radiation domination. Different scenarios can also be considered here, but the overall impact on the spectra will not be significant. Finally, one has to take into account the implications of the modified expansion for the spectral shape. In particular, for kination, the slope of the GW spectrum changes to \(\Omega_{\rm GW}(f)\propto f^{4}\)[61] for modes entering the horizon before going back to the standard \(\Omega_{\rm GW}(f)\propto f^{3}\)[62; 63; 64] as the radiation domination period begins, i.e. \[\Omega_{\rm GW}(f)\propto\begin{cases}S(f)&\quad\text{for $f\gtrsim\frac{a_{*}}{a_{0}} \frac{H_{*}}{2\pi}$}\,,\\ f^{4}&\quad\text{for $f\lesssim\frac{a_{*}}{a_{0}}\frac{H_{*}}{2\pi}$}\,,\\ f^{3}&\quad\text{for $f\lesssim\frac{a_{\rm RD}}{a_{0}}\frac{H_{\rm RD}}{2\pi}$}\,, \end{cases} \tag{29}\] with \(S(f)\) denoting the model-dependent spectral shape defined in Eq. (3.15). Notice that the factors present in the comoving Hubble radius \(\frac{a_{*}}{a_{0}}H_{*}\) can be easily computed by using Eq. (3.17). Moreover, the value of the parameter \(\alpha\) inevitably poses an upper bound on the duration of the kination epoch. Since \[\left.\frac{\rho_{\rm rad}}{\rho_{\rm kin}}\right|_{a=a_{\rm RD}}=\left(\frac{ \rho_{\rm rad,0}}{\rho_{\rm kin}}\right)a_{\rm RD}^{2}=\alpha a_{\rm RD}^{2}=1\,, \tag{3.21}\] we are able to express the ratio of Hubble parameters simply as \[\left(\frac{H_{*}}{H_{\rm RD}}\right)^{2}=\frac{\rho_{\rm kin}}{\rho_{\rm rad, RD}}=\frac{\rho_{\rm kin}}{\rho_{\rm rad,0}a_{\rm RD}^{-4}}=\alpha^{-3}\,. \tag{3.22}\] Now, combining these expressions with Eq. (3.17) and setting \(w=1\), we can obtain the following simple formulae for the current GW spectra and frequency, \[\Omega_{\rm GW,0} =\frac{1.67\crosscross 10^{-5}}{h^{2}}\alpha\left(\frac{\beta_{*}} {H_{*}}\right)^{-2}S(f)\,, \tag{3.23}\] \[f_{\rm peak,0} =1.65\cross 10^{-5}\ {\rm Hz}\ \frac{\sqrt[4]{3M_{P}^{2}\xi_{ 2}^{2}\alpha H_{*}^{2}}}{100\ {\rm GeV}}\bigg{(}\frac{f_{*}}{H_{*}}\bigg{)}\,, \tag{3.24}\] where we have dropped the negligible \((\alpha+1)^{-2}\) factor because the inflaton background energy density dominates at the time of the transition. ## 4 Parameter space and gravitational wave signal We present the scan of the parameter space of our model, displaying plausible regions leading to first-order phase transitions with strengths compatible with the assumption of initial kination domination. We performed a scan for the discussed scalar potential and computed the relevant parameters for the phase transition, as shown in Fig. 3. We restrict ourselves to the interval \(10^{-4}<\alpha<10^{-1}\). The lower bound comes from the fact that lower values of \(\alpha\) lead to very weak transitions. The upper bound is simply a requirement ensuring that the energy density of the field remains subdominant during the evolution of the e.o.s. parameter. In fact, if the difference \(\Delta V\) between the false and the true vacuum reaches values comparable to the background energy density, we would need to consider its contribution to the total energy-momentum tensor. Since it scales like vacuum energy, it would tend to restore the e.o.s. parameter to \(w=-1\). Similarly, we have focused on the values for which \(\beta_{w}/H<\beta_{*}/H_{*}<10^{4}\), where the hierarchy between the rate of the FOPT and the shift from inflation to kination is maintained. Higher values of \(\beta_{*}\) would lead to transitions that closely resemble second-order ones and would not produce any visible signal. Regarding the computation of the GW spectra, we consider a typical inflationary scale of \(H_{\rm inf}\sim 10^{12}\) GeV, and a lower scale, \(H_{\rm inf}\sim 10^{-8}\) GeV as shown in Figure 4. The change of the transition strength \(\alpha\) for constant inverse time scale \(\beta_{*}/H_{*}\) trivially affects the amplitude. However, due to the non-standard redshift scenario, larger values of \(\alpha\) translate t generically into higher frequencies. Varying the timescale \(\beta_{*}/H_{*}\) of transition produces the usual effect: the smaller value it has, the longer the transition is or the "bigger" the bubbles Figure 4: _Gravitational waves spectra from phase transitions triggered at the onset of kination. In the left panel, \(\alpha\) is treated as a free parameter while \(\beta_{*}/H_{*}=100\). On the right panel \(\alpha=0.01\), while \(\beta_{*}/H_{*}\) is treated as the free parameter. Solid lines represent typical results for the quintessential inflation (\(H_{\rm inf}\approx 10^{12}\) GeV), dashed ones the case of ultra-low inflationary scale (\(H_{\rm inf}\approx 10^{-8}\) GeV). The shaded, colourful regions denote the sensitivity ranges of future experiments._ Figure 3: _Strength of the transition \(\alpha\) (upper left panel), \(\delta\) parameter at the moment of nucleation (upper right panel), lower bound on non-minimal coupling \(\xi\) from Eq. (3.11) (lower left panel) and duration of the transition \(\beta_{*}/H_{*}\) evaluated for \(\xi_{\rm min}\) (lower right panel). The scans were performed for \(H_{*}=10^{12}\) GeV and \(\beta_{w}/H_{\rm inf}=1\)._ are, which results in larger amplitudes of the signal and lower peak frequencies. Moreover, in both cases one can observe the "drop" in the slope for the super-horizon modes as described in Eq. (3.20). This effect is important, since it not only affects detection prospects, but also because finding this feature in the observed GW spectrum would be evidence of a kination period in the early Universe. Figure 5 shows the spectra resulting from the scans we performed. Note that the signal from the transitions at the inflationary scales \(H_{*}\sim 10^{8}-10^{12}\) GeV is not in the reach of existing and future experiments. However, the signal can be within reach of future detectors such as the Einstein Telescope (ET) [65], the Big Bang Observer (BBO) [66] or even the Laser Interferometer Space Antenna (LISA) [67] for more exotic inflationary scenarios taking place closer to EW scale. ## 5 Conclusions The advent of the gravitational wave cosmology era presents us with an unprecedented possibility to explore the pre-CMB epoch, allowing us to perform a systematic search for new physics throughout energy scales. Both in the context of SM and BSM physics, first-order phase transitions embody the prototypical phenomenon that can produce a sizeable stochastic gravitational wave background that could be detected by future experiments. Given the broad range of possible frequencies and different modalities within which such transitions could take place, it is of the utmost importance to explore non-standard scenarios beyond those usually embedded in the SM, such as transitions at the EW or the QCD scale. Figure 5: _Gravitational waves spectra from phase transition triggered by onset of kination period. The signal comes from the actual points from the scan of parameter space (see Fig. 3). Solid lines represent typical results for the quintessential inflation (\(H_{\rm inf}=10^{12}\) GeV), dotted ones the case of ultra-low inflationary scale (\(H_{\rm inf}=10^{-8}\) GeV). The shaded, colourful regions denote sensitivity range of future experiments._ In this work, we analyzed the possibility of having a FOPT triggered by the change of the space-time geometry, contrary to standard scenarios where the clock for the transition is set by the thermal evolution of the Universe. In our case, the transition proceeds through quantum tunnelling to a true vacuum that dynamically appears due to the evolution of the e.o.s. parameter, and therefore the scalar curvature. In this context, we showed that a spectator field with a renormalisable potential and a non-minimal coupling to gravity can successfully lead to a phase transition under quite general conditions. As a proof of concept, we focused on the post-inflationary evolution of the Universe in the context of quintessential inflation. This choice provides an ideal setup to explore all the values of the e.o.s. in the range \(-1\leq w\leq 1\), while providing a subsequent enhancement of the signal during the kination period. Moreover, this kind of scenario naturally sets an upper bound on the duration of reheating, as the produced radiation will eventually overcome the kination background, without the requirement of specifying any coupling to the SM particles. Our results show that the strength \(\alpha\) of the transition depends only on the cubic and quartic terms of the potential, while its speed \(\beta\) is also regulated by the details of the time evolution of the e.o.s. parameter. We presented the predictions for gravitational wave spectra resulting from vacuum bubble collisions, taking into account recent developments from the simulations and the non-standard redshift for superhorizon modes caused by the kination period. We found the relation between the released energy i.e. strength of the transition and the peak frequency, and estimated the detection prospects for different energy scales of inflation. As expected for a typical inflationary scale \(H_{\rm inf}\sim 10^{12}\) GeV, the frequency of the signal lies outside of the observable range covered by current experiments. Moreover, we find that a signal compatible with LISA and BBO sensitivities would require an inflationary scale of the order \(H_{\rm inf}\sim 10^{-8}\) GeV, which is typically incompatible with quintessential inflationary scenarios. Our analysis constitutes a first step towards accurate predictions for expansion-driven phase transitions. However, there are multiple improvements that can be made in future studies. In our approach we assumed that the flat space-time limit of the field mass \(m\) is negligible, leading to dynamics solely determined by the non-minimal coupling and the \(\beta_{w}\) parameter. In the presence of a non-negligible value of \(m\), one might tune the parameters in order to delay the nucleation condition very close to the onset of kination. In such a case, due to the chosen parameterisation of the e.o.s., lower values of \(\beta_{*}\) can be achieved. The framework presented in this paper can also be adjusted to study different realizations of the scalar potential with various field content and gravitational couplings. One can also analyze various inflationary models, going beyond the quintessential inflation, in which case the methods described in this work allow for a novel probe of the early Universe. We thank Dr. Michal Artymowski for useful discussions. We acknowledge funding from the Polish National Agency for Academic Exchange (NAWA) and the Fundacao para a Ciencia e a Tecnologia (FCT) within the bilateral Programme for Cooperation in Science between Portugal and Poland, project 2021.09261.CBM. This work was supported by the Polish National Agency for Academic Exchange within Polish Returns Programme under agreement PPN/PPO/2020/1/00013/U/00001 and the Polish National Science Center grant 2018/31/D/ST2/02048. J. R. is supported by a Ramon y Cajal contract of the Spanish Ministry of Science and Innovation with Ref. RYC2020-028870-I. G. L. is supported by a fellowship from "la Caixa" Foundation (ID 100010434) with fellowship code LCF/BQ/DI21/11860024. G. L. and M. P. acknowledge the Fundacao para a Ciencia e a Tecnologia (FCT), Portugal, for the financial support to the Center for Astrophysics and Gravitation-CENTRA, Instituto Superior Tecnico, Universidade de Lisboa, through the Project No. UIDB/00099/2020. M. P. thanks also the support of this agency through the Grant No. SFRH/BD/151003/2021 in the framework of the Doctoral Program IDPASC-Portugal.
2309.10419
Learning from Teaching Assistants to Program with Subgoals: Exploring the Potential for AI Teaching Assistants
With recent advances in generative AI, conversational models like ChatGPT have become feasible candidates for TAs. We investigate the practicality of using generative AI as TAs in introductory programming education by examining novice learners' interaction with TAs in a subgoal learning environment. To compare the learners' interaction and perception of the AI and human TAs, we conducted a between-subject study with 20 novice programming learners. Learners solve programming tasks by producing subgoals and subsolutions with the guidance of a TA. Our study shows that learners can solve tasks faster with comparable scores with AI TAs. Learners' perception of the AI TA is on par with that of human TAs in terms of speed and comprehensiveness of the replies and helpfulness, difficulty, and satisfaction of the conversation. Finally, we suggest guidelines to better design and utilize generative AI as TAs in programming education from the result of our chat log analysis.
Changyoon Lee, Junho Myung, Jieun Han, Jiho Jin, Alice Oh
2023-09-19T08:30:58Z
http://arxiv.org/abs/2309.10419v1
Learning from Teaching Assistants to Program with Subgoals: Exploring the Potential for AI Teaching Assistants ###### Abstract. With recent advances in generative AI, conversational models like DataGPT have become feasible candidates for TAs. We investigate the practicality of using generative AI as TAs in introductory programming education by examining novice learners' interaction with TAs in a subgoal learning environment. To compare the learners' interaction and perception of the AI and human TAs, we conducted a between-subject study with 20 novice programming learners. Learners solve programming tasks by producing subgoals and subsolutions with the guidance of a TA. Our study shows that learners can solve tasks faster with comparable scores with AI TAs. Learners' perception of the AI TA is on par with that of human TAs in terms of speed and comprehensiveness of the replies and helpfulness, difficulty, and satisfaction of the conversation. Finally, we suggest guidelines to better design and utilize generative AI as TAs in programming education from the result of our chat log analysis. Generative AI, CS Education, Human-AI Interaction, Subgoal Learning + Footnote †: footnote]Authors contributed equally as second authors to this research. ## 1. Introduction A common goal of introductory programming courses is to teach learners how to program. While writing code is an essential requirement of programming, students are expected to learn other important concepts such as debugging, designing algorithms, and techniques in programming (Hardt et al., 2017). Learners also learn computational thinking (Hardt et al., 2017), a new way of thinking that involves the abstraction of problems, decomposing them, and re-composing them into working solutions (Hardt et al., 2017; Krizhevsky et al., 2014). Having to learn these new concepts in a single course presents difficulties to the learners (Krizhevsky et al., 2014), and if this difficulty is not appropriately alleviated, the learners may lose motivation and even drop out of the course (Krizhevsky et al., 2014; Krizhevsky et al., 2014). Therefore, many programming courses employ teaching assistants (TAs) to closely attend to the learners' needs and provide feedback. TAs play a crucial role in correcting learners' misconceptions and fixing errors in their code, thereby enhancing their overall learning gain (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014). Having a sufficient number of TAs allows learners to receive individual care by getting help in solving programming tasks and clarifying programming concepts (Krizhevsky et al., 2014; Krizhevsky et al., 2014). With recent advances in generative AI and Large Language Models (LLMs), the educational field has discovered some exciting opportunities for assisting learners. Open-domain generative models are trained to be empathetic (Krizhevsky et al., 2014) and perform well in unseen contexts for better reliability (Beng et al., 2016), which are important qualities for TAs when interacting with learners. More recent large generative models such as ChatGPT1, LLaMA (Liang et al., 2017), and Bard2 can also participate in longer conversations by remembering the context of the conversations. Footnote 1: [https://chat.openai.com/](https://chat.openai.com/) Footnote 2: [https://bard.google.com/](https://bard.google.com/) In the context of programming education, these models also show a remarkable ability to understand, generate, and explain code, making them strong candidates for TAs in programming courses (Liang et al., 2017). They can fix errors present in the code, explain why the errors occur, and discuss possible approaches to solve various programming tasks. AI coding assistants have been shown to relieve the cognitive load and struggles of learners, allowing them to perform better and faster in solving programming tasks (Liang et al., 2018). However, prior work has explored the pedagogical abilities of conversational agents and showed that they still fall behind human teachers in providing necessary help to the students (Liang et al., 2017). Also, generative models sometimes produce inconsistent code, generating different responses to the same prompt on different occasions (Liang et al., 2018). This underscores the importance of a meticulous approach when implementing LLMs within real classrooms. In this note, we introduce the concept of subgoal learning to novice learners when conversing with the AI TAs to learn programming concepts. Subgoal learning is well known to be an effective learning strategy in the STEM domain by helping students break down complex problems into smaller counterparts (Liang et al., 2018). We investigate the potential of AI TAs in educating novice learners in a structural manner by comparing their interactions to those of human TAs. To explore the feasibility of employing generative AI as TAs in introductory programming courses and study how learners and AI TAs interact, we conducted a between-subject study with 20 novice programming learners, of whom 10 had no previous experience in programming, and the remaining 10 had only taken an introductory programming course before. Half of the learners in each group solved 4 programming tasks in each of the two sessions separated by a week with the aid of an AI TA powered by ChatGPT while the other half solved with the aid of a human TA. The learners followed our learning workflow of dividing the task into subgoals and solving them by writing subsolutions, through which learners are expected to develop computational thinking skills. An assessment of AI's ability to help plan an algorithm and write the code for it within our learning workflow was conducted. Learners participated in a survey after each programming session that asked for their perceptions about their satisfaction with the conversation with the TA and the learning workflow. Seven learners participated in a retention test and an interview conducted a week after the second programming session. Our results show that learners who received help from the AI TA solved the programming tasks faster and attempted more tasks. They also achieved comparable scores for the tasks. In the survey, learners reported that AI TA's replies were prompt, sufficiently detailed, and helpful throughout the workflow. Moreover, learners were satisfied with the conversation with the AI and perceived that it was generally uncomplicated and helpful for learning programming. The analysis of the chat log reveals the different behaviors of the AI and human TAs and design opportunities for a better AI TA. The contributions of this paper are as follows: * Results from a between-subject study that shows the learners' programming performance with human and AI TAs. * An analysis of the strengths and weaknesses of AI TA in programming from the chat log analysis * A set of design guidelines for designing an effective AI TA in programming ## 2. Related Work ### Subgoal Learning Subgoal learning is a method designed to assist students in breaking down complex problem-solving procedures into smaller structural components within the STEM domain (Liang et al., 2018). This strategy has demonstrated significant effectiveness in improving learners' ability to transfer knowledge across tasks that share similar subgoals (Liang et al., 2018; Lee et al., 2019). In the context of programming education, subgoal learning is known to be helpful in reducing the extraneous cognitive load of the learners, thereby enhancing their problem-solving performance (Liang et al., 2018; Lee et al., 2019). The effectiveness of subgoal learning is further amplified when implemented as an active learning strategy. Conventional subgoal learning involved the provision of pre-defined subgoal labels, making learners learn in a passive manner (Liang et al., 2018). Such a passive learning approach was found to be less effective compared to self-directed learning methods, which involve self-reflection and explanation of the hierarchical structure of the solutions (Liang et al., 2018; Liang et al., 2018). Yet, proper guidance or feedback is necessary to correct learners' misconceptions of the concept when creating subgoals by themselves (Liang et al., 2018; Liang et al., 2018). Previous studies on integrating self-motivated subgoal learning into the programming domain focus on utilizing learners' experience as guidance to other novice learners through interactive platforms. Crowdy introduces the concept of _learnersourcing_ to generate subgoal labels from introductory web programming videos, thereby assisting learners in more effective comprehension of programming concepts (Liang et al., 2019). Algosolve supports novice programming learners in creating high-quality subgoal labels by leveraging peer examples (Liang et al., 2019). However, there is still a lack of research done on utilizing AI models as an instructor to clarify learners' questions and guide them in the process of self-labeling subgoal tasks. ### Generative AI for Programming Education Generative AI exhibits remarkable performance in various programming tasks, such as code repair (Liang et al., 2018; Lee et al., 2019), code summarization (Liang et al., 2018), code generation (Liang et al., 2018; Lee et al., 2019), and even code explanation (Liang et al., 2018). This recent advancement in generative AI opens up numerous opportunities to support programming education for various stakeholders, including instructors, students, and teaching assistants. #### 2.2.1. Instructors Leveraging LLMs can significantly elevate the effectiveness of instructors in programming courses. These models serve as valuable tools for addressing various instructional tasks, such as answering students' questions, assessing assignments and quizzes, and developing lesson plans (Liang et al., 2018). Codex (Liang et al., 2018), a descendant of OpenAI GPT-3 optimized for programming, serves as an effective aid for university instructors by automating the generation of programming exercises, tests, and solutions, thus saving time and enhancing the quality of educational materials (Sutskar et al., 2017). Instructors can also harness LLMs to deliver personalized learning support to their students, for instance by identifying specific areas where students are struggling with (Sutskar et al., 2018). #### 2.2.2. Students LLMs can also enhance students' programming learning experience. Codex performs better on answering most introductory programming problems than average students (Larson et al., 2017). Therefore, novice learners can gain a deeper understanding of basic programming concepts with line-by-line code explanations generated by LLMs (Larson et al., 2016). They can also receive feedback and detect bugs before they submit their assignments for grading (Sutskar et al., 2017). The nearly instantaneous provision of feedback and explanations makes generative AIs more accessible and convenient for learners compared to traditional human instructors. ### Generative AI as TA TAs in programming education are generally more approachable to the students while being skilled at identifying and addressing each student's special needs (Sutskar et al., 2018). However, TAs face challenges of managing multiple students simultaneously (Larson et al., 2016; Larson et al., 2016). Students often possess varying levels of knowledge and different needs, making it even more demanding for TAs to provide personalized feedback effectively (Larson et al., 2016). Because of this, LLMs can be utilized to offer personalized and immediate responses (Larson et al., 2017), just as TAs are supposed to do. Nevertheless, the straightforward integration of LLMs in this role may lead to various challenges. Despite GPT-3's strong performance on conversation uptake, it falls behind in pedagogical ability when compared to human teachers (Larson et al., 2016). Additionally, an over-reliance on generative AI for generating answers could potentially hinder the development of learners' critical thinking and problem-solving skills (Sutskar et al., 2018). These limitations underscore the importance of strategic planning when implementing LLMs in the classroom. Yet, to the best of our knowledge, examining the performance of LLM compared to human TA has been underexplored. Moreover, the existing literature scarcely explores how to optimally leverage LLMs as TAs, especially in introductory programming education settings. ## 3. Learning Workflow Design With the advances in generative AI technology, programmers can receive significant help when writing the code. The same is true for programming learners; generative AI can write a code outline or even generate the solution code snippet for a given task when only given natural language instructions. This makes learning to write code a less crucial part of programming education as the generative AI can help to write most of the code. On the other hand, learning computational thinking, designing the solution, and understanding and debugging code remain important for a programmer. Our learning workflow is designed to focus on helping learners practice computational thinking and planning out the solution to a problem with the help of generative AI or a human teaching assistant. Learners are guided through a series of steps that help them think about how to divide the task into smaller problems and solve them. In each step, learners come up with mini-solutions that are used in the following step and lead to the final solution. Throughout the learning workflow, we observe how students interact and communicate with a generative AI-powered TA compared to a human TA and how feasible using generative AI as a TA for CS education is. We describe the workflow and steps in detail. ### Overall Learning Workflow In our learning workflow, learners solve each programming task as they progress through three steps: 1) **Subgoal Formulation step**, where learners break down the task into smaller and more manageable problems, 2) **Subsolution Generation step**, where learners tackle each of the subgoals they have formulated and implement a solution for them, and 3) **Solution Generation step** where learners combine their subsolutions into a single code that solves the programming task. The subgoal formulation step drives learners to understand and organize the task and devise a plan for the final solution. The subsolution generation step helps learners focus on a subgoal at a time and progressively write the program. The solution generation step allows learners to review their subsolutions and debug them. Breaking down the problem into subgoals and writing the solution code can be challenging for the learners, especially if they are new to programming. Therefore, at every step of the workflow, a TA, either an instance of prompted generative AI or a human TA, is available for the learners to ask questions. The TA is told about the steps that the learners go through and is asked to guide learners through the steps by answering their questions and giving feedback. Figure 2 shows the environment that the learner uses to solve the programming tasks. The learner can chat with the TA, navigate between the subgoals, check the task description, edit the code, and run the code to check the outputs on the system. ### Subgoal Formulation When a learner selects a programming task to solve, the learner proceeds to the subgoal formulation step. The learner is initially provided with the task description that includes the requirements of the task, sample inputs, and sample outputs. The code editor shows the skeleton code for the problem. It remains uneditable for this step to help the learner focus on formulating the subgoals while giving the learner an overview of the code. The chat interface on the system allows the learner to communicate with either an AI or a human TA in real time. The learner engages in a conversation with the TA to produce a set of subgoals for the programming task. Once the learner is satisfied with the set of subgoals, the learner moves on to the next step. This step is designed to encourage the learner to plan ahead instead of jumping right into coding to solve the programming task. Chatting with the TA gives a chance to the learner to clarify any requirements or misunderstandings they might have about the task. The TAs are instructed to remind the learner about the task description and encourage them to write specific subgoals for the task. In order for the learner to take the initiative in formulating the subgoals, the TAs are also instructed not to provide the subgoals the learner has not already mentioned and only correct the learner if the subgoals seem incorrect. As the learner formulates the subgoals on their own, we expect them to familiarize themselves with the programming task and form an idea of what to expect in the following steps. ### Subsolution Generation After the learner formulates a set of subgoals to the task, they move on to the subsolution generation step. They are presented with the subgoals they have formulated in the previous step, and the system asks them to write the solution code for each of the subgoals. The system provides a code editor that the learner writes their subsolutions on. The code can be executed on the page itself, with input and output features to test their subsolutions as they write. The learner thus can easily check the outcome of their code without leaving the system. The learner communicates with the TA in the same chat interface as the previous step in real time to get help on how to implement the code that solves each of the subgoals they have formulated. For the generative AI TA, a separate chat session is made for each subgoal and subsolution to isolate each subgoal context for better AI performance. The learner translates their subgoals into code implementation that solves them in this step. Although generative AI can provide significant assistance when writing code, learning to write code is still an important part of computer science education. The learner communicates with the TA to ask about how to approach the problem algorithmically, how to use a certain syntax or debug the code they are writing. The TAs are instructed to help the learner step by step to create a program solution for a subgoal. The TAs should not provide the solution directly and refrain from providing the solution code to the learner. Only when the learner is struggling the TAs help them by providing some code snippets that can help the learner progress in the task. The learner practices writing code and reading and understanding the code that the TA presents as an example. ### Solution Generation In the final step, the learner revises the code that they have written in the subsolution generation stage. The user interface of the system is identical to that in the subsolution generation stage. The learner is presented with a new chat session and a code editor that contains all the code that the learner has written in the previous step. The learner takes another look at the code and tests the code by running some sample inputs and checking that the code's output is the same as the expected output. For instance, the learner can ask for more sample inputs to test their code, verify their solution with the TA, and modify the code as needed. The solution generation step is designed to give the learner an opportunity to organize the code they have written into a coherent program. Additionally, if the TA helped the learner write parts of the code in the subsolution generation step, the learner can review the code and understand how the subsolution code works and what role it plays in the overall program. The TAs are instructed to help the learner combine the subsolutions into a final solution. They are also reminded not to provide the solution directly and to provide code only when necessary, as in the previous step. The learner finishes up the implementation in this step and submits the solution. Figure 2. System for solving the programming tasks using the learning workflow. ## 4. User Study We evaluate the feasibility of using an AI TA in an educational setting. We observe learners' perceptions of the AI TA and how they interact to solve the programming tasks given. The goals of our user study were (1) to assess the feasibility of using a generative AI-powered TA, (2) to observe and analyze how learners interact and perceive a generative AI-powered TA, and (3) to examine the strengths and weaknesses of a generative AI-powered TA in CS education when compared to a human TA. We conducted a between-subjects study where a participant solves programming tasks using our learning workflow by communicating with either a generative AI-powered TA or a human TA in two sessions, separated by approximately a week. After the two sessions, we interviewed 7 participants to ask about their learning experience and gauge how much they remembered about the content they learned. ### Participants We recruited 20 participants (7 male, 13 female, mean age 23.85, stdev=2.16, max=26, min=19) in Korea who have little to no experience in programming. Our recruitment process involved posting in two online university communities to participate in our two-session study. They reported their perceived proficiency in Python, the programming language used to solve programming tasks in our study, on a 5-point Likert scale. Among the 20 participants, 10 participants had a self-reported proficiency of 1 and had not taken any computer science course. The remaining 10 participants had a self-reported proficiency of 2 and had taken only introductory courses in computer science. We randomly assigned 5 participants in each proficiency level to solve the programming tasks with the help of a generative AI-powered TA and assigned the remaining 5 participants to solve with the help of a human TA. Out of all participants, 19 participants completed both sessions. One participant who was assigned to a generative AI-powered TA and had a self-reported proficiency of 2 dropped out of the study and only participated in the first session. Each session lasted for a maximum of 3 hours, and participants were paid 40,000 KRW (-USD 30) per session. Willing participants were interviewed for an hour and were paid an additional 20,000 KRW (-USD 15). ### Generative AI-powered TA We used ChatGPT (gpt-3.5-turbo) as the model behind the AI TA since GPT-3.5 performs better on programming when prompted in non-English language compared to Codex (Codex, 2015). The default temperature of 1.0 was used. All participants prompted the AI TA in Korean, which was their native language. For the subgoal formulation step, the model was prompted to help the learners create subgoals by first reminding them about the task, providing hints, reiterating the subgoals, and giving feedback. The task description was provided to the model. For the subsolution generation step, the model was prompted to help the learners write a program to solve a subgoal by helping the learners debug, answering questions, likely regarding syntax or function usage, and giving feedback. The subgoal was provided to the model. For the solution generation step, the model was prompted to help the learners combine the subsolutions to create a program by debugging and correcting any mistakes. All the subgoals were provided to the model. The learner's code was appended to each of the learner's messages to provide a full context of the learner's status to the AI TA. In every step, the model was prompted not to provide direct answers, such as subgoals or code snippets, to the learners. Providing the answers to the learners could have a negative impact on learning, as learners may be tempted to merely copy the provided solution, which would remove the opportunity for the learners to practice producing the answers by themselves and learn on their own. The exact prompts given to the model are reported in Appendix A. ### Programming Tasks Participants were given 4 programming tasks to solve in each session. Programming tasks of similar difficulties were presented in the two sessions. Different sets of programming tasks were selected for the participants with different proficiency and experience in programming. For the participants with a reported proficiency of 1, easier problems that test programming concepts such as input, output, loops, if statements, strings, and lists were given. For the participants with a reported proficiency of 2, more demanding problems that test programming concepts such as sorting, searching, replacing, and indexing were given. The difficulty of the tasks was extracted from a crowdsourced platform that collects the difficulty rating of programming tasks 3 when possible; otherwise, the authors who have TA experience in introductory programming courses gauged the difficulty. Footnote 3: [https://solved.ac/](https://solved.ac/) The programming tasks were chosen to present enough challenge to the learners while not being too difficult for novices which can make them give up. The tasks' contents span basic programming concepts necessary for programmers to learn and can comprehensively assess the learners' computer science knowledge. We provide the full description of the tasks in Appendix B. ### Instructions Participants were first given instructions on how to use our online system and were told whether they would be solving the programming tasks with a generative AI-powered TA or a human TA. They were also given some examples of subgoals to help them generate their own in the main study. Once they accessed the system, they were presented with four programming tasks and asked to solve them using our learning workflow as much as possible within the time limit of 3 hours. After the programming session, participants answered an online survey where they reported about their experience with the TA and the system. ### Measurement #### 4.5.1. Qualitative Measurements Learners were asked to rate their experience with the TA and the helpfulness of the learning workflow and the system in the online survey. All of the survey questions were rated using a 7-point Likert scale (1: strongly disagree, 7: strongly agree) followed by an open-ended question asking the reason behind the choice. To measure the learners' satisfaction with the experience with the TA, we asked the learners how prompt and detailed the TA's responses were, how useful the TA's responses were for each of the three steps in the learning workflow, how much difficulty they faced when talking to the TA, how helpful the TA was, and how satisfied they were with the communication with the TA. Learners also reported on how helpful the TA was in learning programming and how willing they are to use the system when they learn programming in the future. The authors conducted a full review of the chat logs for all the learner participants in this study. We went through the messages sent by both the TAs and the learners and collected the number of interesting events occurring in the conversation between the TAs and the learners to understand the interaction between them. We constructed a few categories of replies that would influence the learners' experience in solving the tasks, such as the TAs providing subgoals or code solutions, TAs debugging the learners' code, and pedagogical questions to help the learners learn. #### 4.5.2. Quantitative Measurements The learner's progress and success in solving the programming tasks were gauged with quantitative measurements. We take note of the time taken for learners to complete each task, the number of tasks they attempted, and the scores for the tasks. The scores are calculated by counting the number of test cases the learner's code successfully passes. The authors wrote the 10 test cases for each of the programming tasks. The test cases are designed to test common errors learners make to verify the efficiency, robustness, and accuracy of the code. The number of messages containing interesting events mentioned above was also counted. Observing the number of these messages can show a trend of how learners participate in the conversation differently with AI or human TAs. Similarly, the number of interesting behaviors by the TAs was counted and analyzed as well. ## 5. Results We report the results of the user study, comparing how the learners' performance in the programming tasks differ and how the interaction with the TAs differ between the two types of TAs and two groups of learners with different proficiency level. We refer to the group of learners with a self-reported proficiency in Python of 1 as Group 1 and of 2 as Group 2. We number the programming tasks from P1 to P8. P1 to P4 were presented to the learners in the first session of the user study, and P5 to P8 in the second session. ### Performance Measures In this subsection, we report results that show how well the learners solved the programming tasks and compare them between the two types of TAs. #### 5.1.1. Task Completion Rates We first compare the task completion rate of the learners for the four tasks they were given in each session. We define completion rate as the percentage of tasks a learner made an attempt and produced a code solution within the 3-hour time limit. The task completion rate differs from scores as it does not take into account the correctness of the solution. The completion rates for each task for Groups 1 and 2 are shown in Figure 3. Learners in both groups were able to produce code solutions for most of the programming tasks. Learners who solved the tasks with the AI TA showed higher or equal completion rates for all the tasks across the two groups, although there was no statistical significance that the completion rates were different from an independent samples t-test with an alpha level of 0.05. A noticeable difference between completion rates for learners with the AI TA and the human TA is that learners who solved the tasks with a human TA showed a more dramatic decrease in completion rates for the later tasks in a session. Learners in the AI TA setting attempted to solve more tasks within the same time limit. #### 5.1.2. Time Taken We report the average amount of time a learner spent on each of the programming tasks. The time taken is measured as the time difference between the first learner utterance and the final learner utterance for that task. Figure 4 shows the average time spent on each task in minutes for learners in Groups 1 and 2. We only included the time taken in the data to calculate the average if the learner completed solving the task without reaching the time limit or moving on to the next task before writing the solution. Results show that the learners solved the tasks faster with the AI TA than with the human TA. The difference is more evident with Group 1 learners who had no experience in programming before. Group 1 learners solved all tasks faster with the AI TA while Group 2 learners solved all but one task (P2) faster with the AI TA. The differences in the time taken to solve the task between the two types of TAs in Group 1 learners for P1 and P7 are statistically significant (P-value of 0.024 and 0.0096 respectively from the t-test). The differences in the time taken to solve the task between the two types of TAs in Group 2 learners for P1 and P6 are statistically significant (P-value of 0.036 and 0.0022 respectively). Overall, the average time taken to finish solving a task for Group 1 learners was 31.5 minutes for those with AI TA and 59.8 minutes for those with human TA. The difference in the same time taken between the two types of TAs in both Group 1 and Group 2 is statistically significant from a t-test, with P-values of 0.000096 and 0.0044 respectively. The shorter time taken for learners to solve the task with the AI TA could be the reason why the completion rates for learners with the AI TA are higher for the tasks. A possible explanation of why learners solved tasks faster with AI TA is that the AI TA tends to provide more code to the learners when learners ask for help or struggle. #### 5.1.3. Scores We test the correctness of the learners' solution code for each programming task by comparing its output for 10 test cases with the correct answer. Figure 5 shows the average score for each task in the percentage of test cases passed for the learners. Learners who solved the task with AI TA showed higher or equal scores than those with human TA across all tasks in the first session of the user study for both proficiency groups, although the differences did not reach statistical significance. The scores in the second session show mixed results, with learners in the AI setting performing better for some tasks while the learners in the human setting perform better for the other tasks. The average score for all tasks for learners in the AI setting was 59.5 for Group 1 and 71 for Group 2. The average score for all tasks for learners in the human Figure 4. Average time taken to complete solving the programming tasks for the two groups of learners. Tasks with significant differences between the TA types are indicated with the asterisk (*). Figure 5. Scores for the programming tasks for the two groups of learners. Figure 3. Mean completion rate for programming tasks. setting was 50.25 for Group 1 and 53 for Group 2. The differences did not reach statistical significance. An interesting observation is how learners in Group 1 scored 100% for the first two tasks in the second session with the human TA. The chat logs show that the human TAs made sure that the students passed the test cases for those two questions by guiding them attentively to produce the correct solution and debugging the code when necessary. Overall, both the AI and the human TAs helped the learners achieve similar scores for the programming tasks over the two sessions of user study. The results from the three performance measures suggest that the AI TA performs no worse or even better than the human TA when helping learners solve programming tasks. The AI TA helped the learners solve more tasks much faster in both Groups 1 and 2 while getting comparable scores for the tasks. ### Perception of the TAs Learners participated in a survey consisting of 7-point Likert scale questions that asked about their perception of the speed and comprehensiveness of the TA's replies, the usefulness of the conversation with the TA, and difficulties and satisfaction of the communication during the user study. Each Likert scale question was followed by a short open-ended question asking for an explanation for their choice. The learners' responses to the Likert scale questions are shown in Figure 6. We refer to the learners by assigning them numbers from L1 to L20 when quoting their replies. #### 5.2.1. Speed and Concreteness of the TA's Replies Learners in both groups were generally satisfied with the promptness of the TA's replies regardless of the TA type. There was no statistically significant difference in the learners' perception of TA's promptness between the TA types. However, there were some negative remarks by L16 (Group 2, human TA) and L20 (Group 2, human TA) who mentioned "_I had to wait for a long time before moving on to the next step as the replies were not immediate._" and "_I had a lot to ask because I have not programmed much, but it was too slow_". #### 5.2.2. Usefulness of TA's Replies Learners rated the usefulness of the TA's replies in the three steps of the user study: subgoal formulation, subsolution generation, and solution generation. Learners generally felt positive that the TA's replies were useful in all of the three steps. However, there were some negative remarks about the usefulness of the TA in the subgoal formulation stage. L4 (Group 1, AI TA) mentioned that "_sometimes, a subgoal that I thought was good was not needed_", and L5 (Group 1, AI TA) said, "_The reply was not detailed enough when I asked how the three subgoals were different_". L15 (Group 1, human TA) and L16 did not talk to the TA during the subgoal formulation step because as L15 said, "_I did not use the chat as I thought there are no 'correct' subgoals and it is up to me to set them._". L20 said the TA's replies were too slow. Learners gave the best scores for the usefulness of the TAs in the subsolution generation step. Learners felt that the TAs helped them translate the subgoals into subsolutions well. For example, L6 (Group 2, AI TA) said "_The TA explained thoroughly what functions were necessary and showed examples of how to use them_". L18 (Group 2, human TA) mentioned that "_once the subgoals are formulated, it becomes a problem of programming knowledge rather than critical thinking. I could ask the TA for information_". L19 (Group 2, human TA) said that the TA was helpful when debugging. Learners also mentioned that the TA's feedback was useful in this step. L10 (Group 2, AI TA) mentioned that "_Even though my questions were vague, the TA provided detailed feedback and hints by catching the problems in my code_". For the usefulness of the TAs in the solution generation step, there is a statistically significant difference between the scores given by Group 1 learners in the two TA settings. Group 1 learners in the human TA setting felt the conversation with the TA was more useful (_U_=18.5, _p_=0.034) compared to Group 1 learners with the AI TA. Some Group 1 learners gave a low score for this question because, according to L3 (Group 1, AI TA), "_The conversation was not necessary as I have already written the final code in the previous step_". This phenomenon was observed across all learners in all groups, as the average number of turns in the conversation during the solution generation step is significantly lower compared to that during the other steps, being less than 10% of the number of turns of conversation during the subsolution generation step. #### 5.2.3. Learners' Satisfaction and Learning Effect Learners generally felt that having a conversation with the TA was not difficult for both the AI and human TAs. Some learners, however, felt that the conversation could be improved. Interestingly, the reasons for the difficulty are different for the AI TA and the human TA. With the human TA, the difficulty came from relational problems. L11 (Group 1, human TA) and L16 reported that since they did not know the basic programming syntax, it was difficult to communicate with the TA. L19 said, "_I had no idea what kind of conversation I could make with the TA, and I could not get help_". L15 felt daunted as she thought she was asking stupid questions. On the other hand, when talking to an AI TA, learners faced difficulty with the contents of the reply. L6 said, "_The AI gave a response different from what I wanted, or sometimes it wrote all of the code_". Furthermore, L7 (Group 2, AI TA) mentioned "_When I ask follow-up questions, I felt like the answer was not connected with the previous reply_" and L3 reported "_I was stuck when the AI does not understand me_". Despite the difficulties faced during the conversation, learners were mostly satisfied with the conversation with the TAs. L13 (Group 1, human TA) mentioned "_I might have been frustrating to teach, but the TA replied kindly till the end_". On the other hand, L20 said "_The TA was kind and tried to teach me what I didn't know, but the communication was not always smooth_". For the AI TA, L3 said "_I could talk to the TA naturally, and it helped me solve the problem, but sometimes I could not understand the reply_". The learners reported how helpful the TA was for learning programming. While the learners felt that the TAs were overall helpful, there was negative feedback, especially for Group 1 learners who had the AI TA. Group 1 learners in the AITA setting felt that the TA was less helpful compared to Group 1 learners with the human TA (_U_=11, _p_=0.006). The main reason for this is that without the basic programming knowledge, the AI TA allowed the learners to solve the tasks, but they were unsure whether they had picked up knowledge in the process. As L4 said, "_The TA was helpful for programming, but since it provided most of the code, I felt that it was not effective for learning_". However, Group 2 learners felt that the AI TA was helpful, as the TA taught the learners new ways to solve the problem and how to write concise code. Finally, learners reported if they would want to use the platform again when learning programming. Most learners would use the platform again for different reasons. L1 (Group 1, AI TA) and L3 felt that the fact that they could learn without any time constraints with the AI was a reason for using the platform again. Other learners reported that having a TA to help make the learning more efficient and fun. ### Interview Results In the interview, learners solved one question that they had solved in the first user study session within a 30-minute time limit. Group 1 learners solved P2, while Group 2 learners solved P1 again. The learners went through the same steps of subgoal formulation, subsolution generation, and solution generation in the interview. We summarize the interview results. All learners were able to formulate the subgoals correctly regardless of the group and TA settings in the interview. Learners reported that they were able to produce the subgoals better in the interview, as they could think about how the program would run and how to code and write them down as executable subgoals. Learners with the AI TA mentioned that they used the TA to confirm that their subgoals were correct and to make the subgoals more detailed with the TA's feedback. On the other hand, learners who talked with a human TA reported that they did not talk about subgoals much during the subgoal formulation step. Instead, the learners confirmed with the TA to make sure their understanding of the question was correct. For the subsolution generation step, learners with the AI TA mentioned that the TA helped them solve different kinds of tasks and write concise code. However, they also reported a problem with the AI TA that the TA provided too many hints and code, which the learners made use of to solve the problem. Specifically, L4 said that she was more focused on solving the tasks with the TA rather than trying to learn from the experience, as the TA easily provided the code when asked. L6 mentioned that if he asks the AI TA after thinking about the problem on his own, the TA will give the best answers. Learners with the human TA, on the other hand, preferred to have the TA provide more information. L11 mentioned that he would prefer the TA to provide example code rather than just mentioning what function or structures he needed to use. L17 would have liked the TA to provide several different approaches to solve the problem. ### Chat log Analysis The authors went through all the chat logs between the learners and the TAs to analyze the characteristics of messages learners, AI TAs, and human TAs send during each of the steps. We report some of the interesting characteristics of the interaction with the TAs. #### 5.4.1. Number of messages sent The total number of turns the learners and TAs make in the user study is 4,180 turns across the three steps, counting consecutive utterances as one turn. The number of messages sent in each of the steps is 885, 3,001, and 294 for subgoal formulation, subsolution generation, and solution generation steps respectively. In the subgoal formulation step, the number of turns in the conversation with the AI TA (607) is approximately double the number of turns with the human TA (278). The reason behind this could be that ChatGPT which powers the AI TA sends a reply to every user message. When learners send each subgoal in a different message, the AI TA replies to the messages individually, while the human TA will answer them collectively in a single message. Furthermore, as the AI TA gives a comment for every message, it sometimes leads to other messages such as clarification from the learner, further increasing the number of turns. In the subsolution generation step, there was not much difference in the number of turns between the two TA types. For human TA, 1,405 turns were made, and for AI TA, 1,596 turns were made in the conversations. As mentioned before, not many turns were present in the conversation in the solution generation step. On average, 3.36 turns were made with the AI TA, and 7.4 turns were made with the human TA. Nearly all the conversations made in this stage were to debug the code when the code did not produce the correct output for the sample input. #### 5.4.2. Feedback on subgoals In the subgoal formulation step, learners talk to the TAs to divide the programming task into subgoals. The ideal role of the TA in this step is to provide feedback on the learner's subgoals such as changing an abstract subgoal into a more concrete and executable subgoal. From the chat logs, we observed that the AI TA provides more feedback to the learners and the feedback is more detailed. Human TAs, on the other hand, preferred to leave the formulation step up to the learner and give feedback during the subsolution generation step instead. For example, a user came up with a subgoal, "_Remove numbers from the list if it is divisible by a square number_", which involves several steps such as setting a range of square numbers and calculating the square numbers. However, the human TA did not give any feedback to the subgoal and moved on. On the other hand, when a learner gave a subgoal "_check if the word is a palindrome_" to the AI TA, the AI TA split the subgoal into two steps, "_flip the word_" and "_check if the two words are equal_", creating two more executable and concrete subgoals to aid learning. However, the AI TA sometimes provided the full set of subgoals to solve the task voluntarily, which was observed 13 times for the Group 1 learners and 24 times for the Group 2 learners. No such case was observed for the human TAs. The AI TA often misunderstood the prompt given and provided the subgoals at the beginning of the subgoal formulation step. Other times, the AI TA provided the remaining subgoals when the learner only came up with the first subgoal for the task. The AI TA also provided the concrete subgoal when the learner just asked for a hint. #### 5.4.3. Number of times the TA showed the answer code Showing an example of code is often necessary in programming education to teach the learners about syntax, function usage, and ideation. The TAs sometimes show a part of the answer code to the learner if the learner seems to struggle or when the learner makes a mistake in the code and correction is required. Knowing when to provide code is important as showing the answer code too early can take away the opportunity for the learner to learn by doing and showing the answer code too late may lead to learners' frustration. The authors counted the number of times the TA provided the answer code. In total, the AI TA provided the answer code 175 times during the conversation while the human TAs provided the answer code only 53 times. In order to analyze when the TAs offered code, the authors annotated the occurrence into three categories: (1) the TA provides the code that the learner asked for, (2) the TA provides more code than what was asked for, (3) the TA provides the code voluntarily. The number of occurrences for the three cases is shown in Table 1. The results show that the AI TA voluntarily provides the answer code more often than the human TA. This corroborates the Group 1 learners' perception of the inadequate helpfulness of the TA in learning programming; the AI TA may provide the answer code too often, potentially depriving learners of the opportunity to independently engage in coding. The AI TA usually voluntarily offered the part of the answer code corresponding to the following subgoal when the learner finished coding a subgoal. For Group 1 learners, the AI TA also provided a part of the answer code when the learner mentioned that they did not understand the task or only mentioned what they intended to do next. On the other hand, the human TA provided the code voluntarily only when the learner was stuck at a step for an extended duration. Also, they offered the code when the programming concept was difficult to explain only in words, such as when explaining the formatted printing statement. What is interesting is that the learners asked for the code more often with the AI TA than with the human TA, with more than double the number of requests for code. Furthermore, human TAs often refused to provide the answer code directly when the learner asked for help; they tried to explain the syntax or the algorithm in words first and gave a chance for the learner to come up with the code on their own. The AI TA, on the other hand, provided the code nearly always when the learner asked. The AI TA also provided additional code that would help the learner solve the problem, such as the code that dealt with the following subgoal. Therefore, learners with the AI TA might have been more inclined to ask for the answer code to solve the tasks faster. #### 5.4.4. Understanding the learners and step-by-step guidance The TAs often asked questions to gauge the learner's knowledge and understanding of programming. Both the AI TA and human TA asked questions like "_Do you know how to sort the result in ascending order?_" (AI TA) and "_Do you remember how to split a string with respect to a certain character?_" (Human TA). By asking these questions, the TA estimated the learner's knowledge and adjusted the teaching plan by explaining only what was necessary. Additionally, the TAs asked the learners if they understood the TA's explanation to verify if further explanation was required. It is noteworthy that the AI TA exhibited the ability to ask these types of questions, even though it could simply provide an explanation of the concept and move on without seeking learners' feedback. Both the human and AI TAs sometimes asked learners to run a sample code that was not part of the solution code to give the learners an understanding of how the code executes. Especially when the TA was explaining new concepts such as for-loops and if statements, the TA provided a sample code for the learners to execute and asked them to report the results. By doing so, the TA can explain the details of the code using the sample output the learner produced, and the learner can learn by doing instead of Figure 6. Summary of the survey results on the perception of the TAs. Questions that show a statistically significant difference between the two types of TAs are marked with the asterisk (\(*\)). just listening to the TA's explanations. The human TAs (16) had a higher number of such cases than the AI TAs (7). ## 6. Discussion The results from our user study show how learners and the two types of TAs interact with each other in the learning workflow to learn programming with subgoals. We discuss the results in terms of the goals of our user study. ### Feasibility of Using Generative AI as a TA The performance measures of the learners for our user study show that generative AI can function as a TA in teaching introductory programming with subgoals. Learners are able to solve various programming tasks covering different concepts taught in introductory programming with the help of the AI TA. The AI TA helps the learners solve more tasks faster when compared to the learners who receive help from the human TAs while achieving comparable scores on the tasks. Remarkably, the AI TA can assist even absolute beginners in programming to create the subgoals and solve the programming tasks. The interview results show that the learners retained what they had learned to a certain extent, allowing them to correctly divide the subgoals and write functional code on their own about two weeks after the first study. ### Interaction and Perception of the AI TA Learners' perception of the AI TA was generally positive and on par with that of the human TA in several aspects. Learners feel that the AI TA's responses are fast and detailed enough to help them solve the programming tasks. The learners have mixed feelings about the usefulness of the conversation with the TAs across the three steps. The conversation in the subsolution generation step was perceived as the most useful, while the conversation in the other two steps showed some negative remarks. Conversation with an AI TA is not very challenging for the learners and is generally satisfactory, but complete beginners expressed some doubts about the helpfulness of the AI TA in learning programming. The survey results show some differences between the perception of the AI TA by Group 1 and Group 2 learners. Group 2 learners showed a much better perception of the AI TA compared to Group 1 learners, even exceeding the perception of the human TAs by the learners in the same group. This shows that the AI TA is better suited for programming learners who already possess some prior programming knowledge. From the chat log analysis, learners showed a higher tendency to request code assistance when talking to the AI TA. In contrast, with the human TA, learners asked questions about the code indirectly by describing the issues in their code and their intentions. However, learners make direct questions to the AI TA, asking for explanations about what is wrong with their code and how to fix it. ### Strengths and Weaknesses of AI TA A noticeable strength of the AI TA is the large amount of detail that it provides in the replies to the learners' questions. In the subgoal formulation stage, the AI TA leaves feedback on individual subgoals, providing the reason why the subgoal is essential for the task. On the other hand, the human TA provides fewer explanations for the subgoals and usually assesses if they are reasonable. In the subsolution and solution generation steps, the AI TA provides more details than the human TA. The AI TA's replies are more structured, reiterating the learner's question and providing the answer with an explanation for the answer. Such structured replies can be beneficial for learning as the learner is reminded of the full context of the problem and how to solve it. The AI TA is also more supportive of the learners. The AI TA frequently encourages the learners by saying "good job!" or "that is correct!". With such remarks, learners grow more confident in programming and are encouraged to learn more. However, the main weakness of AI TA lies in providing excessive information to the learners. The AI TA is oriented to help the learner solve a task without a strong focus on the educational benefits of the learners. As shown in our results from the chat log analysis, the AI TA provides the subgoals and subsolutions much more often than the human TA. While assistance is valuable, providing too much code can hinder the learning process by depriving learners of the opportunity to solve the problem on their own. For complete beginners in programming, providing too much information often overwhelms them and leads to even more questions. ### Design Guidelines We observe that the the AI TA is most effective when learners inquire about the subsequent steps after they have completed coding for all previous subgoals. The AI TA also guides the learner step by step if the learner's question is phrased in a way that seeks confirmation of the next step of the problem-solving process from the TA, not asking for direct answers. If a learner can learn to ask questions in this manner, an AI TA can provide detailed information to solve a programming task to learner. On the other hand, for complete beginners in programming, human TAs are better suited to guide them in solving programming tasks. The human TA is often better able to understand the learner's struggles and is more attentive to the small details in programming \begin{table} \begin{tabular}{c c c c c} \hline \hline Learner Setting & Code that was asked for & More code than asked & Voluntary code & Sum \\ \hline Group 1 - AI & 64 & 12 & 24 & 100 \\ Group 2 - AI & 44 & 15 & 16 & 75 \\ Group 1 - Human & 31 & 1 & 7 & 39 \\ Group 2 - Human & 12 & 2 & 0 & 14 \\ \hline \hline \end{tabular} \end{table} Table 1. Number of times the TA offered the answer code to the learners, divided into three categories: (1) TA provides the code that the learner asked for, (2) TA provides more code than what was asked for, and (3) TA provides the code voluntarily. that beginners have to pay attention to. Also, when testing the code, human TAs are better at catching the edge cases and removing rare errors in the code. With these observations in mind, we suggest a few guidelines for the successful integration of AI TAs in programming education. 1. _Restrict the information the TA provides_. The main weakness of the AI TA is that it provides the answer too easily. Regardless of whether the goal is to write the subgoals or the solution code, the AI often provides the answer when the learner ever-so-slightly requests a sample solution. One way to achieve this is to append an additional prompt at the end of the learner's prompt that explicitly requests the AI not to provide the answer before the learner makes a certain number of the same request. Completely prohibiting the AI TA from providing some form of solution might lead to learner's frustration. 2. _Teach the learners to write better prompts_. The AI TA's performance is highly affected by the prompt that it receives. By writing a better prompt, learners are able to obtain precisely what they ask for, with answers that consider the learning effect. We observed that describing the learner's context and what help the learner needs specifically often leads to the best results. A template prompt, for example, can be provided to the learners. 3. _Motivate the learners_. Motivated learners are more eager to learn and ask more questions. AI TAs have knowledge comparable to human TAs, but they only provide information when asked. Additionally, motivated learners are less likely to directly ask for the solution, which addresses the main weakness of the AI TA. ### Limitations The number of learners who participated in the user study is relatively small, at 20 participants. One participant dropped out in the middle of the user study and did not participate in the second session, which caused an imbalance in the number of participants in each setting. Although the number of participants was sufficient to show some trends in the learner's experience, a study with more participants will result in more reliable and generalizable findings. The user study ran for two weeks, which can be a short period of time to measure the learning gain of the participants. The participants mentioned in the interview that two sessions of learning were not enough for them to learn a lot about programming. A study with more sessions would reveal the long-term effects of learning with the two types of TAs and is left as future work. As all participants' native language is Korean, Korean was used as the language for communication with the AI TA. As the performance of generative AI may change with the language, it is unsure how the AI may perform differently in a different language. We believe that the results will not differ drastically in another language, but further work is necessary to determine the effect. ## 7. Conclusion The advances in generative AI have opened the opportunity for Als to take the role of teaching assistants in programming. We explore the potential for AI teaching assistants to teach computational thinking and writing code to a programming novice and how the learners interact and perceive the AI as teaching assistants. Our results show that AI TAs are as capable as human TAs in programming education, with learners showing similar performance in terms of score and time taken with both types of TAs. Learner's perception of the AI TA is also positive, especially for learners with some previous experience in programming. The analysis of the chat log between the learners and the TAs shows some characteristics of the conversations, such as a more common request for code to the AI TA, and reveals opportunities for designing better AI TAs in programming education.
2309.08256
Sampling-Free Probabilistic Deep State-Space Models
Many real-world dynamical systems can be described as State-Space Models (SSMs). In this formulation, each observation is emitted by a latent state, which follows first-order Markovian dynamics. A Probabilistic Deep SSM (ProDSSM) generalizes this framework to dynamical systems of unknown parametric form, where the transition and emission models are described by neural networks with uncertain weights. In this work, we propose the first deterministic inference algorithm for models of this type. Our framework allows efficient approximations for training and testing. We demonstrate in our experiments that our new method can be employed for a variety of tasks and enjoys a superior balance between predictive performance and computational budget.
Andreas Look, Melih Kandemir, Barbara Rakitsch, Jan Peters
2023-09-15T09:06:23Z
http://arxiv.org/abs/2309.08256v1
# Sampling-Free Probabilistic Deep State-Space Models ###### Abstract Many real-world dynamical systems can be described as _State-Space Models_ (SSMs). In this formulation, each observation is emitted by a latent state, which follows first-order Markovian dynamics. A _Probabilistic Deep SSM_ (ProDSSM) generalizes this framework to dynamical systems of unknown parametric form, where the transition and emission models are described by neural networks with uncertain weights. In this work, we propose the first deterministic inference algorithm for models of this type. Our framework allows efficient approximations for training and testing. We demonstrate in our experiments that our new method can be employed for a variety of tasks and enjoys a superior balance between predictive performance and computational budget. State-Space Model, Gaussian Filter, Moment Matching, Weight Uncertainty. ## 1 Introduction Modeling unknown dynamics from data is challenging, as it requires accounting for both the intrinsic uncertainty of the underlying process and the uncertainty over the model parameters. Parameter uncertainty, or epistemic uncertainty, is necessary to address the uncertainty arising from incomplete data. Intrinsic uncertainty, also known as aleatoric uncertainty, is essential to represent the inherent stochasticity of the system [1, 2]. Deep state-space models [3, 4, 5] offer a principled solution for modeling the intrinsic uncertainty of an unidentified dynamical process. At their core, they assign a latent variable to each data point, which represents the underlying state and changes over time while considering uncertainties in both observations and state transitions. Neural networks with deterministic weights describe the nonlinear relationships between latent states and observations. Despite offering considerable model flexibility, these deterministic weights ultimately limit the models' ability to capture epistemic uncertainty. On the other hand, most prior works that take weight uncertainty into account make either the simplifying assumption that the transition dynamics are noiseless [6, 7, 8] or that the dynamics are fully observed [2, 9]. Both assumptions are not satisfied by many real-world applications and can lead to miscalibrated uncertainties. There also exists a large body of work for state-space models [10, 11, 12] that use Gaussian Processes to model state transition kernels instead of probabilistic neural networks. While these methods respect both sources of uncertainty, they do not scale well with the size of the latent space. Finally, there is the notable exception of [13] that aims at learning deep dynamical systems that respect both sources of uncertainty jointly. However, this approach requires to marginalize over the latent temporal states and the neural network weights via plain Monte Carlo, which is infeasible for noisy transition dynamics. We address the problem of learning dynamical models that account for epistemic and aleatoric uncertainty. Our approach allows for epistemic uncertainty by attaching uncertainty to the neural net weights and for aleatoric uncertainty by using a deep state-space formulation (see Sec. 3). While this model family promises flexible predictive distributions, inference is doubly-intractable due to the uncertainty over the weights and the latent dynamics. The main contribution of this paper is a sample-free inference scheme that addresses this pain point and allows us to efficiently propagate uncertainties along a trajectory. Our deterministic approximation is computationally efficient and accurately captures the first two moments of the predictive distribution. It can be used as a building block for multi-step ahead predictions (see Fig. 0(a)) and Gaussian filtering (see Fig. 0(b)). Furthermore, our model approximation can be used as a fully deterministic training objective (see Sec. 4). The runtime of our method is analyzed in Sec. 5. The paper is complemented by an empirical study (see Sec. 6) that begins with an in-depth examination of each individual building block, showcasing their unique strengths. Afterward, we integrate all components and apply our approach to two well-established dynamical modeling benchmark datasets. Our method particularly excels in demanding situations, such as those involving noisy transition dynamics or high-dimensional outputs. ## 2 Background We recap relevant background material before we introduce our model. In Sec. 2.1, we give an introduction to deep state-space models. Assumed density approximations and Gaussian filtering form the core of our deterministic inference algorithm and are reviewed in Sec. 2.2 and Sec. 2.3. ### _Deep State Space Models_ A _State Space Model_ (SSM) [14] describes a dynamical system that is partially observable. The true underlying process with latent state \(x_{t}\in\mathbb{R}^{D_{x}}\) emits at each time step \(t\) an observation \(y_{t}\in\mathbb{R}^{D_{y}}\). The latent dynamics follow a Markovian structure, i.e., the state at time point \(x_{t+1}\) only depends on the state of the previous time point \(x_{t}\). More formally, the generative model of a SSM can be expressed as \[x_{0} \sim p(x_{0}), \tag{1}\] \[x_{t+1} \sim p(x_{t+1}|x_{t}),\] (2) \[y_{t} \sim p(y_{t}|x_{t}). \tag{3}\] Above, \(p(x_{0})\) is the initial distribution, \(p(x_{t+1}|x_{t})\) is the transition density, and \(p(y_{t}|x_{t})\) is the emission density. A _Deep State-Space Model_ (DSSM) is a SSM with neural transition and emission densities. Commonly, these densities are modeled as input-dependent Gaussians [5, 15]. However, there exists also concurrent work that proposes more expressive densities [16]. ### _Assumed Density Approximation_ The \(t\)-step transition kernel propagates the latent state forward in time and is recursively computed as \[p(x_{t+1}|x_{0})=\int p(x_{t+1}|x_{t})p(x_{t}|x_{0})dx_{t}, \tag{4}\] where \(p(x_{t+1}|x_{t})\) follows Eq. (2). Except for linear transition functions [14], there exists no analytical solution. Various approximations to the transition kernel have been proposed that can be roughly divided into two groups: (a) _Monte Carlo_ (MC) based approaches [17, 18] and (b) deterministic approximations based on _Assumed Densities_ (AD) [19]. While MC based approaches can, in the limit of infinitely many samples, approximate arbitrarily complex distributions, they are often slow in practice, and their convergence is difficult to assess. In contrast, deterministic approaches often build on the assumption that the \(t\)-step transition kernel can be approximated by a Gaussian distribution. In the context of machine learning, AD approaches have been recently used in various applications such as deterministic variational inference [20] or traffic forecasting [21]. We follow the AD approach and approximate the \(t\)-step transition kernel from Eq. (4) as \[p(x_{t+1}|x_{0}) \approx\int p(x_{t+1}|x_{t})\mathcal{N}(x_{t}|m_{t}^{x},\Sigma_{t }^{x})dx_{t},\] \[\approx\mathcal{N}(x_{t+1}|m_{t+1}^{x},\Sigma_{t+1}^{x}). \tag{5}\] where the latent state \(x_{t}\) is recursively approximated as a Gaussian with mean \(m_{t}^{x}\in\mathbb{R}^{D_{x}}\) and covariance \(\Sigma_{t}^{x}\in\mathbb{R}^{D_{x}\times D_{x}}\). This simplifies the calculations for solving Eq. (5) to approximating the first two output moments. There exist generic approximation methods [22] as well as specialized algorithms for DSSMs [21]. In this work, we will build on the algorithm from [23] that approximates the first two output moments via moment propagation across neural net layers, similarly as [20, 24]. ### _Gaussian Filtering_ In filtering applications, we are interested in the distribution \(p(x_{t}|y_{1:t})\), where \(y_{1:t}=\{y_{1},\ldots,y_{t}\}\) denotes the past observations. For deep state-space models, the filtering distribution is not tractable, and we can approximate its distribution with a general Gaussian filter [14, 25] by repeating the subsequent two steps over all time points. Following concurrent literature [25], we refer to \(p(x_{t}|y_{1:t-1})\) as the prior and to \(p(x_{t},y_{t}|y_{1:t-1})\) as the joint prior. _Prediction:_ Approximate the prior \(p(x_{t}|y_{1:t-1})\) with \[p(x_{t}|y_{1:t-1}) =\int p(x_{t}|x_{t-1})p(x_{t-1}|y_{1:t-1})dx_{t-1},\] \[\approx\int p(x_{t}|x_{t-1})\mathcal{N}(m_{t-1}^{x},\Sigma_{t-1}^ {x})dx_{t-1},\] \[\approx\mathcal{N}(m_{t|t-1}^{x},\Sigma_{t|t-1}^{x}), \tag{6}\] where \(p(x_{t+1}|x_{t})\) refers to the transition model defined in Eq. (2). We arrive at Eq. (6) by multiple rounds of moment matching. First, we approximate the filtering distribution as a normal distribution, and then we approximate the one-step transition kernel as another normal. Here, the index \(t|t^{\prime}\) explicitly denotes prior moments, i.e., the moments at time step \(t\) conditioned on the observations up to time step \(t^{\prime}\). If \(t=t^{\prime}\), we omit the double index. _Update:_ Approximate the joint prior \(p(x_{t},y_{t}|y_{1:t-1})\) \[p(x_{t},y_{t}|y_{1:t-1}) =p(y_{t}|x_{t})p(x_{t}|y_{1:t-1})\] \[\approx p(y_{t}|x_{t})\mathcal{N}(m_{t|t-1}^{x},\Sigma_{t|t-1}^{x})\] \[\approx\mathcal{N}\left(\begin{bmatrix}m_{t|t-1}^{x}\\ m_{t|t-1}^{x}\end{bmatrix},\begin{bmatrix}\Sigma_{t|t-1}^{x}&\Sigma_{t|t-1}^{ xy}\\ \Sigma_{t|t-1}^{yx}&\Sigma_{t|t-1}^{yx}\end{bmatrix}\right), \tag{7}\] where \(\Sigma_{t|t-1}^{xy}\in\mathbb{R}^{D_{x}\times D_{y}}\) is the cross-covariance between \(x_{t}\) and \(y_{t}\) and the density \(p(y_{t}|x_{t})\) is defined in Eq. (3). Building a Gaussian approximation to the joint prior (Eq. (7)) can be performed by similar moment matching schemes Fig. 1: We simulate a dynamical system \(p(x_{t+1}|x_{t},w_{t})\) with uncertainty over the weights \(w_{t}\sim p(w_{t})\). Our deterministic approximation scheme is shown in blue, where the solid line depicts the mean and the shaded area is the 95% confidence interval. In Panel (a), we compare our approach with Monte Carlo (orange) for multi-step ahead predictions. Our deterministic approximation accurately captures the first two moments of the Monte Carlo generated samples. In Panel (b), we move the dynamical system to a latent space and introduce an emission function \(p(y_{t}|x_{t})\). We compare our filtering distribution with the true latent state (orange). The true latent trajectory lies within the 95% confidence interval of the approximate filtering distribution. as discussed in Sec. 2.2. Afterwards, we can calculate the posterior \(p(x_{t}|y_{1:t})\) by conditioning on the observation \(y_{t}\) \[p(x_{t}|y_{1:t})\approx\mathcal{N}(m_{t}^{x},\Sigma_{t}^{x}), \tag{8}\] where Eq. (8) can be obtained from Eq. (7) by standard Gaussian conditioning (e.g. [14]). The resulting distribution has the below moments \[m_{t}^{x} =m_{t|t-1}^{x}+K_{t}(y_{t}-m_{t|t-1}^{y}), \tag{9}\] \[\Sigma_{t}^{x} =\Sigma_{t|t-1}^{x}-K_{t}\Sigma_{t|t-1}^{y}K_{t}^{\top}, \tag{10}\] where \(K_{t}\in\mathbb{R}^{D_{x}\times D_{y}}\) is the Kalman gain \[K_{t}=\Sigma_{t|t-1}^{xy}\left(\Sigma_{t|t-1}^{y}\right)^{-1}. \tag{11}\] Prior work in the context of DSSM and Gaussian Filters [16] encodes observations into an auxiliary latent space with an invertible neural net and then relies on a linear SSM formulation in order to be able to exactly solve the Gaussian Filter equations. To the best of our knowledge, there exists no prior work that applies Gaussian Filters on general DSSMs. ## 3 Probabilistic Deep State-Space Models We present our _Probabilistic Deep State-Space Model_ (ProDSSM) family in Sec. 3.1. Our model can account for epistemic uncertainty by attaching uncertainty to the weights of the neural network and for aleatoric uncertainty by building on the deep state-space formalism. By integrating both sources of uncertainties, our model family promises well-calibrated uncertainties. However, joint marginalization over the weights of the neural network and the latent dynamics presents a significant inference challenge. To this end, we present novel algorithms for assumed density approximations (Sec. 3.2) and for Gaussian filtering (Sec. 3.3) that jointly handle the latent states and the weights. Both algorithms are tailored towards ProDSSMs, allow for fast and sample-free inference with low compute, and lay the basis for our deterministic training objective (Sec. 4). ### _Uncertainty Weight Propagation_ Following [26], we consider two variants of propagating the weight uncertainty along a trajectory: the local and global approach. For the local approach, we resample the weights \(w_{t}\in\mathbb{R}^{D_{w}}\) at each time step (see Fig. 1(a)). Contrarily, for the global approach, we sample the weights only once at the initial time step and keep them fixed for all remaining time steps (see Fig. 1(b)). Assuming Gaussian additive noise, the transition and emission model of ProDSSMs are defined as follows \[x_{0} \sim p(x_{0}), \tag{12}\] \[w_{0} \sim p(w_{0}|\phi),\] (13) \[x_{t+1} \sim\mathcal{N}\left(x_{t+1}|f(x_{t},w_{t}),\text{diag}(l(x_{t}, w_{t}))\right),\] (14) \[w_{t+1} \sim\begin{cases}p(w_{t+1}|\phi),&\text{if Local}\\ \delta(w_{t+1}-w_{0}),&\text{if Global}\end{cases}\] (15) \[y_{t} \sim\mathcal{N}\left(y_{t}|g(x_{t}),\text{diag}\left(r\right) \right), \tag{16}\] where \(f(x_{t},w_{t}):\mathbb{R}^{D_{x}\times D_{w}}\to\mathbb{R}^{D_{x}}\) models the transition mean, \(l(x_{t},w_{t}):\mathbb{R}^{D_{x}\times D_{w}}\to\mathbb{R}^{D_{x}}_{+}\) the transition variance, \(g(x_{t}):\mathbb{R}^{D_{x}}\to\mathbb{R}^{D_{y}}\) the mean emission, and \(r\in\mathbb{R}^{D_{y}}_{+}\) the emission variance. We further model the weight distribution \(p(w_{t}|\phi)\) as a Gaussian distribution \[p(w_{t}|\phi)=\mathcal{N}(w_{t}|m_{t}^{w},\text{diag}(\Sigma_{t}^{w})), \tag{17}\] with mean \(m_{t}^{w}\in\mathbb{R}^{D_{w}}\) and diagonal covariance \(\Sigma_{t}^{w}\in\mathbb{R}^{D_{w}}_{+}\). Both together define the hyperparameters \(\phi=\{m_{t}^{w},\Sigma_{t}^{w}\}_{t=0}^{T}\) of our model, where \(T\) is the horizon. In order to avoid cluttered notation, we introduce the augmented state \(z_{t}=[x_{t},w_{t}]\) that is a concatenation of the latent state \(x_{t}\) and weight \(w_{t}\), with dimensionality \(D_{z}=D_{x}+D_{w}\). The augmented state \(z_{t}\) follows the transition density \(\mathcal{N}\left(z_{t+1}|F(z_{t}),\text{diag}(L(z_{t}))\right)\), where the mean function \(F(z_{t}):\mathbb{R}^{D_{z}}\to\mathbb{R}^{D_{z}}\) and the covariance function \(L(z_{t}):\mathbb{R}^{D_{x}}\to\mathbb{R}^{D_{z}}_{+}\) are defined as \[(F(z_{t}),L(z_{t}))=\begin{cases}\left(\begin{bmatrix}f(x_{t},w_{t})\\ m_{t+1}^{w}\end{bmatrix},\begin{bmatrix}l(x_{t},w_{t})\\ \Sigma_{t+1}^{w}\end{bmatrix}\right)&\text{if Local}\\ \left(\begin{bmatrix}f(x_{t},w_{t})\\ w_{t}\end{bmatrix},\begin{bmatrix}l(x_{t},w_{t})\\ 0\end{bmatrix}\right)&\text{if Global}.\end{cases} \tag{18}\] In the following, we extend the moment matching algorithm from [23] towards ProDSSMs and Gaussian filters. Our algorithmic advances are general and can be combined with both weight uncertainties propagation schemes. ### _Assumed Density Approximation_ In this section, we present our main contribution, which is a novel approximation to the \(t\)-step transition kernel \(p(z_{t+1}|z_{0})\) for ProDSSMs. Our approximation takes an assumed density approach and propagates moments along time direction and across neural network layers, similarly, as in [23]. Prior work either deals with non-recurrent neural network architectures [20] or deterministic weights [23], while our new model family, ProDSSM, requires both. In contrast to prior work, we need to account for the correlation between weights and states. We follow the general assumed density approach (see Sec. 2.3) on the augmented state \(z_{t}\). As a result, we obtain a Fig. 2: Given a dynamical system \(p(x_{t+1}|x_{t},w_{t})\) with uncertainty over the weights \(w_{t}\sim p(w_{t})\) we compare in Panel (a) and (b) two different sampling strategies. In Panel (a) we resample at each time step the weights, while in Panel (b) the weights are sampled only at the initial time step. We visualize Monte Carlo simulations as orange solid lines and our deterministic output approximation in blue, where the solid line depicts the mean and the shaded area the 95% confidence interval. Gaussian approximation \(p(z_{t+1}|z_{0})\approx\mathcal{N}(z_{t+1}|m^{z}_{t+1},\Sigma^{z}_{t+1})\) to the \(t\)-step transition kernel that approximates the joint density over the latent state \(x_{t}\) and the weights \(w_{t}\). The mean and the covariance have the structure \[m^{z}_{t}=\begin{bmatrix}m^{x}_{t}\\ m^{w}_{t}\end{bmatrix}, \Sigma^{z}_{t}=\begin{bmatrix}\Sigma^{x}_{t}&\Sigma^{x}_{t}\\ \Sigma^{wx}_{t}&\Sigma^{w}_{t}\end{bmatrix} \tag{19}\] where \(\Sigma^{x}_{t}\in\mathbb{R}^{D_{x}\times D_{x}}\) is the covariance of \(x_{t}\) and \(\Sigma^{xw}_{t}\in\mathbb{R}^{D_{x}\times D_{w}}\) is the cross-covariance between \(x_{t}\) and \(w_{t}\). For a standard DSSM architecture, the number of weights exceeds the number of latent dimensions. Since the mean and the covariance over the weights are not updated over time, the computational burden of computing \(\Sigma^{z}_{t}\) is dominated by the computation of the cross-covariance \(\Sigma^{xw}_{t}\). This covariance becomes zero for the local approach due to the resampling step at each time point. Consequently, the local approach exhibits reduced runtime and memory complexity compared to the global approach. In the following, we will detail how the remaining terms can be efficiently computed by propagating moments through the layers of a neural network. We start by applying the law of unconscious statistician, which tells us that the moments of the augmented state at time step \(t+1\) are available as a function of prior moments at time step \(t\)[23] \[m^{z}_{t+1}=\mathbb{E}[F(z_{t})], \Sigma^{z}_{t+1}=\text{Cov}[F(z_{t})]+\text{diag}(\mathbb{E}[L(z _{t})]). \tag{20}\] Now, we are left with calculating the first two output moments of the augmented mean \(F(z_{t})\) and covariance update \(L(z_{t})\). In the following, we discuss the approximation of the output moments for the augmented \(F(z_{t})\) and omit the discussion on the augmented covariance update \(L(z_{t})\) as its moments can be approximated similarly. Typically, neural networks are a composition of \(L\) simple functions (layers) that allows us to write the output as \(F(z_{t})=U^{L}(\ldots U^{1}(z^{0}_{t})\ldots)\), where \(z^{l}_{t}\in\mathbb{R}^{D^{l}_{z}}\) is the augmented state at layer \(l\) at time point \(t\). We denote the input as \(z^{0}_{t}=z_{t}\). The function \(U^{l}(z^{l-1}_{t}):\mathbb{R}^{D^{l-1}_{z}}_{x}\rightarrow\mathbb{R}^{D^{l}_{ z}}\) at the \(l\)-th layer receives the augmented state \(z^{l-1}_{t}\) from the previous layer and calculates the output \(z^{l}_{t}\) as \[U^{l}(z^{l-1}_{t})=\begin{bmatrix}x^{l}_{t}\\ w^{l}_{t}\end{bmatrix}=\begin{bmatrix}u^{l}(x^{l-1}_{t},w^{l-1}_{t})\\ w^{l-1}_{t}\end{bmatrix}, \tag{21}\] where \(x^{l}_{t}\in\mathbb{R}^{D^{l}_{x}}\) is the state at layer \(l\) at time point \(t\) and \(u^{l}(x^{l-1}_{t},w^{l-1}_{t}):\mathbb{R}^{D^{l-1}_{x}}_{x}\times\mathbb{R}^{D _{w}}\rightarrow\mathbb{R}^{D^{l}_{x}}\) is the function that updates the state. The weights \(w^{l}_{t}\in\mathbb{R}^{D_{w}}\) are not altered in the intermediate layers and the last layer returns the weight for the global approach or its mean \(m^{w}_{t}\) for the local approach. We approximate the output distribution of each layer recursively as \[p(z^{l}_{t})=p(U^{l}(z^{l-1}_{t}))\approx\mathcal{N}(z^{l}_{t}|m^{l}_{t}, \Sigma^{l}_{t}), \tag{22}\] where \(m^{l}_{t}\in\mathbb{R}^{D^{l}_{x}}\) and \(\Sigma^{l}_{t}\in\mathbb{R}^{D^{l}_{x}\times D^{l}_{z}}\) are the mean and covariance of \(z^{l}_{t}\). We refer to calculating \(m^{l}_{t}\) and \(\Sigma^{l}_{t}\) for each layer as layerwise moment propagation [23]. In the remainder of this subsection, we will present the output moments for the linear layer and ReLU activation function for the global as well as local approach. #### 3.2.1 Output Moments of the Linear Layer A linear layer applies an affine transformation \[U(z^{l}_{t})=\begin{bmatrix}A^{l}_{t}x^{l}_{t}+b^{l}_{t}\\ w^{l}_{t}\end{bmatrix}, \tag{23}\] where the transformation matrix \(A^{l}_{t}\in\mathbb{R}^{D^{l+1}_{x}\times D^{l}_{x}}\) and bias \(b^{l}_{t}\in\mathbb{R}^{D^{l+1}_{x}}\) are both part of weights \((A^{l}_{t},b^{l}_{t})\in w^{l}_{t}\). We note that the set of all transformation matrices and biases \(\{(A^{l}_{t},b^{l}_{t})\}_{l=1}^{L}\) define the weights \(w^{l}_{t}\). As the cross-covariance matrix \(\Sigma^{l,xw}_{t}\) is non-zero for global weights, the transformation matrix \(A^{l}_{t}\), bias \(b^{l}_{t}\), and state \(x^{l}_{t}\) are assumed to be jointly normally distributed. The mean and the covariance of the weights \(w_{t}\) are equal to the input moments due to the identity function. The remaining output moments of the affine transformation can be calculated as \[m^{l+1,x}_{t} =\mathbb{E}[A^{l}_{t}x^{l}_{t}]+\mathbb{E}[b^{l}_{t}], \tag{24}\] \[\Sigma^{l+1,x}_{t} =\text{Cov}[A^{l}_{t}x^{l}_{t},A^{l}_{t}x^{l}]+\text{Cov}[b^{l}_{t },A^{l}_{t}x^{l}_{t}]\] \[\quad+\text{Cov}[A^{l}_{t}x^{l}_{t},b^{l}_{t}]+\text{Cov}[b^{l}_{t },b^{l}_{t}],\] (25) \[\Sigma^{l+1,xw}_{t} =\text{Cov}[A^{l}_{t}x^{l}_{t},w^{l}]+\text{Cov}[b^{l}_{t},w^{l}_ {t}], \tag{26}\] which is a direct result of the linearity of the \(\text{Cov}[\bullet,\bullet]\) operator. In order to compute the above moments, we need to calculate the moments of a product of correlated normal variables, \(\mathbb{E}[A^{l}_{t}x^{l}_{t}],\text{Cov}[A^{l}_{t}x^{l}_{t},A^{l}_{t}x^{l}_{t}]\), and \(\text{Cov}[A^{l}_{t}x^{l}_{t},w^{l}]\). Surprisingly, these computations can be performed in closed form for both local and global weights provided that \(x^{l}_{t}\) and \(w^{l}_{t}\) follow a normal distribution. We provide a detailed derivation and the final results in App. A. For the case of local weights, the cross-covariance matrix \(\Sigma^{l,xw}_{t}\) becomes zero, i.e., weights and states are uncorrelated. In addition, the computation of the remaining terms simplifies significantly (see also App. A), and, as a result, we can recover the results from [20]. #### 3.2.2 Output Moments of the ReLU Activation The ReLU activation function applies element-wise the max-operator to the latent states while the weights stay unaffected \[U(z^{l}_{t})=\begin{bmatrix}\text{max}(0,x^{l}_{t})\\ w^{l}_{t}\end{bmatrix}. \tag{27}\] Mean \(m^{l+1,x}_{t}\) and covariance \(\Sigma^{l+1,x}_{t}\) of the state \(x^{l+1}_{t}\) are available in related literature [20]. Mean \(m^{l+1,w}_{t}\) and covariance \(\Sigma^{l+1,w}_{t}\) of the state \(w^{l+1}_{t}\) are equal to the input moments, \(m^{l,w}_{t}\) and \(\Sigma^{l,w}_{t}\). For the case of global weights, it remains open to calculate the cross-covariance \(\Sigma^{l+1,xw}_{t}\). Using Stein's lemma [27], we can calculate the cross-covariance after the ReLU activation as \[\Sigma^{l+1,xw}_{t}=\mathbb{E}[\nabla_{x^{l}_{t}}\text{max}(0,x^{l}_{t})] \Sigma^{l,xw}_{t}, \tag{28}\] where \(\mathbb{E}[\nabla_{x^{l}_{t}}\text{max}(0,x^{l}_{t})]\) is the expected Jacobian of the ReLU activation. The expected Jacobian is equal to the expectation of the Heaviside function, which can be closely approximated [20]. ### _Gaussian Filtering_ Our approximation to the filtering distribution, \(p(z_{t}|y_{1:t})\), follows the Gaussian filter (see Sec. 2.3). In contrast to prior work, we extend the filtering step to the augmented state consisting of the latent dynamics and the weights. In standard architectures, the number of latent states is small compared to the number of weights, which makes filtering in our new scenario more demanding. We address this challenge by applying our deterministic moment matching scheme that propagates moments across neural network layers. Additionally, we combine it with our previously derived approximation to the \(t\)-step transition kernel \(p(z_{t+1}|z_{0})\) from Sec. 3.2. We also verify empirically in Sec. 6.2 that standard numerical integration schemes are not well suited for filtering tasks of this type. The Gaussian filter alternates between the prediction and the update step. In the following, we explain in more detail how our deterministic moment matching scheme can be integrated into both steps. For the prediction step, Eq. (6), we can reuse the assumed density approach that we just derived in order to compute a Gaussian approximation to the predictive distribution \(p(z_{t}|y_{1:t-1})\). For the update step, we need to first find a Gaussian approximation to the joint distribution of the augmented state \(z_{t}\) and observation \(y_{t}\) conditioned on \(y_{1:t-1}\) (see also Eq. (7)) \[p(z_{t},y_{t}|y_{1:t-1})\approx\mathcal{N}\left(\begin{bmatrix}m^{z}_{t|t-1}\\ m^{y}_{t|t-1}\end{bmatrix},\begin{bmatrix}\Sigma^{z}_{t|t-1}&\Sigma^{zy}_{t|t-1 }\\ \Sigma^{yz}_{t|t-1}&\Sigma^{y}_{t|t-1}\end{bmatrix}\right). \tag{29}\] The mean and the covariance of the latent state \(z_{t}\) are known from the prediction step, while their equivalents of the emission \(y_{t}\) are available as \[m^{y}_{t|t-1}=\mathbb{E}[g(x_{t})],\quad\Sigma^{y}_{t|t-1}=\text{Cov}[g(x_{t}) ]+\text{diag}(r), \tag{30}\] with \(x_{t}\sim\mathcal{N}(m^{x}_{t|t-1},\Sigma^{x}_{t|t-1})\). These moments can be approximated with layerwise moment propagation, as described in the previous section. Finally, we facilitate the computation of the cross-covariance \(\Sigma^{yz}_{t|t-1}\) be using Stein's lemma [27] \[\Sigma^{yz}_{t|t-1}=\text{Cov}[g(x_{t}),z_{t}]=\mathbb{E}[\nabla_{x_{t}}g(x_{ t})]\Sigma^{xz}_{t|t-1}. \tag{31}\] where the expected Jacobian \(\mathbb{E}[\nabla_{x_{t}}g(x_{t})]\) of the mean emission function cannot be computed analytically. We follow the approximation of [23] that reduces the computation to estimate the expected Jacobian per layer. The latter is often available in closed form, or close approximations exist. Once we have calculated the joint distribution, we approximate the conditional as another normal distribution, \(p(z_{t}|y_{1:t})\approx\mathcal{N}(m^{z}_{t},\Sigma^{z}_{t})\), as shown in Eq. (11). For the global approach, the Kalman gain has the structure \(K_{t}=\Sigma^{zy}_{t}(\Sigma^{y}_{t})^{-1}\), and the updated covariance matrix \(\Sigma^{z}_{t}\) of augmented state \(z_{t}\) is dense. As a consequence, the weights \(w_{t}\) have a non-zero correlation after the update, and the overall variance gets reduced. For the local approach, only the distribution of the states \(x_{t}\) will be updated since the lower block of the gain matrix is zero. The weight distribution, as well as the cross-covariance between the states and weights, is hence not affected by the Kalman step. ## 4 Training and Predictions In this section, we derive efficient and sample-free training and testing routines for ProDSSMs. These routines build on the assumed density approximation and the Gaussian filter that we introduced in the previous section. ### _Training_ We train the ProDSSMs by fitting the hyperparameters \(\phi\) to a dataset \(\mathcal{D}\). The hyperparameters \(\phi\) describe the weight distribution. For the sake of brevity, we introduce the shorthand notation \(p(w_{0:T}|\phi)=p(w|\phi)\) to refer to the weights at all time steps with arbitrary horizon \(T\). We propose to train the ProDSSM on a Type-II _Maximum A Posteriori_ (MAP) objective (see [28] Chap. 5.6) \[\underset{\phi}{\text{argmax}}\log\int p(\mathcal{D}|w)p(w|\phi)dw+\log p( \phi). \tag{32}\] This objective is also termed as predictive variational Bayesian inference by [29] as it directly minimizes the Kullback-Leibler divergence between the true data generating distribution and the predictive distribution, which we aim to learn. Compared to other learning objectives, Eq. (32) provides better predictive performance, is more robust to model misspecification, and provides a beneficial implicit regularization effect for over-parameterized models. We refer to [29, 30, 31] that studies this learning objective for probabilistic neural nets in more detail from a theoretical as well as an empirical point of view. In our work, we show that the typically hard to evaluate likelihood \(p(\mathcal{D}|\phi)=\int p(D|w)p(w|\phi)dw\) can be closely approximated with deterministic moment matching routines. The exact form of the likelihood hereby depends on the task at hand, and we specify in our experiments how the likelihood can be closely approximated for regression problems in Sec. 6.1 and for dynamical system modeling in Sec. 6.3. We are now left with defining the hyper-prior \(p(\phi)\). Remember, \(\phi\) defines the weight distribution that is defined by its two first moments \(m^{w}=m^{w}_{0:T}\) and \(\Sigma^{w}=\Sigma^{w}_{0:T}\). In order to arrive at an analytical objective, we model each entry in \(p(\phi)\) independently. We define the hyper-prior of the \(i\)-th entry of the mean as a standard Normal \[\log p(m^{w}_{i}) =\log\mathcal{N}(m^{w}_{i}|0,I)\] \[=-\frac{1}{2}(m^{w}_{i})^{2}+\text{const.} \tag{33}\] and, assuming that the covariance is diagonal, chose the Gamma distribution for the \((i,i)\)-th covariance entry \[\log p(\Sigma^{w}_{ii}) =\log\text{Ga}(\Sigma^{w}_{ii}|\alpha=1.5,\beta=0.5)\] \[=\frac{1}{2}\log\Sigma^{w}_{ii}-\frac{1}{2}\Sigma^{w}_{ii}+\text{ const.}, \tag{34}\] where \(\alpha\) is the shape parameter and \(\beta\) is the rate parameter. We insert the above hyper-prior of the mean and covariance into \(\log p(\phi)\) and arrive at \[\log p(\phi) =\log p(m^{w})+\log p(\Sigma^{w})\] \[=\frac{1}{2}\sum_{i=1}^{D_{w}}\log\Sigma^{w}_{ii}-(m^{w}_{i})^{2} -\Sigma^{w}_{ii}+\text{const.}, \tag{35}\] which leads to a total of \(2D_{w}\) hyperparameters, i.e., one for the mean and one for the variance of each weight. In contrast, the classical Bayesian formalism keeps the prior \(p(w|\phi)\) constant during learning and the posterior \(p(w|\mathcal{D})\) is the quantity of interest. As an analytical solution to the posterior is intractable, either _Markov Chain Monte Carlo_ (MCMC) [32] or _Variational Inference_ (VI) [33] is used. It is interesting to note that the only difference between our formulation and the objective in VI, with a suitable prior choice, is the position of the logarithm in the likelihood \(p(\mathcal{D}|\phi)\). Please see App. B for more details. However, we are not aware of any prior work that applies VI in the context of ProDSSMs. Closest to our work is most likely [2] that approximates the posterior over the weights for fully observed stochastic dynamical systems, i.e., without latent states. ### _Predictive Distribution_ During test time, we are interested in the predictive distribution \(p(y_{t}|y_{-H:0})\) at time step \(t\) conditioned on the observations \(y_{-H:0}=\{y_{-H}\ldots,y_{0}\}\) with conditioning horizon \(H\in\mathbb{N}_{+}\). The predictive distribution is computed as \[p(y_{t}|y_{-H:0}) =\int p(y_{t}|z_{t})p(z_{t}|z_{0})p(z_{0}|y_{-H:0})dz_{0},z_{t},\] \[=\int p(y_{t}|z_{t})p(z_{t}|y_{-H:0})dz_{t}. \tag{36}\] Above, \(p(z_{0}|y_{-H:0})\) is the filtering distribution, \(p(z_{t}|z_{0})\) is the \(t\)-step transition kernel and \(p(z_{t}|y_{-H:0})\) the \(t\)-step marginal. Prior work on general deep SSMs [5, 15, 34] relies on auxiliary networks in order to approximate the filtering distribution and then uses MC integration in order to compute predictive distribution. Contrarily, we replace the need for auxiliary networks and MC integration with our deterministic moment matching scheme. The computation of the predictive distribution is performed by a series of Gaussian approximations: \[p(y_{t}|y_{-H:0}) \approx\int p(y_{t}|z_{t})p(z_{t}|z_{0})\mathcal{N}(m_{0}^{z}, \Sigma_{0}^{z})dz_{0},z_{t}\] \[\approx\int p(y_{t}|z_{t})\mathcal{N}(m_{t|0}^{z},\Sigma_{t|0}^{z })dz_{t}\] \[\approx\mathcal{N}(m_{t|0}^{y},\Sigma_{t|0}^{y}), \tag{37}\] where the density \(\mathcal{N}(m_{0}^{z},\Sigma_{0}^{z})\) approximates the filtering distribution. Its computation is described in Sec. 3.3. We obtain the density \(\mathcal{N}(m_{t|0}^{z},\Sigma_{t|0}^{z})\) as an approximation to the \(t\)-step marginal kernel \(p(z_{t}|y_{-H:0})\) in Eq. (36) by propagating the augmented latent state forward in time as described in Sec. 3.2. Finally, we approximate the predictive distribution \(p(y_{t}|y_{-H:0})\) with the density \(\mathcal{N}(m_{t|0}^{y},\Sigma_{t|0}^{y})\) in Eq. (37), which can be done by another round of moment matching as also outlined in Eq. (30). We present pseudo-code for approximating the predictive distribution in Alg. 1 that relies on Alg. 2 to approximate the filtering distribution \(p(z_{0}|y_{-H:0})\approx\mathcal{N}(z_{0}|m_{0}^{z},\Sigma_{0}^{z})\) Both algorithms explicitly do a resampling step for the local weight setting. In practice, it is not necessary, and we just omit the calculation. ``` Inputs:\(f(x_{t},w_{t})\)\(\triangleright\) Mean update \(l(x_{t},w_{t})\)\(\triangleright\) Covariance update \(g(x_{t})\)\(\triangleright\) Mean emission \(p(z_{t},h)\)\(\triangleright\) Initial distribution \(p-H:0\)\(\triangleright\) Observations Output:\(p(yr|y_{-H:0})\approx\mathcal{N}(yr|m_{T|0}^{y},\Sigma_{T|0}^{y})\)\(\triangleright\) Predictive Distribution \(m_{0}^{z},\Sigma_{0}^{z}\leftarrow\text{Detit}(f,g,r,p(z_{t}-h),y_{-H:0})\)\(\triangleright\) Initialize step \(t\in\{0,\cdots T\}\)do if Local then\(m_{t}^{y},\Sigma_{t}^{y},\Sigma_{t}^{y},\Sigma_{t}^{w},\Sigma_{t|0}^{w}\gets m_{H }^{w},\Sigma_{-H}^{w},0,0\)\(\triangleright\) Resample endif\(m_{t+1}^{z_{t+1}}\leftarrow\mathbb{E}[F(z_{t})]\)\(\triangleright\) Eq. 20 \(\triangleright\) Mean emission \(p(z_{t+1}|y_{-H:0})\leftarrow\mathcal{N}(z_{t+1}|m_{t+1}^{z_{t+1}},\Sigma_{t+1 }^{z_{t+1}})\)\(\triangleright\) Eq. 20 endfor\(m_{t}^{y}\leftarrow\mathbb{E}[g(x_{t})]\)\(\triangleright\) Observations Output:\(p(z_{t+1}|y_{-H:0})\approx\mathcal{N}(yr|m_{T|0}^{y},\Sigma_{T|0}^{y})\)\(\triangleright\) Filtering Distribution \(p(z_{0}|y_{t+1})\leftarrow p(z_{0})\) for time step \(t\in\{0,\cdots,T-1\}\)do if Local then\(m_{t+1}^{y},\Sigma_{t}^{z},\Sigma_{t}^{w},\Sigma_{t|0}^{w}\gets m_{0}^{w}, \Sigma_{-H}^{w},0,0\)\(\triangleright\) Resample endif\(m_{t+1}^{z_{t+1}}\leftarrow\mathbb{E}[F(z_{t})]\)\(\triangleright\) Eq. 20 \(\triangleright\) Covariance emission \(\Sigma_{t+1}^{y}\leftarrow\mathbb{E}[F(z_{t})]\)\(\triangleright\) Covariance emission \(m_{t+1}^{y}\leftarrow\mathbb{E}[g(x_{t})]\)\(\triangleright\) Initial distribution \(p(z_{t+1}|y_{-H:0})\leftarrow\mathcal{N}(z_{t+1}|m_{t+1}^{z_{t+1}},\Sigma_{t+1 }^{z_{t+1}})\)\(\triangleright\) Eq. 30 endfor return\(\mathcal{N}(z_{T}|m_{T}^{z},\Sigma_{T}^{y})\) ``` **Algorithm 2** Deterministic Filtering (DetFilt) ## 5 Runtime We first analyze the theoretical runtime of our algorithm in Sec. 5.1 and then measure its wall clock time in Sec. 5.2. ### _Theoretical Runtime_ In our theoretical runtime analysis, we first investigate the runtime for simulating forwards in time and, secondly, the runtime for filtering applications. We further assume that we have a ProDSSM with maximal hidden layer width \(H\) and that the dimensions of \(D_{x}\) and \(D_{y}\) are less than or equal to \(H\). Independent of the weight modeling scheme, predicting the next observation \(x_{t+1}\) conditioned on the latent state \(x_{t}\) is done by propagating the state through a series of affine transformations and non-linear activities. The affine transformations scale polynomially with the hidden layer width, whereas the non-linearities are elementwise operations and can be neglected. Approximating the first two output moments (see. Eq. 30) by MC simulation requires propagating \(S\) particles, resulting thus in a total cost of \(\mathcal{O}(SH^{2})\). Our method approximates the \(S\rightarrow\infty\) limit. For global weights, the computational cost of our method is \(\mathcal{O}(H^{4}+D_{w}H^{2})\) where \(D_{w}\) is the number of weight parameters. The first term, \(\mathcal{O}(H^{4})\), is due to the computational cost of the covariance \(\text{Cov}[A_{t}^{l}x_{t}^{l},A_{t}^{l},x_{t}^{l}]\in\mathbb{R}^{H\times H}\) in Eq. (25), where the computation of each matrix entry scales with \(O(H^{2})\) due to the linearity of the covariance operator. The second term, \(\mathcal{O}(H^{2}D_{w})\), is due to the cross-covariance \(\text{Cov}[A_{t}^{l}x_{t}^{l},w_{t}^{l}]\in\mathbb{R}^{H\times W}\) in Eq. (26), where the computation of each entry scales with \(O(H)\), again due to the linearity of the covariance operator. For local weights, the weights and the states are independent. As a result, we can simplify the computation of the first term to \(\text{Cov}[A_{t}^{l}x_{t}^{l},A_{t}^{l},x_{t}^{l}]=\mathbb{E}[A_{t}^{l}]\text{ Cov}[x_{t}^{l},x_{t}^{l}]\mathbb{E}[A_{t}^{l}]^{\top}\) and the second term, \(\text{Cov}[A_{t}^{l}x_{t}^{l},w_{t}^{l}]\) becomes zero. This leads to a runtime reduction to \(O(H^{3})\). Our filtering algorithm necessitates \(\mathcal{O}(H^{3})\) computations to approximate the output moments of the emission independent of the weight modeling scheme. For global weights approximating the cross-covariance between the emissions and augmented latent state involves \(\mathcal{O}(H^{3}+H^{2}D_{w})\) computations. Forming the gain matrix involves \(\mathcal{O}(H^{3}+H^{2}D_{w})\) computations. The first term is caused by inverting the covariance matrix of the emissions, and the second term is caused by multiplying the inverse covariance matrix with the cross-covariance of the augmented latent state (see Eq. 11). Lastly, updating the moments of latent state (see Eq. 10) involves \(\mathcal{O}(H(H+D_{w})^{2})\) computations which is the most time-consuming step and dominates the total runtime. Similarly, the computational cost of our algorithm for local weights can be derived and has a total cost of \(\mathcal{O}(H^{3})\). ### _Measured Runtime_ We visualize in Fig. 3 the wallclock time of our method for approximating the mean and covariance of the observation \(y_{t+1}\) conditioned on the mean and covariance of the latent state \(x_{t}\) at the prior time step. Additionally, we visualize the runtime of the MC baseline with different sampling strategies as a function of the dimensionality \(D=D_{x}=D_{y}=H\). The early stops indicate when we run out of memory. We conduct the experiment on a CPU with 32GB memory. For \(S=D\) particles the MC baseline has the same theoretical runtime as our method for local weights. In practice, we observe our method for local weights to be faster than the MC baseline with \(S=D\) when we include the runtime of the weight sampling procedure. When we exclude the runtime of the weight sampling procedure, our method is faster for \(D>64\). Furthermore, our method for global weights is slower and runs out of memory earlier than all baselines. We leave optimizing the runtime of our method for global weights as a direction for future work. ## 6 Experiments Our paper provides an efficient and sample-free algorithm for learning unknown dynamics from data. By taking epistemic and aleatoric uncertainty into account, our model family, ProDSSM, can produce flexible and well-calibrated predictions over a wide range of scenarios. Core to our algorithm is a new moment matching scheme that can be applied for assumed density approximation (see Sec. 3.2) and for Gaussian filtering (see Sec. 3.3). In our experiments, we first analyze each of these algorithmic advances in isolation before putting everything together. For this, we first explore in Sec. 6.1 our assumed density approximation in the context of deep stochastic layers on eight UCI datasets. Then, we study our approximation to the Gaussian Filter in Sec. 6.2 on a non-linear filtering task. We connect both steps and benchmark our full method in Sec. 6.3 on two well-established dynamical modeling datasets. Finally, we summarize our empirical findings in Sec. 6.4. ### _Deep Stochastic Layers_ We first demonstrate the usefulness of our uncertainty propagation scheme as proposed in Sec. 3.2 on a regression task with inputs \(x\in\mathbb{R}^{D_{x}}\) and outputs \(y\in\mathbb{R}\). Here, we interpret the input as the latent state at the initial time step, \(x=x_{0}\). Conditioned on the initial latent state, we can calculate the predictive distribution \(p(y|x,\phi)\) as \[p(y|x,\phi)=\int p(y|x_{T})p(x_{T}|x,w_{0})p(w_{0}|\phi)dw_{0},x_{T}. \tag{38}\] The transition kernel \(p(x_{T}|x,w_{0})\) is defined by the augmented dynamics, as discussed in Sec. 3.2, and the emission density \(p(y|x_{T})\) follows Eq. 16. The mapping from \(x\) to \(x_{T}\) can be interpreted as a deep stochastic layer. As the latent state is given, the filtering step of our algorithm becomes futile. The dataset \(\mathcal{D}=\{(x^{n},y^{n})\}_{n=1}^{N}\) consists of \(N\) input-output tuples. The likelihood term \(p(\mathcal{D}|\phi)\) in Eq. (32) is given by \[p(\mathcal{D}|\phi)=\prod_{n=1}^{N}p(y^{n}|x^{n},\phi), \tag{39}\] where \(p(y^{n}|x^{n},\phi)\) follows Eq. (38). Similar models have also been developed in the context of continuous depth layers Fig. 3: We visualize the runtime of approximating mean \(m_{t+1}^{g}\) and covariance \(\Sigma_{t+1}^{y}\) of the observation \(y_{t+1}\) conditioned on the augmented state \(z_{t}\) at the prior time step with mean \(m_{t}^{z}\) and covariance \(\Sigma_{t}^{z}\). We vary on the x-axis the dimensionality \(D\). We use the same dimensionality for the observation \(y_{t}\) and latent state \(x_{t}\), i.e., \(D_{x}=D_{y}=D\). We use randomly initialized transition and emission functions with one hidden layer of width \(H=D\). The solid/dashed line represents the runtime of our deterministic approximation for local/global weights. The colored lines represent the runtime of the MC approximation with varying number of particles \(S\) as a function of dimensionality \(D\). In the left panel, we take into account the runtime of the weight sampling procedure for the MC baseline. In the right panel, we ignore the runtime of the weight sampling procedure. for neural ordinary differential equations (ODEs) [35] and stochastic differential equations (SDEs) [23, 36]. #### 6.1.1 Datasets We use eight UCI datasets with varying input dimensionality and size that can be downloaded from here. These datasets are used in prior art for benchmarking stochastic models such as probabilistic neural networks [37, 38] or Gaussian processes [39, 40]. We follow the experimental protocol as defined in [37]. In short, we use 20 random splits. For each split, we use \(90\%\) of the data for training and \(10\%\) for testing. We follow [23] for the design of the network architecture. The mean/covariance functions are neural nets with one hidden layer and 40/10 hidden units. The observation function is a single linear layer. Similarly as [23], we add a residual connection to the transition density, i.e., we use \(x_{t}+f(x_{t},w_{t})\) instead of \(f(x_{t},w_{t})\) in Eq. 14. #### 6.1.2 Baselines We compare different variants of our method ProDSSM and provide benchmarks against commonly used regression baselines. _i) ProDSSM variants:_ * Det. vs MC: We may approximate Eq. 38 either via _Monte Carlo_ (MC) simulation or by using our _Deterministic_ (Det.) method that we introduced in Sec. 3.2. We vary the number of particles, i.e., MC simulations, during training and test time. * Local vs. Global: In the local approach, the weights are resampled at each time step. Contrarily, the weights are sampled once at the initial time step and then kept constant throughout the remaining time steps in the global approach (see. Eq. (15)). _ii) DSSM [23]:_ This method is equal to our contribution when removing the weight uncertainty. _iii) Dropout [38]:_ This method uses a single feed-forward neural net to predict the output, i.e., it does not rely on continuous depth layers. Stochasticity is introduced by applying a Bernoulli distributed masking scheme in all affine layers. _iv) DVI [20]:_ This method proposes a deterministic inference scheme for Bayesian neural nets. Uncertainty is introduced by allowing for weight uncertainty over the neural net weights. Similarly as Dropout, this method uses a feed-forward neural net. #### 6.1.3 Results We report the _Negative Log-Likelihood_ (NLL) in Tab. I, and the _Root Mean Squared Error_ (RMSE) in Tab. IV in App. C. First, we compare the local and the global weight approach (see Sec. 3.1) using our deterministic approximation scheme. For five datasets, the differences between both methods are less than one standard error. For the remaining three datasets, the global variant did not converge within the time limit of 72 hours1, and is therefore outperformed by its local alternative. Footnote 1: We use a time limit for the training runs in order to limit our carbon footprint. This is motivated by the high computational cost of the deterministic approximation for the global weight setting. The time limit is a multiple of 24 and at least \(10\times\) the runtime of the deterministic approximation for the local setting. For training, we use a NVIDIA Tesla V100 with 32GB. Next, we compare the local and global weight approach when using an MC approximation and varying the number of particles. We observe lower NLL and RMSE as we increase the number of particles. In order to achieve good predictive performance, a high number of particles is required. The local variant is in five datasets, while the global variant is only in three datasets among the best-performing methods. We conjecture that the difference in performance can be attributed to the higher gradient variance for the global variant, which makes training more difficult. Using 128 MC samples and focusing on the local weight variant, MC sampling and our deterministic approximation perform en par except for the Naval dataset2. However, it is important to note that our deterministic approximation is computationally more efficient, and restricting the MC approach to the same computational budget would result in approximately 12 samples, which is not sufficient for good performance. Footnote 2: There is little uncertainty in the Naval dataset, and the better predictive performance of the MC variant can most likely be attributed to numerical issues. Lastly, we compare our method against established baselines. ProDSSM with local weights is for five out of eight datasets among the best-performing methods in terms of NLL, thereby outperforming its competitors. ### _Filtering_ Next, we benchmark our new moment matching propagation scheme from Sec. 3.3 for Gaussian filters on a standard filtering task. #### 6.2.1 Datasets In order to ensure that this experiment only evaluates the performance with respect to the filtering task, we use a two-step approach for creating data. In the first step, we create probabilistic ground-truth models; in the second step, we apply our newly created models in order to generate data for the filtering task. _Step 1:_ We first train DSSM and our two ProDSSM variants on the kink dataset, which describes a non-linear dynamical system with varying emission noise \(r=\{0.008,0.08,0.8\}\). See also Sec. 6.3 for more details. After this step, we obtained nine trained models with three different emission noise levels and with three different model variants. _Step 2:_ For each trained model, we construct a new dataset by simulating trajectories with a (Pro-)DSSM. Each trajectory has length \(T=120\), and we simulate 10 trajectories per dataset. We evaluate the performance of different filtering methods on the NLL and RMSE of observing the true latent state on these nine newly created datasets. The transition and emission functions in this experiment are thereby fixed to the ground truth dynamics from Step 1. #### 6.2.2 Baselines We benchmark our filtering algorithm against two established baselines. _i) Unscented Kalman Filter_ (UKF). This filter and our method share similarities as they are both instances of the Gaussian filter (see. Sec. 2.3). In contrast to our moment propagation approach, the intractable integrals are solved by using the unscented transform that is a numerical integration scheme [14]. _ii) Neural Filter_ (NF). In DSSM literature [15], it is common practice to train a neural net based filter or smoother jointly with the generative model by maximizing ELBO. During training, we fix the transition and emission function to the ground truth from Step 1. We follow [5] for network design and use a recurrent neural net architecture that produces a sample \(x_{t}\) at each time step \(t\) as a function of the prior latent state \(x_{t-1}\) and the observation \(y_{t}\). #### 6.2.3 Results We report results in Tab. II. We observe for all methods that with increasing emission noise, it becomes more difficult to infer the latent distribution from the observations. For deterministic weights, our method performs on par with UKF, while NF is outperformed for medium and higher noise levels. When switching to probabilistic weight modeling methods, the UKF has higher RMSE and NLL compared to our deterministic method for middle and high emission noise. Increasing the emission noise makes learning the dynamics more challenging and, as a result, leads to higher weight uncertainties. We can also observe this behavior empirically: For low/middle/high observation noise, the average variance of the weights is \(0.10/0.29/0.60\) for local weights and \(0.05/0.26/0.49\) for global weights. As a consequence, the integration steps in the Gaussian filter become more difficult for increasing noise levels, and the performance of the UKF method deteriorates. In contrast, our newly introduced moment matching scheme performs well across the complete range of noise levels. ### _Dynamical System Modeling_ Our proposed model family, ProDSSM, is a natural choice for dynamical system modeling, where we aim to learn the underlying dynamics from a dataset \(\mathcal{D}=\{Y^{n}\}_{n=1}^{N}\) consisting of \(N\) trajectories. For simplicity, we assume that each trajectory \(Y^{n}=\{y_{t}^{n}\}_{t=1}^{T}\)is of length \(T\). Using the chain rule, the likelihood term \(p(\mathcal{D}|\phi)\) in Eq. (32) can be written as \[p(\mathcal{D}|\phi)=\prod_{n=1}^{N}\prod_{t=1}^{T-1}p(y_{t+1}^{n}|y_{1:t}^{n}, \phi), \tag{40}\] where we can approximate the predictive distribution \(p(y_{t+1}^{n}|y_{1:t}^{n},\phi)\) in a deterministic way as discussed in Sec. 4.2. #### 6.3.1 Datasets We benchmark our method on two different datasets. The first dataset is a well-established learning task with synthetic non-linear dynamics, and the second dataset is a challenging real-world dataset. _i) Kink_[10]: We construct three datasets with varying degrees of difficulty by varying the emission noise level. The transition density is given by \(\mathcal{N}(x_{t+1}|f_{kink}(x_{t}),0.05^{2})\) where \(f_{kink}(x_{t})=0.8+(x_{t}+0.2)[1-5/(1+e^{-2x_{t}})]\) is the kink function. The emission density is defined as \(\mathcal{N}(y_{t}|x_{t},r)\), where we vary \(r\) between \(\{0.008,0.08,0.8\}\). We simulate for each value of \(r\) 10 trajectories of length \(T=120\). We follow the experimental protocol as defined in [12] and perform 10 training runs where each run uses data \begin{table} \begin{tabular}{l l r r r r r r r r} \hline \hline & & Boston & Energy & Concrete & Wine Red & Kin8nm & Power & Naval & Protein \\ \hline Dropout & & 2.46(0.06) & 1.99(0.02) & 3.04(0.02) & 0.93(0.01) & -0.95(0.01) & **2.80(0.01)** & -3.80(0.01) & 2.89(0.00) \\ DVI & & 2.41(0.02) & 1.01(0.06) & 3.06(0.01) & **0.90(0.01)** & -1.13(0.00) & **2.80(0.00)** & **-6.29(0.04)** & 2.85(0.00) \\ DSSM & & 2.37(0.03) & 0.70(0.06) & **2.92(0.02)** & 0.93(0.02) & -1.22(0.00) & **2.80(0.01)** & -4.45(0.02) & **2.76(0.01)** \\ \hline ProDSSM: MC, Local & & & & & & & & & \\ Train: & 8 & Test: 32 & 2.42(0.03) & 0.47(0.03) & 3.02(0.02) & 0.96(0.01) & -1.25(0.00) & 2.85(0.01) & -5.88(0.09) & 2.86(0.01) \\ Train: & 8 & Test:128 & 2.41(0.02) & **0.44(0.03)** & 3.01(0.03) & 0.95(0.01) & -1.28(0.00) & 2.83(0.01) & -5.91(0.08) & 2.84(0.01) \\ Train: & 32 & Test: 32 & 2.38(0.03) & 0.47(0.06) & 3.06(0.03) & 0.95(0.01) & -1.27(0.01) & 2.82(0.01) & -6.08(0.07) & 2.81(0.01) \\ Train: & 32 & Test: 128 & 2.37(0.02) & **0.43(0.04)** & 2.99(0.01) & 0.93(0.01) & -1.29(0.01) & **2.79(0.01)** & -6.10(0.07) & **2.77(0.01)** \\ Train: & 128 & Test: 32 & 2.42(0.04) & **0.45(0.05)** & 3.09(0.04) & 0.96(0.01) & -1.26(0.01) & 2.83(0.01) & -6.15(0.07) & 2.83(0.01) \\ Train: & 128 & Test:128 & **2.36(0.03)** & **0.42(0.04)** & 3.00(0.03) & 0.93(0.01) & **-1.30(0.01)** & **2.79(0.01)** & -6.17(0.07) & **2.77(0.01)** \\ \hline ProDSSM: MC, Global & & & & & & & & & \\ Train: & 8 & Test: 32 & 2.49(0.02) & 0.56(0.03) & 3.08(0.02) & 0.96(0.01) & -1.22(0.01) & 2.85(0.01) & -6.16(0.05) & 2.89(0.01) \\ Train: & 8 & Test: 28 & 2.46(0.02) & 0.54(0.03) & 3.06(0.01) & 0.94(0.01) & -1.24(0.01) & 2.83(0.01) & -6.19(0.05) & 2.87(0.01) \\ Train: & 32 & Test: 32 & 2.50(0.06) & 0.52(0.06) & 3.08(0.02) & 0.96(0.01) & -1.22(0.01) & 2.84(0.01) & -6.18(0.07) & 2.81(0.01) \\ Train: & 32 & Test: 128 & 2.44(0.05) & 0.50(0.06) & 3.03(0.02) & 0.93(0.01) & -1.25(0.01) & 2.81(0.01) & -6.22(0.07) & **2.77(0.01)** \\ Train: & 128 & Test: 32 & 2.44(0.04) & 0.54(0.05) & 3.10(0.04) & 0.97(0.02) & -1.22(0.01) & 2.83(0.01) & **-6.28(0.05)** & 2.82(0.01) \\ Train: & 128 & Test: 28 & 2.41(0.04) & 0.50(0.05) & 3.03(0.02) & 0.93(0.01) & -1.25(0.01) & **2.80(0.01)** & **-6.30(0.04)** & **2.77(0.01)** \\ \hline ProDSSM: Det., Local & & **2.33(0.03)** & **0.43(0.04)** & 3.00(0.03) & 0.92(0.01) & **-1.30(0.00)** & **2.79(0.01)** & -5.52(0.03) & **2.76(0.01)** \\ ProDSSM: Det., Global & & **2.34(0.02)** & **0.44(0.03)** & 2.99(0.04) & 0.92(0.00) & -1.27(0.01)* & **2.79(0.01)** & -4.75(0.08)* & 2.82(0.01)* \\ \hline \hline \end{tabular} \end{table} TABLE II: NLL and MSE on a non-linear filtering dataset. We report average and standard error over 10 runs. \begin{table} \begin{tabular}{l l r r r r r} \hline \hline & & \(r=0.008\) & \(r=0.08\) & \(r=0.08\) \\ & MSE & NLL & MSE & NLL & MSE & NLL \\ \hline NF & **0.010(0.00)** & **-0.87(0.08)** & **0.08(0.01)** & 0.25(0.10) & 0.73(0.19) & 1.23(0.11) \\ UKF & **0.010(0.00)** & **-0.89(0.02)** & **0.07(0.00)** & **0.09(0.00)** & **0.04(0.06)** & **1.30(0.08) from a single simulated trajectory only. The mean function is realized with a neural net with one hidden layer and 50 hidden units, and the variance as a trainable constant. For MC based ProDSSM variants, we use 64 samples during training. The cost of our deterministic approximation for the local approach is \(\approx\)50 samples. We compare the performance of the different methods with respect to epistemic uncertainty, i.e., parameter uncertainty, by evaluating if the learned transition model \(p(x_{t+1}|x_{t})\) covers the ground-truth dynamics. In order to calculate NLL and MSE, we place 70 evaluation points on an equally spaced grid between the minimum and maximum latent state of the ground truth time series and approximate for each point \(x_{t}\) the mean \(\mathbb{E}[x_{t}]=\int f(x_{t},w_{t})p(w_{t})dw_{t}\) and variance \(\text{Var}[x_{t}]=\int(f(x_{t},w_{t})-\mathbb{E}[x_{t}])^{2}p(w_{t})dw_{t}\) using 256 Monte Carlo samples. This dataset is commonly used for benchmarking GP based dynamical models [10, 12]. To the best of our knowledge, it has not been used in the context of DSSMs prior to this work. _ii)__Mocap_ : We follow [6] for preprocessing and designing the experimental setup. The data is available here. It consists of 23 sequences from a single person. We use 16 sequences for training, 3 for validation, and 4 for testing. Each sequence consists of measurements from 50 different sensors. We follow [6] for designing the network architecture and add a residual connection to the transition density, i.e., we use \(x_{t}+f(x_{t},w_{t})\) instead of \(f(x_{t},w_{t})\) in Eq. 14. For MC based ProDSSM variants, we use 32 samples during training and 256 during testing. The cost of our deterministic approximation for the local approach is approximately 24 samples. For numerical comparison, we compute NLL and MSE on the test sequences. #### 6.3.2 Baselines We use the same ProDSSM variants as in our deep stochastic layer experiment (Sec. 6.1). Additionally, we compare against well-established baselines from GP and neural net based dynamical modeling literature. _i)__VCDT [10]_: This method relies on GPs to model a SSM. The distribution of the latent state is forward propagated via sampling. Training is performed using doubly stochastic variational inference jointly over the GP posterior and the latent states. _ii)__Laplace GP [12]_: A GP based dynamical model that applies stochastic variational inference for the Gaussian process posterior and the Laplace approximation over the latent states. _iii)__ODE2VAE [6]_: The dynamics are modeled as latent neural ordinary differential equations. Stochasticity is introduced by accounting for uncertainty over the weights. Contrary to our method, an additional neural net is used to approximate the latent distribution of the initial latent state, and the model does not account for transition noise. _iv)__E-PAC-Bayes-Hybrid [13]_: The dynamics is modeled as a neural stochastic differential equation and accounts for aleatoric and epistemic uncertainty. Marginalization over the latent states and the weights is performed using Monte Carlo sampling. This method focuses on integrating prior knowledge, either in the form of physics or by transfer learning across similar tasks, into the dynamics, hence the term _Hybrid_. For the kink dataset, we reimplement this method without using prior knowledge. #### 6.3.3 Results First, we analyze the results on the kink dataset. We visualize the learned transition model of our model in Fig. 4. The confidence intervals capture the true transition function well, and the epistemic uncertainty increases with increasing noise levels. We present the numerical results of this benchmark in Tab. III. For low (\(r=0.008\)) and middle emission noise (\(r=0.08\)), all of our ProDSSM variants achieve on par performance with existing GP based dynamical models and outperform ODE2VAE. For high emission noise (\(r=0.08\)), our ProDSSM variants perform significantly better than previous approaches. The MC variants achieve for low and middle noise levels the same performance as the deterministic variants. As the noise is low, there is little function uncertainty, and few MC samples are sufficient for accurate approximations of the moments. If the emission noise is high, the marginalization over the latent states and the weights becomes more demanding, and the MC variant is outperformed by its deterministic counterpart. Furthermore, we observe that for high observation noise, the local weight variant of our ProDSSM model achieves lower NLL than the global variant. We cannot report results for DSSM since this model does not account for epistemic uncertainty. On the Mocap dataset, our best-performing ProDSSM variant from the previous experiments, which is the local weight variant together with the deterministic inference algorithm, is able to outperform all baselines. This is despite the fact that E-PAC-Bayes-Hybrid uses an additional dataset from another motion-capture task. Compared to the kink dataset, the differences between the MC and deterministic ProDSSM variants become more prominent: the Mocap dataset is high dimensional, and hence more MC samples are needed for accurate approximations. The ProDSSM variant with global weights and the deterministic inference was not able to converge within the time limit. ### _Summary_ Our experiments have demonstrated that our model family, ProDSSM, performs favorably compared to state-of-the-art alternatives over a wide range of scenarios. Its benefits become especially pronounced when tackling complex Fig. 4: For increasing noise level \(r\), we observe increased epistemic uncertainty. We visualize the true mean function \(f(x_{t})\) as an orange solid line. The blue solid line is the expected value of the learned mean function, and the shaded area represents the 95% confidence interval. datasets characterized by high noise levels or a high number of output dimensions. First, we compare the local and global variants of our approach. In the local variant, we resample the weight at each time step, while, for the global variant, we keep the weights fixed for the complete trajectory. Independently of the chosen inference scheme, our experiments did not find a clear winner, provided that both variants converged. However, the local variant is mathematically more convenient as it decorrelates subsequent time steps. This property can be exploited for sample-free inference, where it results in a lower computational burden. Our empirical evidence confirms that this variant leads to more feasible solutions, whereas the global alternative is much slower and often did not converge in a reasonable amount of time. Focusing on the local approach, we can observe that our moment matching inference scheme outperforms its MC counterpart when using the same computational budget. Disregarding runtime constraints, the MC variant still fails to surpass the performance of its deterministic alternative, indicating that (i) the Gaussian assumption is appropriate and (ii) the approximation error of our propagation scheme is negligible. Despite the increased computational complexity of the global approach, we believe it warrants further exploration due to its ability to facilitate uncertainty decomposition [2], i.e., allowing for the separation of aleatoric and epistemic uncertainty. In contrast, the local approach does not support uncertainty decomposition, as both sources of uncertainty are intertwined at each time step. Additionally, the global approach could prove advantageous when transitioning from discrete to continuous dynamical systems, where achieving a parsimonious solution across different numerical solvers and step sizes is desirable. ## 7 Conclusion In this work, we present ProDSSMs, a general framework for modeling unknown dynamical systems that respect epistemic and aleatoric uncertainty. Inference for this model class is hard since we need to propagate the uncertainty over the neural network weights and of the latent states along a trajectory. We address this challenge by introducing a novel inference scheme that exploits the internal structure of ProDSSMs and enjoys sample-free inference. Our algorithm is general and can be applied to a variety of tasks and account for different weight sampling strategies. In our experiments, we observe that our deterministic algorithm with local weights achieves better predictive performance in terms of lower NLL and MSE than its sampling-based counterpart under a fixed computational budget. Compared to state-of-the-art alternatives, ProDSSM performs favorably over a wide range of scenarios. The strengths of the method play out in particular on demanding datasets such as high-noise transition dynamics or high-dimensional outputs. A drawback of our algorithm is its reliance on the Gaussian assumption. A potential future research direction is the combination of our method with Gaussian mixture filtering algorithms [41, 42].
2309.11707
Efficient Long-Short Temporal Attention Network for Unsupervised Video Object Segmentation
Unsupervised Video Object Segmentation (VOS) aims at identifying the contours of primary foreground objects in videos without any prior knowledge. However, previous methods do not fully use spatial-temporal context and fail to tackle this challenging task in real-time. This motivates us to develop an efficient Long-Short Temporal Attention network (termed LSTA) for unsupervised VOS task from a holistic view. Specifically, LSTA consists of two dominant modules, i.e., Long Temporal Memory and Short Temporal Attention. The former captures the long-term global pixel relations of the past frames and the current frame, which models constantly present objects by encoding appearance pattern. Meanwhile, the latter reveals the short-term local pixel relations of one nearby frame and the current frame, which models moving objects by encoding motion pattern. To speedup the inference, the efficient projection and the locality-based sliding window are adopted to achieve nearly linear time complexity for the two light modules, respectively. Extensive empirical studies on several benchmarks have demonstrated promising performances of the proposed method with high efficiency.
Ping Li, Yu Zhang, Li Yuan, Huaxin Xiao, Binbin Lin, Xianghua Xu
2023-09-21T01:09:46Z
http://arxiv.org/abs/2309.11707v1
# Efficient Long-Short Temporal Attention Network for Unsupervised Video Object Segmentation ###### Abstract Unsupervised Video Object Segmentation (VOS) aims at identifying the contours of primary foreground objects in videos without any prior knowledge. However, previous methods do not fully use spatial-temporal context and fail to tackle this challenging task in real-time. This motivates us to develop an efficient _L_ong-_S_hort _T_emporal _A_ttention network (termed **LSTA**) for unsupervised VOS task from a holistic view. Specifically, LSTA consists of two dominant modules, i.e., Long Temporal Memory and Short Temporal Attention. The former captures the long-term global pixel relations of the past frames and the current frame, which models constantly present objects by encoding appearance pattern. Meanwhile, the latter reveals the short-term local pixel relations of one nearby frame and the current frame, which models moving objects by encoding motion pattern. To speedup the inference, the efficient projection and the locality-based sliding window are adopted to achieve nearly linear time complexity for the two light modules, respectively. Extensive empirical studies on several benchmarks have demonstrated promising performances of the proposed method with high efficiency. keywords: Unsupervised video object segmentation, long temporal memory, short temporal attention, efficient projection + Footnote †: journal: ArXiv ## 1 Introduction Video Object Segmentation (VOS) task is to localize and segment primary objects in videos, i.e., yielding accurate contours of objects. As a fundamental video processing technique, VOS has found widespread applications, e.g., video editing [35], autonomous driving, and surveillance environment [53], which are highly demanding in real-time processing. Generally, VOS methods are divided into two categories, i.e., _semi-supervised_ VOS (_a.k.a._, one-shot VOS) [12] which utilizes given object mask of the first frame, and _unsupervised_ VOS (_a.k.a._, zero-shot VOS) [20] for which arbitrary prior knowledge is unavailable during inference. This work concentrates on the more challenging unsupervised VOS, which faces **two** considerably critical problems: 1) how to find primary objects in video frames; 2) how to speedup object segmentation inference. For the first problem, the common insight is to consider salient objects, moving objects, and constantly present objects across video frames. While salient objects attract the visual attention from human eyes, fast moving and drastic deformations may yield objects with small appearance size, leading to less saliency. To model moving objects, someone [9] adopt optical flow technique to capture motion cues, but it is still difficult to discriminate moving objects from dynamic background and usually fails to identify objects in static scenes. From a holistic view, a natural idea is to observe whether there exist constantly present objects in the past frames, and then search objects with similar appearance in the current frame. This idea has been proved effective [20; 40] by using dot product attention to encode pixel-wise dense correlations of past frames. However, when partial area of objects are occluded, it adds much difficulty in identifying similar objects due to its strong reliance on appearance. To address these limitations, we model _constantly present objects_ and _moving objects_ at the same time, by utilizing motion cues and temporal consistency of objects in past frames from both _full-frame_ (all pixels in a frame) and _partial-frame_ (one frame is separated into many small patches) perspectives. Encoding full-frame pixel correlations facilitates tackling object deformation by modeling appearance pattern, while encoding partial-frame pixel correlations benefits handling object occlusion by modeling pixel movements in the local region of frame. For the second problem, it still remains an open issue to be explored in unsupervised VOS without any object prior knowledge. Existing models cannot be deployed in real-time applications due to their low inference speed caused by using optical flow [55; 9] or 3D Convolutional Neural Networks (CNN) [22; 1], as illustrated in Fig. 1. Accordingly, we explore the way of accelerating inference from both full-frame and partial-frame perspectives to identify objects efficiently. As is well known, the time cost of directly encoding full-frame pixel correlation increases squarely with the number of pixels, which limits its applicability. Someone proposed channel-wise attention [17] to capture the global context of past frames for fast semi-supervised VOS, but it is unable to preserve per-pixel correlations, thus deteriorating performance. Therefore, inspired by the random projection on feature map for computing efficient attention relation with nearly linear complexity [11], we propose to adopt an efficient projection skill to reveal channel-wise correlation for unsupervised VOS by doing random projection on feature maps derived from CNNs. This projection can achieve the similarity distribution approximation of frames, such that the pixel-wise similarity between the past frames and the current frame can be well preserved in the embedding space. So it is considerably beneficial for discriminating constantly present objects. Meanwhile, since the number of channel \(c\) is far less than that of pixel \(n\) in a feature map, i.e., \(c\ll n\), the time cost of encoding inherent relations among past frames is reduced from square complexity to linear level, e.g., \(\mathcal{O}(n^{2}c)\rightarrow\mathcal{O}(nc^{2})\). On the other hand, the locality-based sliding window strategy is employed to partition one full frame to many overlapped patches with size of \(k\times k\) (\(k\ll n\)), i.e., partial frames. This helps to model the local patterns of objects, such as edges, lines, and textures. By this means, encoding partial-frame pixel correlation requires linear time complexity, i.e., \(\mathcal{O}(nck^{2})=\mathcal{O}(nc)=\mathcal{O}(n)\), much less than directly encoding full-frame correlation, i.e., \(\mathcal{O}(n^{2}c)\). Therefore, we propose an end-to-end real-time unsupervised VOS framework, named **L**ong-**S**hort **T**emporal **A**ttention network (**LSTA**), to strike a good balance between performance and speed. This framework mainly includes two fast modules, i.e., _Long Temporal Memory_ (LTM) and _Short Temporal Attention_ (STA). LTM enables encoding long-term full-frame pixel spatiotemporal dependency between the past frames and the current frame, which facilitates identifying constantly present objects. Simultaneously, STA enables capturing short-term partial-frame pixel spatiotemporal relations between one nearby frame and the current frame, which benefits finding moving objects. As a matter of fact, the two modules cooperatively work together to find primary objects by modeling both long-range and short-range spatiotemporal coherence of frames. Meanwhile, it paves the way for discriminating objects from complex background and thus alleviates the object deformation or occlusion problem. More importantly, we apply our proposed efficient projection to LTM and the locality-based sliding window to STA, respectively, for greatly reducing the time complexity. Thus, both LTM and STA can be implemented at linear time complexity, making our LSTA framework very Figure 1: Overall efficiency comparison of several SOTA unsupervised VOS methods without object prior on DAVIS2016 validation set. efficient. To examine its performance, we have conducted comprehensive experiments on several benchmark databases, i.e., DAVIS2016[26], DAVIS2017[27], YouTube-Objects[28], and FBMS[24]. Empirical studies demonstrate that our method exhibits promising segmentation performances at a fast speed, e.g., 42.8 fps on 480p resolution videos from DAVIS2016. Our main contributions are highlighted in the following: * We propose an end-to-end real-time unsupervised VOS framework, called Long-Short Temporal Attention network (LSTA), which enjoys satisfying segmentation accuracy with high inference efficiency. * The Long Temporal Memory (LTM) module and the Short Temporal Attention (STA) module are developed to encode both global and local spatiotemporal pixel-wise relations among frames. Hence, constantly present and moving objects can be readily found, and the object deformation or occlusion problem can be alleviated. * LTM module and STA module can both achieve the nearly linear time complexity, by respectively adopting the efficient projection and the locality-based sliding window strategy on feature maps. * Performance comparisons and extensive ablation studies have justified the realtime segmentation ability with high precision by our method on several benchmarks. The rest of this paper is organized as follows. Section 2 reviews closely related works and Section 3 introduces the newly developed LSTA framework. After that, we report both quantitative and qualitative experimental results to verify the efficacy of the proposed method in Section 4. Finally, we conclude this work in Section 5. ## 2 Related Work This section makes a brief summary of closely related VOS methods, including unsupervised, semi-supervised, and fast scenarios. Note that _unsupervised_ and _semi-supervised_ terms are indicated by whether using the first frame mask during inference. This is a bit different from the traditional machine learning paradigm. For a thorough survey on video segmentation using deep learning techniques, please refer to [56]. ### Unsupervised VOS Unsupervised VOS methods have no prior on the first frame mask for inference, making it fairly challenging. Usually, existing methods attempt to find primary objects by considering temporal motion [55] or object saliency [33]. Here, we review two primary kinds of unsupervised methods, i.e., attention-based, and optical flow-based, which adopt 2D convolutions. **Attention-based** methods [20][40][48] find objects using appearance feature derived from video frames. For example, COSNet (Co-Attention Siamese Networks) [20] captures global per-pixel correlation and scene context by using co-attention mechanism on visual features of different frames, which helps to find constantly present objects. But COSNet models spatiotemporal relation between only two nearby frames during inference, which easily causes error accumulation by iterative updates and fails to well capture long-range context of frames. To overcome this drawback, AGNN (Attentive Graph Neural Networks) [40] builds fully-connected graph, where a node is the frame feature and an edge stands for the relation of pair-wise features. However, AGNN largely relies on object appearance similarity, and performs poorly when partial objects are occluded. Both COSNet and AGNN utilize dot product attention that requires intensive computations, consequently preventing them from being widely deployed. While attention-based methods focus more on object appearance, partial background areas sharing similar appearance with primary objects will be treated as objects by mistake. To handle this shortcoming, AD-Net (Anchor Diffusion Network) [48] adopts instance pruning as a postprocessing to filter out some noisy objects via object detection. In addition, AGS (Attention-Guided object Segmentation) [41] computes visual attention using eye tracking data, and obtains the coarse object location through dynamic visual attention prediction. Nevertheless, AGS employs ConvLSTM (Convolutional Long Short-Term Memory) to model temporal relations, which fails to fully model long-range spatiotemporal context of frames. And a variant of ConvLSTM named RNN-Conv [53] aggregates the temporal and the spatial information, such that the model can discover important objects in video. **Optical flow-based** methods [57] capture motion cues from optical flow feature as the compensation of appearance feature. The early work Segflow [5] unifies CNN and optical flow prediction network to predict object mask and optical flow simultaneously, which obtains motion cues in an end-to-end manner; another work [42] produces a spatiotemporal edge map by combining static edge probability and optical flow gradient magnitude. But they fail to fully use object appearance features, leading to inferior performance. Thereafter, some works [55][9] concentrate on how to derive and then fuse both the motion feature and the appearance feature. For instance, MATNet (Motion-Attentive Transition Network) [55] employs dot product attention to fuse motion and appearance features; RTNet (Reciprocal Transformation Network) [29] mutually evolves the two modalities such that the intra-frame contrast, the motion cues, and temporal coherence of recurring objects are holistically considered; TransportNet [51] employs the Wasserstein distance compute the global optimal flows to transport the features in one modality to the other, and formulates the motion-appearance alignment as an instance of optimal structure matching. But they require heavy computations, and to reduce time cost, FSNet (Full-duplex Strategy Network) [9] makes feature fusion by channel-wise attention, but distills out effective squeezed cues from feature, readily overlooking appearance details. Except for Segflow, the other methods require additional optical flow features, which are not end-to-end and also time-consuming. Besides, optical flow feature mainly encodes the temporal relation between only two nearby frames, failing to model the long-range relation. This usually makes the model perform not well, when drastic changes happen to objects in a long video. ### Semi-supervised VOS Semi-supervised VOS methods [7; 49] aim to capture objects in video given the first frame mask, which is class-agnostic. Previous methods can be roughly separated into three groups, i.e., _online learning-based_, _attention-based_, _detection-based_. **Online learning-based** methods [31] employ the first frame and its mask to update model parameters during inference, which can adapt to the videos containing various objects in different categories. For example, Sun _et al._[35] utilize reinforcement learning to select optimal adaptation areas for each frame, and make the model take optimal actions to adjust the region of interest inferred from the previous frame for online model updating; Lu _et al._[19] perform the memory updating by storing and recalling target information from the external memory. However, the model updating is very slow, resulting in low segmentation speed. **Attention-based** methods [25] treat the first frame mask as the model input to provide object prior during inference. They encode pairwise pixel dependency among video frames by attention mechanism, which helps to capture objects existing for a long time in the past frames. For example, MUNet (Motion Uncertainty-aware Net) [34] designs a motion-aware spatial attention module to fuse the appearance features and the motion uncertainty-aware feature. The drawback is the high computational cost of computing attention scores, i.e., pairwise pixel similarity. **Detection-based** methods [46] always use object detection model to obtain object proposals in each frame, whose feature representations are propagated along the temporal dimension, and generate object masks in line with appearance similarity. The shortcoming is the quality of object proposals will heavily affect the mask quality, and they cannot be trained in an end-to-end way, leading to sub-optimal results. ### Fast VOS Most existing fast VOS methods [31][44][39][50] belong to semi-supervised paradigm, and they strive to efficiently extract discriminant features from video frames. For example, Robinson _et al._[31] propose FRTM (Fast and Robust Target Models) that only updates partial model parameters to speedup inference for online learning-based methods. For attention-based methods, Li _et al._[17] use channel-wise similarity to substitute pixel-wise similarity for capturing the global context among frames. Although this substitution can reduce the time complexity, the performance is still far from that of pixel-wise methods such as STM (Space-Time Memory Networks) [25]. Moreover, Swiftnet [39] uses sparse features, i.e., only computing the similarity of those more informative pixels, to reduce dense pixel computations in attention-based methods. Real-time unsupervised VOS task still remains less explored in existing works. The one relevant work is WCS-Net (Weighted Correlation Siamese Network) [52] that borrows eye gaze estimation model to provide coarse object location, which is fed into a light segmentation model to obtain object masks of frames. While this approach achieves relatively high inference speed, it does require additional model to yield object prior information as pre-processing step. The other one is Dynamic Atrous Spatial Pyramid Pooling (ASPP) [53], which adopts a dynamic selection mechanism in ASPP, and the dilated convolutional kernels adaptively select appropriate features by the channel attention mechanism. This still requires large computations with an additional RNN-Conv module. Luckily, our LSTA approach can achieve a good balance between segmentation accuracy and inference speed without any object prior, and can be trained in an end-to-end manner. In addition, referring VOS has recently received more attention from the research field, such as Liang _et al._[18] explore both local and global temporal context by an improved Transformer model to query the video with the language expression in an efficient manner. Also, the spatio-temporal context is important for weakly-supervised video grounding [15], which localizes the aligned visual tube corresponding to a language query. Very recently, Ji _et al._[10] explored the Segment Anything Model (SAM) model on a variety of image segmentation tasks, such as agriculture, remote sensing, and healthcare, which may shed some light on the future research of unsupervised VOS task. ## 3 Our LSTA Method To efficiently identify primary objects, we develop a real-time end-to-end unsupervised VOS approach, i.e., LSTA, by respecting the spatiotemporal coherence structure in frame data space. First of all, we briefly describe the problem formulation. Then, the main components including Encoder, Long Temporal Memory (LTM) block, Short Temporal Attention (STA) block, and Decoder, in the LSTA framework as illustrated by Fig. 2, will be elaborated. ### Problem Formulation Given a video sequence with \(T\) frames, i.e., \(\mathcal{V}=\{\mathbf{I}_{t}\in\mathbb{R}^{H\times W\times 3}|t=1,2,\ldots,T\}\), where \(\mathbf{I}_{t}\) denotes the \(t\)-th RGB frame with width \(W\), height \(H\), and three channels. There may exist one or more than one objects in each frame, but no prior knowledge about objects are available. Unsupervised VOS aims to predict the pixel-wise object mask without specifying object in the first frame. For a video sequence \(\mathcal{V}\) with one object, the ground-truth mask sequence is \(\mathcal{P}=\{\mathbf{P}_{t}\in\{0,1\}^{H\times W}|t=1,2,...,T\}\) and the predicted mask sequence is \(\hat{\mathcal{P}}=\{\hat{\mathbf{P}}_{t}\in\{0,1\}^{H\times W}|t=1,2,...,T\}\). The frame mask is a matrix with binary entries, where '0' means background pixel and '1' means primary object pixel. During training, the model uses RGB frames and their ground-truth masks as input, while only RGB frames are available during inference. Note that LSTA does not use all past frames but averagely divides them into \(N\) bins, from each of which one frame is randomly selected. For the current frame \(\mathbf{I}_{t}\), i.e., query frame, its past frame set is denoted as \(\mathcal{I}_{t}=\{\mathbf{I}_{t}^{(1)},\mathbf{I}_{t}^{(2)},\ldots,\mathbf{I}_ {t}^{(N)}\}\). Figure 2: The framework of LSTA model for unsupervised VOS. It is composed of Encoder, LTM, STA, and Decoder. Note that, for the feature map with \(c\) channels in LTM and STA, \(h\) and \(w\) denote height and width, respectively; for STA, \(b\) is the number of local patches with size \(k\) in one frame after passing the separation layer. As illustrated in Fig. 2, Encoder adopts DeepLab v3+[3] without the last convolution layer, pre-trained on MS COCO database, and it is used to derive object-aware appearance feature from RGB frame \(\mathbf{I}\in\mathbb{R}^{H\times W\times 3}\), where \(H\) denotes height and \(W\) denotes width. LTM models long-term full-frame pixel spatiotemporal relation between the past \(N\) frames (memory) and the current frame (query) at time step \(t\) using channel-wise attention with the efficient projection, which facilitates capturing constantly present objects. STA adopts the locality-based sliding window strategy in the separation layer and attention mechanism on the appearance features of the nearby frame \(\mathbf{I}_{t}^{(N)}\) and the current frame \(\mathbf{I}_{t}\), which helps to model the pattern of those moving objects. Decoder that consists of convolution layers, anisotropic convolution block [14], and bilinear up-sampling, is used for aggregating features derived from Encoder, LTM, and STA, resulting in object-aware feature representation for computing prediction mask \(\mathbf{\hat{P}}_{t}\in\mathbb{R}^{H\times W}\) of the \(t\)-th frame. The recovery layer is used to reshape the feature map with the size of \(b\times k^{2}\times c\) to \(h\times w\times c\). ### Encoder To encode the appearance property of video frames, we use the DeepLab v3+ [3] model pre-trained on MS COCO database as Encoder, and the last convolution layer is abandoned. As well known, DeepLab v3+ is a typical semantic segmentation model with ResNet101 as its backbone, and the pre-trained model can discriminate a large number of semantic classes of objects. All frames in video sequence \(\mathcal{V}\) are fed into Encoder to derive its appearance feature map. For the \(t\)-th frame and its past \(N\) frames, we have \[\{\mathbf{F}_{t}^{(1)},\cdots,\mathbf{F}_{t}^{(N)},\mathbf{F}_{t}\}=\{\Phi( \mathbf{I}_{t}^{(1)}),\cdots,\Phi(\mathbf{I}_{t}^{(N)}),\Phi(\mathbf{I}_{t})\} \in\mathbb{R}^{h\times w\times c_{0}}, \tag{1}\] where the function \(\Phi(\cdot)\) denotes Encoder which projects RGB frame into feature map \(\mathbf{F}_{t}\) with \(c_{0}\) channels (\(c_{0}\) is 256); the height is \(h=\frac{H}{4}\) and the width is \(w=\frac{W}{4}\). Since Encoder adopts pre-trained semantic segmentation model, it is able to capture the intrinsic appearance structure of common foreground objects in video frames. This is beneficial for finding those primary objects for segmentation. ### Long Temporal Memory (LTM) To identify those constantly present objects in video, LTM block, as illustrated in Fig. 3, employs appearance features of the past frames and the current frame to encode the full-frame pixel spatiotemporal dependency in terms of appearance similarity. This not only helps the model to readily find out those objects with similar appearance in the long-range frame context, such that constantly present objects in the current frame receive more attention, but also makes the model robust to object deformation. To encode the spatiotemporal relation between the past frames (memory) and the current frame (query), inspired by STM (Space-Time Memory) [25] for semi-supervised VOS, we use the individual convolution layer to generate the feature map (embedding), which essentially plays the role of key-value maps, so as to Figure 3: The Long Temporal Memory (LTM) block in LSTA. This block mainly consists of the convolution layer and orthogonal random projection, resulting in projected feature maps. Note that \(\phi(\cdot)\) and \(\psi(\cdot)\) are convolution layers acting as linear projection. reduce the model complexity. The derived feature maps reveal visual semantics for object matching and store detailed cues such as object contours for mask estimation. To determine when-and-where to retrieve related memory feature maps from, we compute similarities between the query feature and the memory features. Query feature is learned to store appearance information for decoding object mask, while memory features are learned to embed visual semantics for object matching. However, densely matching the feature maps of the query and the memory frames, requires expensive computational overheads, i.e., square time complexity _w.r.t._ the number of pixels. This motivates us to model channel-wise correlation rather than pixel-wise one, and the cost is greatly reduced by using smaller channel number. However, channel-wise attention may break down the pixel-wise similarity distribution (e.g., probability histogram), since all the pixels of pairwise memory feature maps are taken into account channel by channel rather than pixel by pixel. Hence, we propose to make an efficient projection on the pixel-wise feature embedding of the past frames and the current frame using the similar projection skill in [11]. As illustrated in Fig. 3, the input of LTM block is the appearance feature map set \(\mathcal{F}_{t}=\{\mathbf{F}_{t}^{(1)},\mathbf{F}_{t}^{(2)},\ldots,\mathbf{F}_ {t}^{(N)},\mathbf{F}_{t}\}\in\mathbb{R}^{h\times w\times c_{0}}\) of \(N\) past frames and the current frame \(\mathbf{I}_{t}\). The feature maps of past frames and that of current frame are respectively fed into two 2D convolution layers \(\phi(\cdot)\) and \(\psi(\cdot)\) with \(1\times 1\) kernel, followed by reshaping the height and the width dimensions, i.e., \(h\times w\to hw=n\). This results in memory features \(\{\tilde{\mathbf{F}}_{t}^{(s)}\in\mathbb{R}^{hw\times c}\}_{s=1}^{N}\) (\(c\) is 128) and query feature \(\mathbf{Q}_{t}\in\mathbb{R}^{hw\times c}\), where memory features are concatenated along row dimension into one matrix, called feature memory \(\mathbf{M}_{t}\), i.e., \[\mathbf{M}_{t}=[\tilde{\mathbf{F}}_{t}^{(1)},\ldots,\tilde{\mathbf{F}}_{t}^{( N)}]\in\mathbb{R}^{Nhw\times c}, \tag{2}\] where \([\cdot,\cdot]\) denotes the concatenation, \(N\times hw=Nhw\). To preserve the pixel-wise similarity distribution, LTM conducts random projection on feature memory \(\mathbf{M}_{t}\) and query feature \(\mathbf{Q}_{t}\) at pixel level, leading to projected pixel values, i.e., \[m^{\prime} =\frac{1}{\sqrt{\lfloor c/2\rfloor}}\exp(\mathbf{u}^{T}\mathbf{m} -\frac{||\mathbf{m}||_{2}^{2}}{2}), \tag{3}\] \[q^{\prime} =\frac{1}{\sqrt{\lfloor c/2\rfloor}}\exp(\mathbf{u}^{T}\mathbf{q} -\frac{||\mathbf{q}||_{2}^{2}}{2}), \tag{4}\] where the pixel feature vector \(\mathbf{m}\in\mathbb{R}^{c}\) is stacked in each row of feature memory matrix \(\mathbf{M}_{t}\), and the pixel feature vector \(\mathbf{q}\in\mathbb{R}^{c}\) is stacked in each row of query feature matrix \(\mathbf{Q}_{t}\); the vector \(\mathbf{u}\in\mathbb{R}^{c}\) is an orthogonal projection vector, which is randomly initialized for each projection; the constant \(\lfloor c/2\rfloor\) is a scaling factor and \(\lfloor\cdot\rfloor\) rounds down fractions. Thus, we can obtain the projected pixel feature vectors \(\mathbf{m}^{\prime}\in\mathbb{R}^{\lfloor c/2\rfloor}\) and \(\mathbf{q}^{\prime}\in\mathbb{R}^{\lfloor c/2\rfloor}\) for each pixel in the memory feature map and the query feature map, respectively, by doing orthogonal random projections for \(\lfloor c/2\rfloor\) times as in (3). All projected pixel feature vectors are collected together to be reshaped into the matrix with the same size of that of unprojected feature matrix, i.e., \(\mathbf{M}_{t}^{\prime}\in\mathbb{R}^{Nhw\times\lfloor c/2\rfloor}\) and \(\mathbf{Q}_{t}^{\prime}\in\mathbb{R}^{hw\times\lfloor c/2\rfloor}\). Therefore, the pixel-wise appearance similarity between the past frames and the query frame can be revealed by the product of projected pixel feature matrices \(\mathbf{M}_{t}^{\prime}\) and \(\mathbf{Q}_{t}^{\prime}\), i.e., \(\mathbf{A}_{t}=\mathbf{Q}_{t}^{\prime}\mathbf{M}_{t}^{\prime\top}\in\mathbb{R} ^{hw\times Nhw}\), where the large values taken by the entries of appearance similarity matrix \(\mathbf{A}_{t}\) indicate that the current frame shares higher similarity with the past frames in appearance. Meanwhile, its elements can be treated as attention weights of memory frames. Thus, we obtain the global feature representation by \(\mathbf{G}_{t}=\mathbf{A}_{t}\mathbf{M}_{t}\in\mathbb{R}^{hw\times c}\), which provides the guidance to retrieve the relevant memory frames with highly similar appearance, to attend on the query frame. Usually, the same object constantly present in video will share common appearance across frames, so learning global feature representation contributes to locating primary objects. However, the above process requires high time complexity, i.e., \(\mathcal{O}(n^{2}c)\), for obtaining \[\mathbf{G}_{t}=\mathbf{Q}_{t}^{\prime}\mathbf{M}_{t}^{\prime\top}\mathbf{M}_{ t}\in\mathbb{R}^{hw\times c}. \tag{5}\] To make the model more efficient, we first compute the channel-wise memory similarity matrix, i.e., \[\hat{\mathbf{A}}_{t}=\mathbf{M}_{t}^{\prime\top}\mathbf{M}_{t}\in\mathbb{R}^{ \lfloor c/2\rfloor\times c}, \tag{6}\] where \(\mathbf{M}_{t}\in\mathbb{R}^{Nhw\times c}\) and \(\mathbf{M}^{\prime}_{t}\in\mathbb{R}^{Nhw\times\lfloor c/2\rfloor}\), and models the channel-wise correlation of memory features. Then, we calculate the cheap global feature representation by substituting Eq. (6) into Eq. (5), resulting in \[\mathbf{G}_{t}=\mathbf{Q}^{\prime}_{t}\dot{\mathbf{A}}_{t}\in\mathbb{R}^{hw \times c}, \tag{7}\] which preserves the pixel-wise similarity distribution of the feature memory and the query feature. Especially, this formulation only requires \(\mathcal{O}(nc^{2})\), which can be further simplified to \(\mathcal{O}(n)\) when \(c\ll n\). In another word, the time complexity is linear with the number of feature map pixels, allowing the model to run very efficiently. Besides, to avoid unstable numerical solutions due to large values, we do normalization on \(\mathbf{G}_{t}\), i.e., \[\mathbf{G}^{\prime}_{t}=\mathbf{G}_{t}\oslash\mathbf{D}_{t}\in\mathbb{R}^{hw \times c}, \tag{8}\] where the symbol \(\oslash\) denotes element-wise division (Hadamard division), and the matrix \(\mathbf{D}_{t}=\mathbf{Q}^{\prime}_{t}\cdot(\mathbf{M}^{\prime\top}_{t} \mathbf{1})\in\mathbb{R}^{hw\times c}\) is used for normalization. Here, \(\mathbf{1}\) is an all-one matrix with the size of \(Nhw\times c\). In the end, we reshape the matrix \(\mathbf{G}^{\prime}_{t}\) to \(\mathbf{\hat{G}}_{t}\in\mathbb{R}^{h\times w\times c}\), i.e., the global feature map, since it reflects the long-range inherent semantic relations of memory frames and the query frame. ### Short Temporal Attention (STA) To identify moving objects, STA block as illustrated in Fig. 4, encodes the partial-frame pixel spatiotemporal dependency of the nearest past frame and the current frame, by discovering the object motion pattern in terms of local patches of the appearance feature map. This not only helps to capture moving objects in short-term frame context, but also makes the model robust to occlusions in video. Motivated by the attention mechanism [36] and the fact that only partial pixels in neighboring frames will change, we propose a patch-based technique called STA, for encoding temporal attention locally. Unlike the vanilla attention modeling global relations of all pixel pairs in full frame, STA models local relations of limited pixel pairs in partial frame sequentially. This is achieved by adopting the locality-based sliding window strategy, which separates one frame into a number of much smaller regions called patches. Then, STA models the local spatiotemporal relation of pixel patches between the current frame and the nearest past frame. As illustrated in Fig. 4, the inputs of STA are the appearance feature maps of the current frame \(\mathbf{I}_{t}\) and the nearest past frame \(\mathbf{I}^{(N)}_{t}\), i.e., \(\mathbf{F}_{t}\) and \(\mathbf{F}^{(N)}_{t}\). At first, the channel dimension of \(\mathbf{F}^{(N)}_{t}\) is reduced to \(c\) by a \(1\times 1\) convolution \(\theta(\cdot)\), leading to the neighbor feature map \(\mathbf{H}_{t}=\theta(\mathbf{F}^{(N)}_{t})\in\mathbb{R}^{hw\times c}\). For \(\mathbf{F}_{t}\), we directly utilize its feature matrix \(\mathbf{Q}_{t}\in\mathbb{R}^{hw\times c}\) from the convolution layer \(\psi(\cdot)\) in LTM, and reshape it to query feature map \(\mathbf{\hat{Q}}_{t}\in\mathbb{R}^{h\times w\times c}\). STA adopts the locality-based sliding window strategy, which makes the model cheap to learn. Assume the patch size is \(k\) and each feature map is separated into \(b\) patches, as indicated by Fig. 5, we have \(n=hw=h\times w=k\times k\times b\) pixels, and the time complexity is \(\mathcal{O}(nk^{2}c)=\mathcal{O}(nc)=\mathcal{O}(n)\) (\(k^{2}\ll n\), \(c\ll n\)). Here we neglect the stride factor, as it does not change time complexity compared to the number of Figure 4: The Short Temporal Attention (STA) block in LSTA. This block mainly consists of the 2D convolution layer with \(1\times 1\) kernel, separation and recovery operations. pixels. STA models the spatial correlation of each patch with \(k\times k\times c\) pixels in a feature map, and the stride \(1\leq d<k\) affects the number of patches, i.e., \(b=(h-k+1)/d\times(w-k+1)/d\) with zero padding. Here, \(k=8\) and \(d=4\). As a result, we can obtain query patch feature tensor \(\hat{\mathbf{Q}}_{t}^{\prime}\in\mathbb{R}^{b\times k^{2}\times c}\) and neighbor patch feature tensor \(\mathbf{H}_{t}^{\prime}\in\mathbb{R}^{b\times k^{2}\times c}\), which are composed of \(b\) patch matrices, i.e., \(\{\mathbf{X}_{1},\mathbf{X}_{2},\dots,\mathbf{X}_{b}\}\in\mathbb{R}^{k^{2} \times c}\) and \(\{\mathbf{Y}_{1},\mathbf{Y}_{2},\dots,\mathbf{Y}_{b}\}\in\mathbb{R}^{k^{2} \times c}\), acting as tensor slices. To discover the pixel moving pattern in the local region of frame, STA computes the semantic similarity of query-neighbor patch pair, i.e., \((\mathbf{x}_{i},\mathbf{y}_{j})\), where \(\mathbf{x}_{i}\in\mathbb{R}^{c}\) is stacked in the \(i\)-th row of \(\mathbf{X}\) and \(\mathbf{y}_{j}\in\mathbb{R}^{c}\) is stacked in the \(j\)-th row of \(\mathbf{Y}\). Then, we obtain similarity value of each pixel pair by \[\mathbf{w}_{ij}=\frac{\exp(\mathbf{x}_{i}^{\top}\mathbf{y}_{j}/ \sqrt{c})}{\sum_{j}^{k^{2}}\exp(\mathbf{x}_{i}^{\top}\mathbf{y}_{j}/\sqrt{c})}. \tag{9}\] In this way, we can compute the similarity patch by patch, and thus get the semantic similarity tensor \(\mathbf{S}\in\mathbb{R}^{b\times k^{2}\times k^{2}}\), consisting of \(b\) matrices \(\{\mathbf{W}_{1},\mathbf{W}_{2},\dots,\mathbf{W}_{b}\}\in\mathbb{R}^{k^{2} \times k^{2}}\). They help to capture the local semantic coherence of feature pairs. For those moving objects, their semantic similarity can be well encoded in the short-term spatiotemporal context, by revealing the hidden pattern. Actually, the similarity matrices play an important role in retrieving dynamics of moving objects from neighbor frame at patch level, i.e., \(\{\mathbf{W}_{1}\mathbf{Y}_{1},\dots,\mathbf{W}_{b}\mathbf{Y}_{b}\}\in \mathbb{R}^{k^{2}\times c}\), which further act as slices of the tensor \(\hat{\mathbf{H}}_{t}\in\mathbb{R}^{b\times k^{2}\times c}\), i.e., the local feature representation. To preserve the spatial pixel correlations, we reshape \(\hat{\mathbf{H}}_{t}\) to the feature map \(\hat{\mathbf{H}}_{t}^{\prime}\in\mathbb{R}^{h\times w\times c}\) via the recovery layer, which is essentially the inverse process of patch separation. For each slice \(k^{2}\times c\), the \(k^{2}\) entries in every column are reshaped to a local patch with the size of \(k\times k\), in which way the entries in one column of all \(b\) slices are reshaped to a matrix with the size of \(h\times w\). Then, adding the channel dimension \(c\) leads to the recovered feature map \(h\times w\times c\). Due to the stride of sliding window, there is redundant samplings on feature map. Hence, we simply sum the elements of those local patch features which are projected onto the same pixel, resulting in the feature map with the same size of the original one. However, those large sum values might possibly lead to numerical instability of the data distribution. To deal with this issue, we impose the layer normalization on \(\hat{\mathbf{H}}_{t}^{\prime}\), i.e., normalization across all channels of feature map. As a result, we obtain the local feature map \(\mathbf{L}_{t}\in\mathbb{R}^{h\times w\times c}\), which encodes the short-range spatiotemporal pixel relations of the nearest neighbor frame and the query frame in terms of small patches. ### Decoder Decoder is composed of convolution layer, anisotropic convolution (AIC) [14] block (consisting of several 2D convolution layers), and up-sampling operation. Its goal is to consider both the long-range and the short-range spatiotemporal pixel correlations of the past frames and the current frame, by make a fusion on three data streams, i.e., global feature map \(\hat{\mathbf{G}}_{t}\), local feature map \(\mathbf{L}_{t}\), and query feature map \(\hat{\mathbf{Q}}_{t}\). Particularly, we first concatenate the above three feature maps along the channel dimension into a unified feature map with size of \(h\times w\times 3c\), whose channel dimension is reduced to \(c\) via one \(3\times 3\) 2D convolution layer. Then, the unified feature map is fed into the AIC module that helps to discriminate objects from the spatial context. This is followed by passing the other \(3\times 3\) 2D convolution layer for reducing the channel dimension into 2. Hereafter, the resolution of feature map is enlarged to that of original RGB frame by Figure 5: The illustration of separation and recovery layers in STA block. bilinear up-sampling. Finally, it yields the object-like pixel probability using softmax function, namely \[\tilde{\mathbf{P}}_{t}=\Theta(\hat{\mathbf{G}}_{t},\mathbf{L}_{t},\hat{\mathbf{Q} }_{t})\in\mathbb{R}^{H\times W\times 2}, \tag{10}\] where \(\Theta(\cdot)\) denotes Decoder. The elements of the first (index 0) and the second (index 1) channel denote the probabilities of pixels belonging to background and primary object, respectively. This probability tensor can be easily transformed to a binary matrix by taking the channel index of the higher probability for each pixel, i.e., \(\hat{\mathbf{P}}_{t}=\{0,1\}\in\mathbb{R}^{H\times W}\). ### Loss Function To optimize the LSTA model, we adopt the Cross-Entropy (CE) loss with the online hard example mining strategy [32]. It selects those hard pixels (usually those with larger loss values) to calculate the CE loss, which is beneficial for promoting the robustness of discriminating those ambiguous pixel regions. The model loss is computed by \[\mathcal{L}_{1}(\tilde{\mathbf{P}}_{t},\mathbf{P}_{t})=\overline{\text{Max}}_ {r}(\left\{-p_{tz}^{0}\log\tilde{p}_{tz}^{0}-p_{tz}^{1}\log\tilde{p}_{tz}^{1} \right\}_{z=1}^{HW}), \tag{11}\] where \(\overline{\text{Max}}_{r}(\cdot)\) (\(r=\left\lfloor\frac{HW}{16}\right\rfloor\) and \(HW=H\times W\)) means taking the average of those \(r\) largest loss values of pixels, \(\{\tilde{p}_{tz}^{0},\tilde{p}_{tz}^{1}\}\) are the predicted probability values of the \(z\)-th pixel in current frame \(\mathbf{I}_{t}\), and \(\{p_{tz}^{0},p_{tz}^{1}\}\) are the corresponding ground-truth object mask values. Inspired by knowledge distillation, we utilize the semi-supervised VOS model, i.e., STM [25], as teacher network, which provides guidance for our unsupervised VOS model, i.e., LSTA, as student network. Note that, the STM model trained on DAVIS17 [27] and YouTube-VOS [45] does not participate in training our LSTA model but only helps to yield the initial pixel probability of frames, i.e., soft labels, to guide the loss computation. And the inference process does not involve teacher network as well. Assume that the initial pixel probability from teacher network is \(\tilde{\mathbf{P}}\in\mathbb{R}^{H\times W\times 2}\), we compute the following loss: \[\mathcal{L}_{2}(\tilde{\mathbf{P}}_{t},\tilde{\mathbf{P}}_{t})=\frac{1}{HW} \sum_{z=1}^{HW}(-\bar{p}_{tz}^{0}\log\tilde{p}_{tz}^{0}-\bar{p}_{tz}^{1}\log \tilde{p}_{tz}^{1}), \tag{12}\] where \(\{\bar{p}_{tz}^{0},\bar{p}_{tz}^{1}\}\) are the initial probability values of the \(z\)-th pixel in the current frame \(\mathbf{I}_{t}\). Therefore, the total loss of our LSTA model is \[\mathcal{L}(\tilde{\mathbf{P}}_{t},\mathbf{P}_{t},\tilde{\mathbf{P}}_{t})= \alpha\mathcal{L}_{1}(\tilde{\mathbf{P}}_{t},\mathbf{P}_{t})+(1-\alpha) \mathcal{L}_{2}(\tilde{\mathbf{P}}_{t},\tilde{\mathbf{P}}_{t}), \tag{13}\] where the tradeoff parameter \(\alpha>0\) is used to balance the contribution of the two loss terms to the objective function. Here, we use an empirical value 0.5. To summarize our approach, we briefly show the training process in Algorithm 1 and the inference process in Algorithm 2. Our method is relevant to the widely used LSTM and Transformer, which captures the short-term and the long-term temporal relations, respectively. However, their working mechanisms are different. In particular, LSTM adopts the gating skill to control the history information in a sequence, while our STA block uses the attention mechanism to discover the object motion pattern in terms of local patches of the appearance feature map, which encodes the partial-frame pixel spatiotemporal dependency with the locality-based sliding window strategy. Moreover, Transformer adopts the self-attention to model the global dependency and needs the square time complexity, while our LTM block encodes the full-frame pixel spatiotemporal dependency in terms of appearance similarity at the cost of almost linear complexity by computing the channel-wise memory similarity matrix. ``` 1:\(\mathbf{I}_{t}\), \(\mathbf{P}_{t} ``` Input: Training videos; ground truth mask \(\mathcal{P}\); model parameters \(\Omega\); number of past frames \(N\); \(\alpha=0.5\); \(Iter_{max}=7.5e4\). 1 Randomly select \(T\) frames as query for each video. 2whilenot convergeddo 3 Randomly select one query frame \(\mathbf{I}\) from one video in sequence. 4 Select \(N\) past frames \(\mathbf{I}^{(1)},\mathbf{I}^{(2)},...,\mathbf{I}^{(N)}\). 5 Feed query frame and past frames into Encoder to obtain appearance feature map \(\mathbf{F}\) and \(\{\mathbf{F}^{(1)},\mathbf{F}^{(2)},\ldots,\mathbf{F}^{(N)}\}\). 6 Input appearance feature maps \(\{\mathbf{F},\mathbf{F}^{(1)},\mathbf{F}^{(2)},...,\mathbf{F}^{(N)}\}\) into LTM to obtain global feature map \(\hat{\mathbf{G}}\). 7 Feed appearance feature maps \(\mathbf{F}\) and \(\mathbf{F}^{(N)}\) into STA to obtain local feature map \(\mathbf{L}\) and query feature map \(\hat{\mathbf{Q}}\). 8 Obtain the object-like pixel probability \(\tilde{\mathbf{P}}\) using \(\hat{\mathbf{G}}\), \(\mathbf{L}\) and \(\hat{\mathbf{Q}}_{t}\) as in Eq. (10). 9 Calculate the first loss \(\mathcal{L}_{1}\) as in Eq. (11) and the second loss \(\mathcal{L}_{2}\) as in Eq. (12). 10 Calculate the total loss function \(\mathcal{L}\) in Eq. (13). 11 Update model parameters \(\Omega\) using SGD. 12 end while Output: Trained model. ``` **Algorithm 1**Training Process of LSTA model. ``` Input: Video frames; trained model. 1 Obtain appearance feature map for every video frame by Encoder. 2 Feed appearance feature maps of query frame and past frames to LTM to obtain global feature map. 3 Feed appearance feature maps of query frame and nearest neighbor frame to STA to obtain local feature map and query feature map. 4 Input global feature map, local feature map, and query feature map to Decoder, which yields object mask. Output: Object mask set. ``` **Algorithm 2**Inference Process of LSTA model. ### Data Sets In total, there are four publicly available VOS data sets used in the experiments. Details are shown below. **DAVIS1** short for Densely Annotated Video Segmentation, provides two kinds of frame resolution, i.e., 480p and 1080p, with pixel-level frame mask. It has two versions, i.e., DAVIS2016[26] and DAVIS2017[27], involving various scenes, such as animal, sports, and traffic vehicles. The former has 50 video sequences, which are divided into 30 training videos and 20 validation videos for inference; each video contains only one object, and there are 3,455 frames with ground-truth masks. The latter DAVIS2017 is an expansion of the former, and the number of videos increases to 150, among which there are 90 videos (60 for training and 30 for validation) with 10,459 frames with ground-truth masks; each video may contain more than one objects, adding difficulty to the task, and there are totally 376 objects. Footnote 1: [https://davischallenge.org/index.html](https://davischallenge.org/index.html) **YouTube-VOS2** collects video clips from YouTube web site, including various classes, such as animal, transportation, accessory, and human event. Each clip usually contains multiple objects, with a duration of 3s to 6s. It has three subsets, and we only use its training set, including 3,471 videos with dense (6 fps) object annotations, 65 categories, and 5,945 unique object instances. Footnote 2: [https://youtube-vos.org/dataset/vos/](https://youtube-vos.org/dataset/vos/) **YouTube-Objects3**[28] is composed of 126 videos collected from YouTube by querying for the names of 10 object classes. The duration of each video varies between 30s and 180s, and each video contains one object. Following [55][40], we use all videos for inference. **FBMS4**[24] short for Freiburg-Berkeley Motion Segmentation, contains 59 video sequences, which are separated to 29 training videos and 30 validation videos, each of which has one object. There are 720 frames with pixel-level mask annotations, which are made with an interval of 20 frames. Following [55][20], we only use validation set for inference. Footnote 4: [https://fmb.informatik.uni-freiburg.de/resources/datasets/moseg.en.html](https://fmb.informatik.uni-freiburg.de/resources/datasets/moseg.en.html) ### Evaluation Metrics We evaluate our LSTA model on DAVIS, YouTube-Objects, and FBMS benchmarks, and the model is learned using the training sets of DAVIS2017 and YouTube-VOS. Following the previous works [55][20][48], we use region similarity \(\mathcal{J}\), contour accuracy \(\mathcal{F}\), and \(\mathcal{J}\&\mathcal{F}\) as the evaluation criteria. For DAVIS, we used the official benchmark code [26]. For YouTube-Objects and FBMS, we use \(\mathcal{J}\) Mean as the metric. Region similarity \(\mathcal{J}\) is the Intersection over Union (IoU, namely Jaccard coefficient) of predicted mask \(\hat{\mathbf{P}}\) and ground-truth mask \(\mathbf{P}\), which reflects the spatial mask accuracy and is a frame size-agnostic metric. It is computed by \(\mathcal{J}=\frac{|\hat{\mathbf{P}}_{c}\cap\mathbf{P}_{t}|}{|\hat{\mathbf{P}} _{t}\cup\mathbf{P}_{t}|}\). Contour accuracy \(\mathcal{F}\) estimates whether the contour of predicted mask \(\hat{\mathbf{P}}\) is similar with that of ground-truth mask \(\mathbf{P}\). From a contour perspective, one can interpret \(\hat{\mathbf{P}}\) and \(\mathbf{P}\) as a set of closed contours \(\mathcal{C}(\hat{\mathbf{P}})\) and \(\mathcal{C}(\mathbf{P})\) delimiting the spatial extent of the mask. So one can compute the contour-based precision \(P_{c}\) and recall \(R_{c}\) via a bipartite graph matching. The \(\mathcal{F}\) measure is a harmonic value, i.e., \(\mathcal{F}=\frac{2P_{c}R_{c}}{P_{c}+R_{c}}\). \(\mathcal{\bar{J}}\) denotes \(\mathcal{J}\) Mean and \(\mathcal{\bar{F}}\) denotes \(\mathcal{F}\) Mean, each of which is the average result over all test videos. Meanwhile, we use the mean value of region similarity and contour accuracy as the overall evaluation metric, i.e., \(\mathcal{\bar{J}}\&\mathcal{\bar{F}}\) (\(\mathcal{J}\&\mathcal{F}\) Mean) over all videos. In addition, we use Frame Per Second (FPS) as the metric to evaluate the inference speed. ### Experimental Setup **Training Phase**. The Encoder of LSTA is initialized by the DeepLab v3+ [3] model pre-trained on MS COCO, while the other modules are randomly initialized using Xavier. In each iteration, we randomly sample a single frame from each of 4 videos (batch size is 4) as query frame, and its all previous frames are grouped into \(N=5\) bins, from each of which we randomly select one frame, resulting in \(N\) past frames with temporal relations. When going through all available training videos once, it finishes one epoch. For each frame, it is randomly cropped to \(465\times 465\times 3\), while random horizontal flipping and scaling are applied. The maximum iteration number is \(7.5e4\), and we adopt the SGD (Stochastic Gradient Descent) optimizer with a momentum of 0.9, a weight decay of \(1.5e\)-\(4\), and an initial learning rate of \(6e\)-\(3\). Note that our model is trained on DAVIS2017[27] and YouTube-VOS[45] with \(5e4\) iterations, and is fine-tuned on DAVIS2017[27] with \(2.5e4\) iterations to further boost the generalization ability. **Inference Phase**. For unseen videos, according to Algorithm 2, LSTA sequentially takes the current frame as query frame and previous \(N\) frames as past frames without any object prior, and outputs the corresponding object masks. Note that there will be insufficient past frames for the foremost \(N\) query frames. For such cases, we use the succeeding frames to compensate for the lacking past frames. In addition, we follow AD-Net [48] to use instance pruning to filter out some cluttered background by employing instance bounding boxes. ### Quantitative Results We show the quantitative comparison results on DAVIS2016, DAVIS2017, YouTube-Objects, and FBMS, with rigorous analysis in the following. **DAVIS2016**. The results of our LSTA model and a number of state-of-the-art alternatives are recorded in Table 1. Among them, the above seven methods are semi-supervised models, including FEELVOS (Fast End-to-End Embedding Learning) [38], AGUNet (Annotation-Guided U-Net)[49], MVOS-OL (Meta VOS Online Learning)[44], and DDEAL (Directional Deep Embedding and Appearance Learning) [50]. The remaining ones are all unsupervised models, including PDB (Pyramid Dilated Bidirectional ConvLSTM)[33], AGS (Attention-Guided object Segmentation)[41], AGNN (Attentive Graph Neural Networks)[40], COSNet (Co-Attention Siamese Networks)[20], STEm-Seg (Spatio-Temporal Embeddings for instance Segmentation) [1], AD-Net (Anchor Diffusion Network)[48], MATNet (Motion-Attentive Transition Network)[55], FSNet (Full-duplex Strategy Network)[9], FEM-Net (Flow Edge-based Motion-attentive Network)[57], 3DC (3D Convolutions)[22], RTNet (Reciprocal Transformation Network)[30], IMCNet (Implicit Motion-Compensated Network) [43], OFS (Optical Flow Segmentation), [23], DASPP (Dynamic Astrous Spatial Pyramid Pooling) [53], IMP (Iterative Mask Propagation)[13], and TMO (Treating Motion as Option)[6]. Among them, many methods such as OFS, FEM-Net, RT-Net and TMO, adopt the two-stream VOS framework that employs op \begin{table} \begin{tabular}{l r r r r r r r} \hline \hline Method & Venue & \multicolumn{1}{c}{att} & flow & pp & \(\overline{\mathcal{J}}\) & \(\overline{\mathcal{F}}\) & \(\overline{\mathcal{J}\&\mathcal{F}}\) & FPS \\ \hline AGUNet[49] & PR’21 & ✓ & & & 80.7 & 81.0 & 80.9 & 11.1 \\ FEELVOS[38] & CVPR’19 & & & & 81.1 & 82.2 & 81.7 & 2.0 \\ MVOS-OL[44] & TPAMI’20 & & & 83.3 & 84.1 & 83.7 & 2.3 \\ DDEAL[50] & TNNLS’21 & & & 85.1 & 85.7 & 85.4 & 25.0 \\ \hline PDB[33] & ECCV’18 & & & crf & 77.2 & 74.5 & 75.9 & - \\ AGNN[40] & ICCV’19 & ✓ & & crf & 80.7 & 79.1 & 79.9 & 0.3 \\ COSNet[20] & CVPR’19 & ✓ & & crf & 80.5 & 79.5 & 80.0 & 1.2 \\ STEm-Seg[1] & ECCV’20 & & & 80.6 & 80.6 & 80.6 & 0.7 \\ AD-Net[48] & ICCV’19 & ✓ & & ip & 81.7 & 80.5 & 81.1 & 0.3 \\ MATNet[55] & TIP’20 & ✓ & ✓ & crf & 82.4 & 80.7 & 81.6 & 1.3 \\ FSNet[9] & ICCV’21 & ✓ & ✓ & & 82.3 & 83.3 & 82.8 & 12.5 \\ FSNet[9] & ICCV’21 & ✓ & ✓ & crf & 83.4 & 83.1 & 83.3 & - \\ 3DC[22] & BMVC’20 & & & & 83.9 & 84.2 & 84.1 & 4.5 \\ DASPP[53] & PR’21 & & & & 63.4 & 60.2 & 61.8 & 29.4 \\ AGS[41] & TPAMI’21 & ✓ & & crf & 79.7 & 77.4 & 78.6 & - \\ RTNet[30] & CVPR’21 & ✓ & ✓ & & 85.6 & 84.7 & 85.2 & 4.3 \\ OFS[23] & TPAMI’23 & & ✓ & & 69.3 & 70.7 & 70.0 & - \\ FEM-Net[57] & TCSVT’22 & ✓ & ✓ & & 79.9 & 76.9 & 78.4 & 16.0 \\ IMCNet[43] & TCSVT’22 & ✓ & & & 82.7 & 81.1 & 81.9 & - \\ IMP[13] & AAAI’22 & ✓ & & & 84.5 & **86.7** & **85.6** & 1.79\({}^{\dagger}\) \\ TMO[6] & WACV’23 & & ✓ & & **85.6** & 86.6 & **86.1** & 24.8 \\ \hline LSTA (Ours) & & ✓ & & & 82.4 & 84.3 & 83.4 & **42.8** \\ LSTA\({}^{*}\)(Ours) & & ✓ & & ip & 82.7 & 84.8 & 83.8 & 36.2 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparisons on DAVIS2016. The methods in top group are semi-supervised and the rest are unsupervised. att: attention mechanism; flow: optical flow feature; pp: post-processing; crf: conditional random field skill; ip: instance pruning; ‘-’ means the speed is unavailable; \({}^{\dagger}\) denotes the fps on RTX 2080Ti GPU. tical flow as the motion modality to capture the temporal relations; IMCNet aligns motion information from nearby frames to the current frame, and adopts a co-attention mechanism to learn the global representation; IMP repeats easy frame selection with mask propagation but the inference speed is very low. These records show that our LSTA model enjoys more satisfying overall performance, and very competitive with SOTA unsupervised alternatives in terms of \(\mathcal{F}\) Mean, i.e., \(84.3\%\). Especially, LSTA ranks Top-1 in the inference speed, achieving 42.8 fps, almost 9.5 times of that of the best candidate 3DC. This demonstrates that LSTA can be deployed in highly demanding applications with its real-time segmentation ability. Besides, we have the following observations: * Our LSTA can beat against several semi-supervised VOS methods, such as FAVOS and RGMP, in terms of both region similarity and contour accuracy. Meanwhile, LSTA is much faster (\(1.7\times\)) than DDEAL who has the best performance in semi-supervised setting. * Most of the unsupervised VOS methods adopt post-processing techniques, such as conditional random field or instance pruning to refine the object mask. We also use instance pruning to slightly upgrade the performance by \(0.4\%\) of \(\mathcal{J}\&\mathcal{F}\) Mean. * Unlike existing unsupervised VOS methods that use costly optical flow features, our model attempts to discover the patterns of constantly present and moving objects, by encoding both global and local spatiotemporal correlations across frames with the long-short temporal attention mechanism. * While the latest work TMO achieves the best segmentation performance, it requires the expensive optical flow features and the inference speed is much lower than ours, i.e., 24.8fps versus 42.8fps. This demonstrates the LSTA strikes a good balance between the performance and speed. In addition, we provide the computational comparison and component computational analysis in Table 2 and Table 3, respectively. Note that we also provide the inference speed using 2080Ti card for reference. It can be seen from the tables that our LSTA achieves the best inference speed at 42.8fps using TITAN Xp, which is almost the twice faster than that of the second best one. Meanwhile, the developed LTM module and STA module are very lightweight in terms of FLOPs with the negligible parameter size. **DAVIS2017**. This dataset is much more difficult for segmentation, since there may exist multiple objects in a single video. To handle this case, some previous unsupervised VOS works, such as UnOVOST (Unsupervised Offline VOS) [21], MATNet (Motion-Attentive Transition Network) [55], AGS (Attention-Guided object Segmentation) [41], employ instance segmentation model Mask R-CNN [8] to obtain object proposals involving mask and boundary box. Unlike them, TAODA (Target-Aware Object Discovery and Association) [54] introduces an instance discrimination network to obtain object proposals in a bottom-up fashion. Moreover, DyStaB (Dynamic-Static Bootstrapping) [47] employs a motion segmentation module to perform temporally consistent region separation, and it requires the expensive optical flow features and CRF post-processing to get final results. Note that STEm-Seg [1] used spatiotemporal embeddings to find instances, which is based on Gaussian distribution, failing to handle complex appearance of objects. The \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Method & Venue & att & flow & pp & \(\overline{\mathcal{J}}\) & \(\overline{\mathcal{F}}\) & \(\overline{\mathcal{J}\&\mathcal{F}}\) \\ \hline RVOS[37] & CVPR’19 & & & & 36.8 & 45.7 & 41.2 \\ PDB[33] & ECCV’18 & & & crf & 53.2 & 57.0 & 55.1 \\ MATNet[55] & TIP’20 & ✓ & ✓ & crf & 56.7 & 60.4 & 58.6 \\ STEm-Seg[1] & ECCV’20 & & & & 61.5 & 67.8 & 64.7 \\ UnOVOST[21] & WACV’20 & & ✓ & & 66.4 & 69.3 & 67.9 \\ AGS[41] & TPAMI’21 & ✓ & & crf & 55.5 & 59.5 & 57.5 \\ DyStaB[47] & CVPR’21 & & ✓ & crf & 58.9 & - & 58.9 \\ TAODA[54] & CVPR’21 & & & & 63.7 & 66.2 & 65.0 \\ \hline LSTA (Ours) & & ✓ & & & **70.8** & **75.8** & **73.3** \\ LSTA\({}^{\dagger}\) (Ours) & & ✓ & & & 67.8 & 72.3 & 70.1 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance comparisons on DAVIS2017. earlier works, PDB (Pyramid Dilated Bidirectional ConvLSTM) [33] and RVOS (Recurrent network) [37] identify the instance by recurrent neural networks with ConvLSTM, which fails to encode long-range temporal relations. Similarly, we use HTC (Hybrid Task Cascade) [2] to obtain object proposals of the first frame, which are then processed to yield object mask, and adopt STCN (Space-Time Correspondence Network) [4] to obtain initial object masks of subsequent frames in a semi-supervised manner. After that, we use our LSTA model to obtain predicted masks that contain multiple objects. In addition, we follow UnOVOST using Mask R-CNN to extract object proposals and its merging strategy with the pixel probability obtained by our model, and the results are listed in the bottom row. The comparison results of the above methods are tabulated in Table 4, which has shown the significant improvements brought by our model with good generalization ability. For example, LSTA achieves \(70.8\%\) on \(\mathcal{J}\) Mean, \(75.8\%\) on \(\mathcal{F}\) Mean, and \(73.3\%\) on \(\mathcal{J}\&\mathcal{F}\) Mean, which have improvements of \(4.4\%\), \(6.5\%\), and \(5.4\%\) compared to the most competitive alternative, i.e., UnOVOST. We attribute this to the fact that our approach is able to encode both long-range and shot-range spatiotemporal pixel-wise relations of the current frame and the past frames, which helps to better capture constantly present objects and moving objects in video. **YouTube-Objects**. Following previous work[55], we give segmentation results in terms of \(\mathcal{J}\) Mean for each of the _ten_ semantic categories, as in Table 5. The compared methods include SFL[5], PDB[33], MATNet[55], AGS[41], COSNet[20], AGNN[40], RTNet[30], IMCNet [43], and TMO[6]. Among them, our model outperforms the rest ones in overall performance, achieving \(71.5\%\) on \(\mathcal{J}\) Mean at \(35.5\) fps in inference speed. Especially, the performance of our method is comparable to the latest TMO model which employs the more complicated framework, and our speed is twice faster. The encouraging records have verified the advantage of our approach in terms of both effectiveness and efficiency. **FBMS**. We compare our model with OBN (Object Bilateral Networks)[16], PDB[33], COSNet[20], MATNet[55], OFS [23], AGS[41], and DASPP [53] methods on FBMS, whose results are shown in Table 6. From these records, we see our LSTA method gets the highest region similarity, i.e., \(77.3\%\), with a margin \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Method & OBN[16] & PDB[33] & COSNet[20] & MATNet[55] & OFS[23] & AGS[41] & DASPP[53] & LSTA \\ Venue & ECCV’18 & ECCV’18 & CVPR’19 & TIP’20 & TPAMI’23 & TPAMI’21 & PR’21 & Ours \\ \hline att & & & ✓ & ✓ & & ✓ & ✓ & ✓ \\ flow & ✓ & & & ✓ & ✓ & & & \\ pp & crf & crf & crf & crf & & crf & & \\ \hline \(\overline{\mathcal{J}}\) & 73.9 & 74 & 75.6 & 76.1 & 57.8 & 76.0 & 62.3 & **77.3** \\ FPS & 4.7 & - & 0.9 & 0.4 & - & - & - & **41.4** \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison results on FBMS validation set. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline Method & SegFlow[5] & PDB[33] & MATNet[55] & COSNet[20] & AGNN[40] & AGSI[41] & RTNet[30] & IMCNet[43] & TMO[6] & LSTA \\ Venue & ICCV’17 & ECCV’18 & TIP’20 & CVPR’19 & ICCV’19 & TPAMI’21 & CVPR’21 & TCSVT’22 & WACV’23 & Ours \\ \hline att & & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & & ✓ \\ flow & ✓ & & ✓ & ✓ & & ✓ & & ✓ & & ✓ \\ pp & & crf & crf & crf & crf & crf & & & \\ \hline Airplane(6) & 65.6 & 78.0 & 72.9 & 81.1 & 81.1 & **87.7** & 84.1 & 81.1 & 85.7 & 85.1 \\ Bird(6) & 65.4 & 80.0 & 77.5 & 75.7 & 75.9 & 76.7 & 80.2 & **81.1** & 80.0 & 75.9 \\ Boat(15) & 59.9 & 58.9 & 66.9 & 71.3 & 70.7 & **72.2** & 70.1 & 70.3 & 70.1 & 66.5 \\ Car(7) & 64.0 & 76.5 & 79.0 & 77.6 & 78.1 & 78.6 & **79.5** & 77.1 & 78.0 & 78.4 \\ Cat(16) & 58.9 & 63.0 & **73.7** & 66.5 & 67.9 & 69.2 & 71.8 & 73.3 & 73.6 & 72.4 \\ Cow(20) & 51.2 & 64.1 & 67.4 & 69.8 & 69.7 & 64.6 & 70.1 & 66.8 & **70.3** & 67.1 \\ Dog(27) & 54.1 & 70.1 & 75.9 & 76.8 & 77.4 & 73.3 & 71.3 & 74.8 & 76.8 & **77.9** \\ Horse(14) & 64.8 & 67.6 & 63.2 & 67.4 & 67.3 & 64.4 & 65.1 & 64.8 & 66.2 & **68.5** \\ Motorbike(10) & 52.6 & 58.4 & 62.6 & 67.7 & **68.3** & 62.1 & 64.6 & 58.7 & 58.6 & 65.5 \\ Train(5) & 34.0 & 35.3 & 51.0 & 46.8 & 47.8 & 48.2 & 53.3 & 56.8 & 47.0 & **57.5** \\ \hline \(\overline{\mathcal{J}}\) & 57.1 & 65.5 & 69.0 & 70.5 & 70.8 & 69.7 & 71.0 & 70.5 & **71.5** & **71.5** \\ FPS & - & - & - & 0.9 & 0.4 & 4.2 & - & - & 18.5 & **35.5** \\ \hline \hline \end{tabular} \end{table} Table 5: Performance comparisons on YouTube-Objects with 10 categories. The number of videos in each category is in parenthesis. of \(1.2\%\) compared to the second best, i.e., MATNet. Meanwhile, our method achieves a satisfying balance between segmentation performance and inference speed without optical flow features and post-processing, e.g., its inference speed is 41.4fps, which helps to being deployed in highly-demanding environment. ### Ablation Study This section makes extensive analysis on the contribution of individual components of LSTA, the number of past frames \(N\) and the strategy of selecting them, the patch size and stride in STA block, the influences of channel numbers \(c\) and whether sharing Conv2D layers, the tradeoff parameter \(\alpha\), and the performance on the videos with various visual characteristics on DAVIS2016. **Individual components**. We show the effects of different components in Table 7, which examines the Light Temporal Attention block and the Short Temporal Attention block as well as \(\mathcal{L}_{2}\) loss terms. For baseline, we use \(\mathcal{L}_{1}\) loss term. From the table, we observe that both LTM and STA promote the performance by \(3.8\%\) (row 2) and \(3.2\%\) (row 3), respectively, on \(\mathcal{J}\) Mean, compared to baseline. Besides, the coupling of LTM and STA brings about an upgrade (row 4) by some margin of \(0.6\%\) and \(3.2\%\) on \(\mathcal{J}\&\mathcal{F}\) Mean, respectively. This demonstrates the necessity of simultaneously considering both global and local spatiotemporal pixel relations of the current frame and the past frames. In addition, the knowledge distillation skill is beneficial for boosting the performance by \(1.0\%\) (bottom row) on \(\mathcal{J}\) Mean. Meanwhile, we show some visualization examples of using different components in Fig. 6. As shown in the figure, the error area becomes smaller or disappears when using both STA module and LTM module (Row 2 and 4); meanwhile, the segmentation quality is further improved by adding the knowledge distillation loss \(\mathcal{L}_{2}\) (Row 3 and 6). **Number of past frames in LTM**. We vary the number of past frames from 2 to 11 for LTM, and the results are shown in Table 8. It can be seen that when \(N\) is 5, the model achieves the best performance, which \begin{table} \begin{tabular}{l c c c c} \hline \hline Blocks & \(\mathcal{J}\) Mean & \(\mathcal{F}\) Mean & \(\overline{\mathcal{J}}\&\mathcal{F}\) & Gain \\ \hline Baseline & 76.3 & 79.6 & 77.9 & - \\ +LTM & 80.1 & 82.3 & 81.2 & +3.3 \\ +STA & 79.5 & 82.5 & 81.0 & +3.1 \\ +LTM+STA & 81.4 & 83.8 & 82.6 & +4.7 \\ +LTM+STA+\(\mathcal{L}_{2}\) & **82.4** & **84.2** & **83.3** & +5.4 \\ \hline \hline \end{tabular} \end{table} Table 7: Ablation study of individual components on DAVIS2016. Figure 6: Visualization results of the ablation study on individual components. Row 1/3: LTM, Row 2/4: LTM+STA, Row 3/6: LTM+STA+\(\mathcal{L}_{2}\). The dashed yellow circle highlights the difference. suggests that much more past frames can not provide additional spatiotemporal information due to frame redundancy. **Selecting past frames**. Table 9 gives the results of six ways of selecting the past frames. Selecting the nearest frame (row 2) performs better than using the first frame (row 1). Besides, using both the first frame and the previous one will slightly degrade performance in comparison of using only the previous one, which might be reason that the object mask changes a lot with time and the first frame may mislead mask prediction. When using more past frames, e.g., two frames in row 3 and five frames in the last three rows, the segmentation performance is improved, especially using previous \(N\) nearest frames (bottom row) in inference phase. **Patch size and stride in STA**. STA adopts the locality-based sliding window strategy to reduce the computational cost, which is largely governed by the patch size \(k\) and the stride \(d\). We vary them from 2 to 32 and 1 to 16, respectively, and the results are recorded in Table 10. From the table, STA block works the best when the batch size is 8 and the stride is 4, which are both modest values. Larger patches or smaller ones do not help the improvements of capturing local spatiotemporal pixel relations of the current frame and the previous frame. **Number of channels in Conv2D**. Table 11 shows the results of increasing the number of channels (\(c\)) in Conv2D from 16 to 256. The region similarity metric is improved by \(3.2\%\) using 64 channels, compared to that with 16 channels. More channels encode more accurate spatial structure of frame data, but the performance degrades when \(c\) is over 100. This might because some noise in channel dimension is mixed with the feature maps. **Sharing Conv2D layer**. Our model adopts Conv2D layers for encoding the past frames in LTM by \(\phi(\cdot)\), the previous frame in STA by \(\theta(\cdot)\), and the current frame in STA by \(\psi(\cdot)\). So we explore the influences of sharing them in various forms as in Table 12. When sharing either two of them or three all, the performance is worse than using Conv2D layers independently for them. It suggests that learning parameters independently \begin{table} \begin{tabular}{c c c c c} \hline \hline \(k\) & \(d\) & \(\mathcal{J}\) Mean & \(\mathcal{F}\) Mean & \(\mathcal{J}\&\mathcal{F}\) Mean \\ \hline 2 & 1 & 82.0 & **84.2** & 83.1 \\ 4 & 2 & 82.0 & **84.2** & 83.1 \\ 8 & 4 & **82.4** & **84.3** & **83.4** \\ 16 & 8 & 82.0 & 84.1 & 83.0 \\ 32 & 16 & 81.9 & 83.9 & 82.9 \\ \hline \hline \end{tabular} \end{table} Table 10: Patch size \(k\) and stride \(d\) in STA block. \begin{table} \begin{tabular}{c c c c} \hline \hline \(c\) & \(\overline{\mathcal{J}}\) & \(\overline{\mathcal{F}}\) & \(\overline{\mathcal{J}\&\mathcal{F}}\) \\ \hline 16 & 79.4 & 80.9 & 80.2 \\ 32 & 81.4 & 83.7 & 82.6 \\ 64 & **82.4** & **84.3** & **83.4** \\ 128 & 81.8 & 84.0 & 82.9 \\ 256 & 81.5 & 83.9 & 82.7 \\ \hline \hline \end{tabular} \begin{tabular}{c c c c} \hline \hline Share & \(\overline{\mathcal{J}}\) & \(\overline{\mathcal{F}}\) & \(\overline{\mathcal{J}\&\mathcal{F}}\) \\ \hline \(\psi=\phi\) & 81.5 & 83.9 & 82.7 \\ \(\psi=\theta\) & 81.6 & 84.2 & 82.9 \\ \(\phi=\theta\) & 81.7 & 84.1 & 82.9 \\ \(\psi=\phi=\theta\) & 81.6 & 84.1 & 82.9 \\ None & **82.4** & **84.3** & **83.4** \\ \hline \hline \end{tabular} \end{table} Table 11: Channel Number \(c\). for them can model spatial pixel correlations better. **Tradeoff parameter \(\alpha\)**. We vary the tradeoff parameter \(\alpha\) in the loss function from 0.1 to 0.9 at an interval of 0.2, and the results are shown in Table 13. The results show that the performance of our method tends to rise up before 0.5 and saturates after the best value 0.5. This indicates that our method performs best when the model loss \(\mathcal{L}_{1}\) and the knowledge distillation loss \(\mathcal{L}_{2}\) contribute equally to the objective function. **Various visual characteristics**. We show the LSTA performance on video data with 13 kinds of visual characteristics in Fig. 7. As depicted in this histogram, our model consistently performs better across a wide range of data characteristics. For example, LSTA is higher than the most competitive one by a large margin in several challenging scenarios, including background clutter, deformation, fast motion, motion blur, and occlusion. This provides solid evidence of the strong robustness of our model. ### Qualitative Results To give an intuitive view on the superiority of our model, we visualize segmentation results of randomly selected video frames from DAVIS2016 [26], YouTube-objects [28], and FBMS [24] in Fig. 8. As drawn in the figure, our model can give an accurate mask of the human-bicycle object regardless of the occluded tree (row 1), and successfully identify the local parts of objects, e.g., the feet of animals (cats in row 3 and dogs in row 5). This demonstrates that our model can well capture the moving object and the local pattern of object. In the meantime, Fig. 9 shows the masks of multiple objects in four video sequences randomly chosen from DAVIS2017, and the results validate that LSTA enjoys satisfying discriminative ability of distinct categories in the same scenario. Moreover, we exhibit the visualized feature maps generated during the intermediate procedures of inference on a video from DAVIS2016 in Fig. 10. The first row shows the appearance feature map after Encoder, the second row shows the enhanced appearance feature map after Conv2D, the third and the fourth rows show the global and the local feature maps by passing LTM and STA, respectively. As vividly illustrated in these images, global feature maps are good at encoding the object contours, and local feature maps indeed play a similar role of optical flow in discovering the pattern of object motion. \begin{table} \begin{tabular}{c c c c} \hline \hline \(\alpha\) & \(\overline{\mathcal{J}}\) & \(\overline{\mathcal{F}}\) & \(\overline{\mathcal{J}\&\mathcal{F}}\) \\ \hline 0.1 & 78.3 & 80.2 & 79.3 \\ 0.3 & 80.1 & 81.5 & 80.8 \\ 0.5 & **82.4** & **84.3** & **83.4** \\ 0.7 & 81.9 & 82.8 & 82.4 \\ 0.9 & 81.8 & 83.0 & 82.4 \\ \hline \hline \end{tabular} \end{table} Table 13: Tradeoff parameter \(\alpha\) in the loss. Figure 7: Performance on videos with various visual characteristics. In addition, we show some failure cases of our model on DAVIS2016 in Fig. 11. For primary objects, such as a dancing man in case (a) and a running car in case (b), there are noisy objects with similar appearance, e.g., the boy closest to the dancing man and another car in the right corner, which prevents our model from accurately segmenting the primary objects. This might be because our model is dependent of visual appearance, so one can resort to instance segmentation to alleviate the problem of similarity interference in appearance. ## 5 Conclusion We have developed an end-to-end real-time unsupervised video object segmentation approach, named LSTA. It includes two primary blocks, i.e., Long Temporal Memory and Short Temporal Attention, which encode both long-range and short-range spatiotemporal pixel relations of the current frame and the past frames, respectively. The former LTM captures those constantly present objects from a global view, while the latter STA models the pattern of moving objects from a local view. Moreover, we have explored the performance of our method on several benchmark datasets. Both quantitative records and qualitative visualization results indicate the superiority of the proposed approach, including more promising segmentation masks, real-time inference speed, and robustness to some deformations or occlusions. Figure 8: Segmentation results of LSTA (Row 1, 3, 5) and MATNet (row 2, 4, 6) on three randomly selected video from DAVIS2016 (row 1&2), YouTube-objects (row 3&4), and FBMS (row 5&6) data sets, respectively. Figure 9: Segmentation results of multiple objects by LSTA on DAVIS2017. There still exist some limitations in our LSTA method to be addressed in the future. 1) Due to the lacking of supervision frame, it is difficult to capture those small or tiny objects, and which can be solved by designing unsupervised VOS methods tailored for small objects. 2) It fails to handle the objects with occlusions, which often appear in real-world applications. Hence, a heuristic way is to adopt the video inpainting method to recover the occluded parts first. 3) Compared to the current vanilla method, it will be interesting to consider using additional knowledge, such as referring expressions and object detections, so as to further improve the performance in unsupervised setting. ## Acknowledgment This work was supported in part by Zhejiang Provincial Natural Science Foundation of China under Grants LR23F020002, LY22F020012, in part by "Pioneer" and "Leading Goose" R&D Program of Zhejiang, China under Grants 2023C01221, 2022C03132.
2309.03303
A Novel Approach for Invoice Management using Blockchain
Electronic invoicing is another area where blockchain technology is being used. Additionally, it has the power to alter how payments are made, invoices are issued, and transactions are validated. Using a blockchain-based invoicing system will enable smooth payments from a customer's digital wallet to a business's digital wallet. Transactions are simple to track and monitor, and the blockchain may be used to retrieve an exchange's full history. Sometimes shopkeepers create fake bills and submit them to the higher tax-paying authorities. To bring transparency to this billing system between customers, shopkeepers, and tax-paying authorities billing system using blockchain is to be implemented using the concept of Blockchain and make the billing system in our country work smoothly. Blockchain technology can revolutionize the invoicing and payment process by providing a secure, transparent and tamper-proof system. A blockchain-based billing system can facilitate smooth payments, allow for easy tracking and monitoring of transactions, and provide a tamper-proof history of all exchanges. The use of blockchain can prevent fraud and increase transparency among customers, shopkeepers, and tax-paying authorities. Furthermore, it can streamline the process by using digital wallets for both customers and businesses, reducing time and resources for traditional invoicing methods. Overall, blockchain technology can bring greater efficiency and trust to the billing system, benefiting all parties involved. It can prevent fraud, increase transparency and streamline the invoicing and payment process. This technology can create a more secure and efficient billing system ultimately benefiting all parties involved.
Nikhil Sontakke, Shivansh Rastogi, Sejal Utekar, Shriraj Sonawane
2023-09-06T18:26:40Z
http://arxiv.org/abs/2309.03303v1
# A Novel Approach for Invoice Management using Blockchain ###### Abstract Electronic invoicing is another area where blockchain technology is being used. Additionally, it has the power to alter how payments are made, invoices are issued, and transactions are validated. Using a blockchain-based invoicing system will enable smooth payments from a customer's digital wallet to a business's digital wallet. Transactions are simple to track and monitor, and the blockchain may be used to retrieve an exchange's full history. Sometimes shopkeepers create fake bills and submit them to the higher tax-paying authorities. To bring transparency to this billing system between customers, shopkeepers, and tax-paying authorities "billing system using blockchain" is to be implemented using the concept of Blockchain and make the billing system in our country work smoothly. Blockchain technology can revolutionize the invoicing and payment process by providing a secure, transparent and tamper-proof system. A blockchain-based billing system can facilitate smooth payments, allow for easy tracking and monitoring of transactions, and provide a tamper-proof history of all exchanges. The use of blockchain can prevent fraud and increase transparency among customers, shopkeepers, and tax-paying authorities. Furthermore, it can streamline the process by using digital wallets for both customers and businesses, reducing time and resources for traditional invoicing methods. Overall, blockchain technology can bring greater efficiency and trust to the billing system, benefiting all parties involved. It can prevent fraud, increase transparency and streamline the invoicing and payment process. This technology can create a more secure and efficient billing system ultimately benefiting all parties involved. Blockchain, Technology, Invoicing, Supply Chain, Transactions, Secure, Decentralized, Bill Management Systems. + Footnote †: preprint: ISSN No:-2456-2165 ## I Introduction Almost all facets of modern life are made easier by technology advancements as all nations embrace the 4.0 industrial revolution. New developments brought forth by the technology's quick development offer enormous potential for market and commercial expansion [9]. Blockchain technology is a distributed, decentralized digital ledger that securely and openly records transactions. The technology that serves as the foundation for the virtual currency Bitcoin was first unveiled in 2008 by a person or group of persons named as Satoshi Nakamoto. [3]. Since then, the idea of blockchain has developed into one of the most anticipated technologies of the twenty-first century, with potential uses in a variety of sectors including finance, supply chain management, and healthcare. Blockchain is an integrated innovation of already existing technologies, not a disruptive one. It combines smart contracts, distributed storage, consensus processes, and data encryption [6]. A chain of blocks, each containing a collection of transactions, is formed by the blockchain technology. Each block is linked to the one before it, creating a permanent record of every network transaction. The decentralized nature of the blockchain ensures that the ledger is tamper-proof, as any changes to a block would be immediately apparent to all parties on the network. Low cost was cited by 52% of reviewers as the primary advantage of electronic invoices (e-invoices). Each paper invoice exchange costs e7, whereas e0.3 is charged for exchanges in electronic format, a 25-fold cost reduction. Additionally, each individual can only process 6,000 paper bills annually, whereas an individual can issue up to 90,000 invoices in electronic form. E-invoices will be successfully merged by the automatic digital system, which uses the e-invoice as input data [7]. Traditional payment methods are sometimes opaque, relying on paper, and leaving little or no audit trail. Debtors can easily avoid paying their debts by hiding behind bureaucracy or claiming that their claims have been delayed or lost. So using blockchain information can be accessible at each step of the transaction and transparency can be provided.The number of parties involved in the product transportation process and the number of handoffs that occur many times en route add to the complexity of global trade. Building end-to-end shipping visibility becomes quite difficult. Therefore, for the purposes of creating invoices, handling disputes, and settling payments, shippers and carriers are engaged to collect as much information as possible [5]. One of blockchain technology's most promising applications is the usage of bill management systems, which can increase efficiency and trust in the system by offering a safe and transparent means to record and verify transactions. In traditional bill management systems, it is easy for fraudulent activities to take place, such as shopkeepers creating fake bills and submitting them to tax-paying authorities. Blockchain allows for the recording of all transactions on a distributed ledger that is available to all parties. By doing this, all transactions are recorded in a visible and unchangeable manner, virtually eliminating the possibility of fraud. Furthermore, blockchain technology can also help to streamline the bill management process. Digital wallets can be used for both customers and businesses, allowing for easy and efficient payments. This can help to reduce the time and resources needed for traditional bill management methods, such as printing and mailing paper bills.The use of blockchain in bill management systems can bring greater transparency and trust between customers, shopkeepers, and tax-paying authorities.Additionally, it can aid in lowering the expenses and administrative strain connected to conventional bill management systems. The technology can help to prevent fraud and increase transparency, while also streamlining the bill management process. Despite the many advantages of blockchain technology, there are also some challenges and limitations to its implementation. Scalability is a major issue because the network can currently only handle a certain amount of transactions. Another challenge is the lack of standardization and regulation, which can make it difficult for businesses to adopt the technology. Finally, the advancement of blockchain technology has the potential to fundamentally change the way in which bill payment systems function. By providing a secure, transparent and tamper-proof system, blockchain technology can bring greater efficiency and trust to the bill management process. But there are also difficulties and constraints with its application that must be resolved. In order to understand the potential of blockchain technology for bill management systems, this research study will look at both its advantages and disadvantages. The paper will also discuss current developments and real-world examples of blockchain-based bill management systems, and provide insights into the future of this technology in this field. ## II Literature Review From the standpoint of OSCM, Rosanna Cole, Mark Stevenson, and James Aitken looked into blockchain technology. This article aims to lead to more research into blockchain technology from a management and operations perspective. Research agendas for the future will be developed based on identification of potential application areas. Different techniques, including strengthening product safety and security, enhancing quality management, lowering unauthorized counterfeiting, and enhancing sustainability, are possible to apply blockchain to OSCM operation. Additionally, it can decrease the need for middleman and enhance supply chain interactions in a way that lowers costs. Inventory management and replenishment can also be improved [1]. Simanta Shekhar Sarmah provides background on Blockchain technology, its history, its architecture, its advantages, and its applications in a number of industries in this research-based paper where he discusses Blockchain technology, its history, its architecture, its workings, and its advantages and disadvantages. A major technological innovation in recent years has been blockchain technology. The blockchain has revolutionized the way businesses are conducted because it is a transparent system of money exchange. In the next five years, the blockchain market is predicted to be worth over 3 trillion dollars thanks to major investments from tech giants and corporations. The network consists of a digital ledger in a peer-to-peer network. It is gaining popularity because of its security and capability to solve digital identity issues.A simple introduction to blockchain technology is given in this application-based study by Arijit Chakrabarti and Ashesh Kumar Chaudhuri. Additionally, it explores how blockchain technology might greatly benefit customers and retailers by being applied to some business operations in the retail industry. The research highlights some of the challenges as well as the use of blockchain technology [3]. In a recently published application-based research article, Nam Ho Kim, Sun Moo Kang, and Choong Seon Hong proposed a mobile charger billing system for electric vehicles that makes use of Blockchain technology. Peer-to-peer online transactions are now more secure because of the application of this technology. Additionally, they examined the billing needs for mobile chargers and put up a lightweight solution to the problem of data size in the current Blockchain [4]. A blockchain-based e-invoice system for goods carriers is suggested in the paper "Blockchain Based e-Invoicing Platform for Global Trade" by Krishnasuri Narayanam, Seep Goel, et. al. It intends to increase the effectiveness and reduce the expenses of the e-invoicing process. The system promises to decrease disputes, expedite dispute settlement, and enable real-time auditing by using real-time shipment tracking data and pre-agreed service contract rates to generate invoices. The motivation for organizations to adopt such a system is to improve the overall efficiency and cost-effectiveness of global trade[5]. Blockchain technology may effectively address the problems of intermediaries' trust risk, reduce transaction costs, and boost synergy efficiency in a multi-agent context, according to Liu Xidong. The viability of using blockchain technology to create electronic invoices is discussed in this article, along with the general layout of the blockchain for electronic invoices. In the study, it is also recommended to build an alliance chain for electronic invoicing and employ intelligent contracts to put research ideas into practice for various alliance transactions [6].In this paper, Van-Cam NGUYEN, et. al suggest a method that uses Blockchain smart contracts to digitize invoices and calculate VAT automatically. The smart contract was created on the Remix IDE using the Solidity programming language and the Ethereum platform. According to empirical findings, the new model has cheap costs for digitizing invoices and figuring VAT. Our suggested strategy also lowers the danger of data loss attacks, which increases the credibility of the implementation of VAT payment (nonaffection from the third party) [7]. E-commerce system security, openness and trust, efficiency, and other specific challenges are now being addressed by the industry. These problems can be solved by implementing blockchain technology in the e-commerce industry. The potential uses of blockchain technology in the e-commerce industry were discussed in this paper. Examined in relation to blockchain applications and possibilities are several aspects of e-commerce, such as payment, security, supply chain, work automation utilizing smart contracts, and ethical standards for transparency in e-commerce transactions. [8] This study examines how a VAT system can use blockchain technology, especially for electronic invoices (e-Invoice). This study used a qualitative methodology to explore blockchain technology models that could be used in a VAT system. The study's findings indicate that taxpayer data that doesn't need to be private can only be stored via blockchain technology. One example of data that is deemed secure if distributed across nodes in the blockchain technology network is the Tax Invoice Serial Number (TISN) [9],Chang, Yi-Wei, et. al. created an online marketplace powered by blockchain. They processed the money and secured the deposit using the self-enforcement of smart contracts. Each transaction is recorded in the decentralized ledger and blockchain-verified. Thus, trustless transactions are made possible. Without the involvement of reliable third parties, the smart contract can carry out reliable transactions, and blockchain transactions are traceable and irreversible. The blockchain stores all processes, including the introduction of the goods, the purchase, the delivery, and the payment. When a transaction dispute arises, it can be logged and utilized as electronic evidence in court [10]. This study looks at how smart contracts and blockchain technology can be used to efficiently bill for government services. The report also evaluates a number of government agencies and services to choose the appropriate blockchain type. Implementing blockchain and smart contracts reduces the problem of duplicate billing and payments, but it also has the potential to revolutionize the process by increasing the transparency of service billing and payment, which improves audit opportunities [11].In this paper, Guerar, Meriem, et.al. propose a model based on a public blockchain that permits fully open and group-restricted invoice auctioning. Furthermore, their strategy offers a reputation system based on the prior deeds of entities as documented on the open blockchain, enabling insurance providers to modify the cost of the insurance contracts they offer [12]. Distributed Ledger Invoice, a blockchain-based invoice discounting system, is introduced in this study, and a novel assessment method is suggested for assessing the present blockchain solutions for the invoice discounting scenario. Additionally, they go through two key challenges relating to interoperability and accessibility of information. Interoperability is crucial for blockchain's acceptance in interbanking operations because it is still a developing technology and multiple blockchain solutions may be employed in these activities. They also recommend a decoupling layer based on the Attribute-Based Access Control language to standardize access control to reserved information across several blockchains [13]. In this article, the block chain, which forms the basis of Bitcoin, is examined. BlockChain is a very ISSN No:-2456-2165 appealing technology for resolving existing issues in the financial and non-financial industries because of its distributed ledger functionality and security.BlockChain-based business apps are quite popular, and as a result, many start-ups are developing them. The adoption will undoubtedly encounter the previously indicated severe headwinds. But even major financial organizations like Visa, Mastercard, Banks, and NASDAQ are making investments to investigate how to use current business models on BlockChain. In fact, several of them are looking for fresh company ideas in the blockchain industry. [14] For SMEs to manage their liquidity concerns, factoring--where the invoice is cashed to prevent late payments from customers is a critical financial tool. Unfortunately, the fact that this business model relies on relationships with others and that the people involved in this case suffer from knowledge asymmetry make it unsafe. "Double funding" is one of the issues. which occurs when a SME draws money from several sources. They have proposed a system called DecReg that is built on blockchain technology in order to lessen this disparity and improve the scalability of this crucial instrument. We give performance analysis together with the protocols created for this framework.[1] ## III System Design The System is designed in such a way that the taxpayers can track their taxes and the government authorities monitoring taxation nationwide can track the shopkeepers' real payments with the help of all the billing data produced on the blockchain. The Taxpayers and the authorities contribute directly to the blockchain. A single block consists of registration annual returns payments show cause notices and orders. This data is then verified and used by/for law enforcement, assessment, deemed and recovery, and prosecution. Fig 1: Flowchart of Proposed Model There are different views for the data stored in the blockchain: State Tax Authorities, Central Tax Authorities, and Other related authorities. Views are basically used for giving access control to users based on specific data items. ## IV Methodology The project is built on a Web Application platform with HTML, CSS, and JavaScript, the three main languages used to build websites. Our website is programmed in JavaScript, structured in HTML, and styled using CSS. Bills are created via the UI, and bill data such as owner name, bill id, and so on are supplied as smart contract characteristics. A smart contract, which is a self-executing contract, directly incorporates the terms of the buyer-seller agreement into its lines of code. Smart contracts enable the implementation of reliable transactions and agreements between dispersed, anonymous parties without the need for a centralized authority, a legal framework, or an external enforcement mechanism. We are placing the data on the Ethereum Blockchain to assure its security. A peer-to-peer network for securely executing and verifying application code, or "smart contracts," is created by Ethereum, a decentralized blockchain platform. Without the aid of a reliable central authority, parties can conduct business with one another via smart contracts. We deployed the contracts on test networks before deploying them on the main network. We used Solidity to construct the contract. We used Remix IDE to create and deploy smart contracts, which are used to create a chain of transaction records and execute business logic in the blockchain system. ## V Implementation In implementation, the website frontend is created using Web3, a JavaScript library that allows users to interact with Ethereum blockchain. The first step is to fetch the contract that has been deployed on the blockchain. Once the contract is fetched, an instance of it is created. After creating the instance, the relevant information about the invoice, such as receipt number, total amount, seller identification, and buyer identification, is filled in as parameters. Once the parameters are filled, the event that has been created is described, and its emit function is called. By storing the variables on the blockchain with this emit function, they become unchangeable and tamper-proof. By enabling open access to all data stored on the blockchain, the system becomes transparent and trustworthy. Additionally, the frontend can also include features such as user authentication, real-time updates, and notifications, to ensure smooth and efficient interactions with the blockchain contract. The frontend can also include a user-friendly interface that allows users to easily navigate and interact with the contract. Overall, the frontend serves as the bridge between the users and the blockchain contract, making it an important ISSN No:2456-2165 aspect of the overall implementation. The smart contract for the invoice generation system was created using Solidity, a programming language specifically designed for the Ethereum blockchain. RemixIDE, a web-based integrated development environment (IDE) that enables developers to build, test, and deploy smart contracts, was used to develop the contract. For the implementation, the Ethereum Ropsten TestNet and Ganache were used as the blockchains. Ropsten TestNet is a test network for Ethereum, which allows developers to test their contracts without using real Ether. Ganache, on the other hand, is a localhost simulation of Ethereum, which allows developers to test their contracts in a local environment. The website provides a great user interface for the shopkeeper or retailer and can provide with a lot of options like inventory management and order management apart from just invoice generation. This allows the shopkeeper or retailer to have complete control over their inventory and orders, which can help them keep track of the stock and avoid stockouts. The website also provides a lot of functionalities like real-time updates, notifications, and reports, which can help the shopkeeper or retailer to make better decisions. Overall, the smart contract and the website together provide a robust and secure invoice management system that can help businesses to automate their invoicing process, improve their efficiency, and reduce the cost. Because the data is immutable and tamper-proof thanks to the usage of blockchain technology, the system is more transparent and trustworthy. Additionally, the user-friendly interface of the website makes it easy for users to interact with the contract, making it accessible for businesses of all sizes. Pseudo Code for smart contract in Solidity for creating and paying the bill: 1. Initialize an empty list of bills 2. Initialize a bill counter to 0 3. Create a function called "create bill" a. Input: payee's address, bill amount, and memo b. Increment the bill counter c. Create a new bill with payee's address, bill amount, memo, and "unpaid" status d. Add the new bill to the list of bills e. Emit an event "bill created" with bill ID, payee's address, bill amount and memo Fig 2: Generated Invoice 4. Create a function called "pay bill" a. Input: bill ID b. fetch bills from list of bills, using bill ID c. Check if the bill is unpaid i. If it is unpaid, check if the msg.value is equal to the bill amount 1. If true, transfer the bill amount to the payee's address 2. Mark the bill as paid 3. Emit an event "bill paid" with bill ID We also deployed the contracts on Ganache which is a LocalHost simulation of Ethereum. ## VI Conclusion and Future SCOPE In this article, we covered blockchain technology and how it may be used as a billing system in the retail industry. By enhancing the transparency of products and overall information of bill generation the retail industry can be benefitted by blockchain technology. This system will minimize present tax evasion, and the government will be able to track it. In addition to that, more openness will be offered to customers so that they may learn whether the tax they pay for a product is truly paid to the government.
2309.16040
Handbook on Leveraging Lines for Two-View Relative Pose Estimation
We propose an approach for estimating the relative pose between calibrated image pairs by jointly exploiting points, lines, and their coincidences in a hybrid manner. We investigate all possible configurations where these data modalities can be used together and review the minimal solvers available in the literature. Our hybrid framework combines the advantages of all configurations, enabling robust and accurate estimation in challenging environments. In addition, we design a method for jointly estimating multiple vanishing point correspondences in two images, and a bundle adjustment that considers all relevant data modalities. Experiments on various indoor and outdoor datasets show that our approach outperforms point-based methods, improving AUC@10$^\circ$ by 1-7 points while running at comparable speeds. The source code of the solvers and hybrid framework will be made public.
Petr Hruby, Shaohui Liu, Rémi Pautrat, Marc Pollefeys, Daniel Barath
2023-09-27T21:43:04Z
http://arxiv.org/abs/2309.16040v1
# Handbook on Leveraging Lines for Two-View Relative Pose Estimation ###### Abstract We propose an approach for estimating the relative pose between calibrated image pairs by jointly exploiting points, lines, and their coincidences in a hybrid manner. We investigate all possible configurations where these data modalities can be used together and review the minimal solvers available in the literature. Our hybrid framework combines the advantages of all configurations, enabling robust and accurate estimation in challenging environments. In addition, we design a method for jointly estimating multiple vanishing point correspondences in two images, and a bundle adjustment that considers all relevant data modalities. Experiments on various indoor and outdoor datasets show that our approach outperforms point-based methods, improving AUC@10\({}^{\circ}\) by 1-7 points while running at comparable speeds. The source code of the solvers and hybrid framework will be made public. ## 1 Introduction Estimating the relative pose (, rotation and translation) between an image pair is a fundamental problem both in computer vision and robotics that has numerous real-world applications,, in 3D reconstruction [3, 9, 37, 71, 76, 86], visual localization [50, 58, 68, 69], simultaneous localization and mapping [16, 17, 22, 54], multi-view stereo [13, 26, 27, 43], and visual odometry [56, 57]. In this paper, we focus on estimating the relative pose in a hybrid manner, jointly from 2D line and point correspondences and their coincidences. This allows for being robust to various indoor and outdoor scene characteristics,, low-textured areas where lines tend to be more distinctive than points. The traditional approach for estimating relative pose in two images involves detecting [17, 49, 64, 81] and matching [66] local features to form a tentative point correspondences. They are then fed into a robust estimator, such as RANSAC or one of its variants [6, 8, 14, 40, 63], to simultaneously find the sought relative pose and the matches consistent with it. Although this point-based approach is still widely used and forms the cornerstone of many vision applications, it has certain weaknesses that deteriorate its accuracy in scenes dominated by homogeneous or repetitive regions. This poses a challenge, especially in indoor scenes [15, 72, 73] that often contain low-textured areas, walls, preventing to find distinctive features. Repetitive structures also frequently appear in man-made environments, windows on a facade, breaking the visual descriptor-based feature matching due to the implied ambiguity. Several alternative approaches have been proposed for relative pose estimation, including ones leveraging optical flow [18, 38], or using features that contain richer information than simply the point coordinates [4, 5]. While algorithms based on optical flow are widely used in SLAM pipelines [54], they assume a relatively small camera motion. Thus, they are not applicable to general relative pose estimation with cameras moving arbitrarily. Other methods exploit rich features, such as affine correspondences, to solve the problem with fewer matches than when using only points. This reduces the combinatorics of the robust estimation problem and can often improve both accuracy and runtime. However, these methods are also subject to the same weaknesses as point-based approaches in that they Figure 1: **Relative pose from points and lines.** We present all configurations to exploit point, line, and vanishing point correspondences for estimating the relative pose of two calibrated images. By combining the configurations within a hybrid RANSAC [24] framework our approach can handle typical failure cases of the widely used 5-point solver [55], in low textured areas. require features to be located on salient regions to estimate their affine shape accurately [51, 52, 53, 84]. Lines are known to be particularly useful, especially in low-textured areas, and are actively used for 3D localization [1, 29] or reconstruction using 2D-3D line matches [10, 34, 48, 62, 74, 82, 87]. Also, a growing number of papers investigate their potential when having more than two images [11, 19, 20, 23, 32, 33, 39]. However, their use for relative pose estimation in a stereo setup is limited as corresponding 2D lines do not impose explicit constraints on the relative camera pose [19]. There are several works leveraging lines for two-view geometry estimation. Guerero et al. [35] estimate a homography from four collinear line correspondences by the well-known direct linear transformation. Elqursh et al. [21] assumes a triplet of lines to be in a special configuration, allowing to estimate the relative camera rotation decoupled from the translation. This paper investigates the configurations where points and lines can be used to estimate the relative pose between two calibrated views. Even though there are several solvers proposed throughout the years [21, 35, 65] using lines and vanishing points to estimate relative pose, there is no comprehensive overview nor comparison of how such methods can be used in practice. We provide a list of the relevant point, vanishing point, and line configurations and review the minimal solvers available in the literature. Benefiting from this knowledge, we develop a unified framework that simultaneously exploits multiple data modalities in a hybrid manner to provide robust and accurate results even in challenging environments. The contributions are: * We investigate _all_ relevant data configurations, where points, lines, and their coincidences (_e.g_., vanishing points and junctions) can be used together. * We review the minimal solvers for the configurations available in the literature and provide an overview. * We transform the configurations not available in the literature to the known problems to solve them. * We develop a unified framework that simultaneously benefits from multiple feature types in a hybrid manner for estimating the relative pose. * In addition, we provide proof that the constraint derived from coplanar lines is equivalent to using line junctions while leading to more stable solvers. We propose a joint vanishing point estimation method between two images; and a local optimization algorithm that simultaneously optimizes over all data modalities. We demonstrate on several public, real-world, and large-scale datasets (both indoor and outdoor) that the proposed approach is superior to state-of-the-art methods relying only on point correspondences. ## 2 Relative Pose from Point and Line Matches In this section, we study the problem of calibrated relative pose estimation between two images from 2D point correspondences (PC), line correspondences (LC), and the vanishing points (VP) stemming from parallel lines. Point correspondences can come from line junctions [21], endpoints, or from an off-the-shelf feature detector and matcher, _e.g_., SuperPoint [17] with SuperGlue [66] or LoFTR [78]. Vanishing points are extracted from the detected line matches prior to the relative pose estimation procedure. ### Theoretical Background Here, we describe the theoretical concepts used in the paper. **Projection matrix \(\mathbf{P}_{i}\in\mathbb{R}^{3\times 4}\)** of the \(i\)-th camera is decomposed as \(\mathbf{P}_{i}=\mathbf{K}_{i}[\mathbf{R}_{i}\ \mathbf{t}_{i}]\), where \(\mathbf{K}_{i}\in\mathbb{R}^{3\times 3}\) is the intrinsic matrix, and \(\mathbf{R}_{i}\in\text{SO}(3),\mathbf{t}_{i}\in\mathbb{R}^{3}\) represent the rotation and translation, respectively. In case of having calibrated cameras, \(\mathbf{P}_{i}\) can be simplified to \(\mathbf{P}_{i}=[\mathbf{R}_{i}\ \mathbf{t}_{i}]\). **Relative pose \((\mathbf{R},\mathbf{t})\)** between two cameras \(\mathbf{P}_{1},\mathbf{P}_{2}\) is obtained as \(\mathbf{R}=\mathbf{R}_{2}\mathbf{R}_{1}^{\mathsf{T}}\), \(\mathbf{t}=\mathbf{t}_{2}-\mathbf{R}_{2}\mathbf{R}_{1}^{\mathsf{T}}\mathbf{t}_ {1}\). The **epipolar geometry**[36] relates the relative pose \((\mathbf{R},\mathbf{t})\) and a homogeneous 2D point correspondence \((\mathbf{p},\mathbf{p}^{\prime})\in\mathbb{R}^{3}\times\mathbb{R}^{3}\) (PC). Let \(\mathbf{X}\in\mathbb{R}^{3}\) be a point in space, \(\mathbf{p}\) be its projection into \(\mathbf{P}_{1}\), and \(\mathbf{p}^{\prime}\) be its projection into \(\mathbf{P}_{2}\). Projections \(\mathbf{p},\mathbf{p}^{\prime}\) are related by the epipolar constraint [36] as \(\mathbf{p}^{{}^{\prime}\mathsf{T}}\mathbf{F}\mathbf{p}=0\), where \(\mathbf{F}\) is a fundamental matrix relating \(\mathbf{P}_{1}\), \(\mathbf{P}_{2}\). If the cameras are calibrated, the constraint is simplified to \(\mathbf{p}^{{}^{\prime}\mathsf{T}}\mathbf{E}\mathbf{p}=0\), where \(\mathbf{E}\) is the essential matrix relating cameras \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\). The essential matrix \(\mathbf{E}\) is decomposed as \([\mathbf{t}]_{\times}\mathbf{R}\). Then, the epipolar constraint is written as \[\mathbf{p}^{{}^{\prime}\mathsf{T}}[\mathbf{t}]_{\times}\mathbf{R}\mathbf{p}=0. \tag{1}\] Equation (1) imposes one constraint on the relative pose \(\mathbf{R},\mathbf{t}\). Since the scale cannot be observed, the relative pose has five degrees of freedom, and it can be estimated from five point correspondences [55]. **Homography** relates planes projected into cameras \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\). Let \(\mathbf{\Pi}\) be a 3D plane, and \(\mathbf{X}\in\mathbf{\Pi}\) be a point on plane \(\mathbf{\Pi}\). Its projections \(\mathbf{p}\), \(\mathbf{p}^{\prime}\) into \(\mathbf{P}_{1}\), \(\mathbf{P}_{2}\) are related by \(\mathbf{p}^{\prime}\sim\mathbf{H}\mathbf{p}\), where \(\mathbf{H}\in\mathbb{R}^{3\times 3}\) depends only on \(\mathbf{P}_{1}\), \(\mathbf{P}_{2}\), and \(\mathbf{\Pi}\). Similarly, let \(\mathbf{L}_{1}\subset\mathbf{\Pi}\) be a line in its implicit form on plane \(\mathbf{\Pi}\). Its projections \(\mathbf{l}\), \(\mathbf{l}^{\prime}\) into \(\mathbf{P}_{1}\), \(\mathbf{P}_{2}\) are related by \[\mathbf{l}\sim\mathbf{H}^{\mathsf{T}}\mathbf{l}^{\prime}. \tag{2}\] We can estimate \(\mathbf{H}\) from 4 coplanar line corrs. (LC) [36]. **Vanishing point (VP)** is an intersection of 2D projections of parallel 3D lines. The homogeneous coordinates of vanishing point \(\mathbf{v}_{i}\) in camera \(j\) are \(\mathbf{v}_{i}\sim\mathbf{K}_{j}\mathbf{R}_{j}\mathbf{d}_{i}\), where \(\mathbf{d}_{i}\in\mathbb{R}^{3}\) is the direction of the \(i\)-th line in 3D. If the camera is calibrated, then this formula is simplified as \[\mathbf{v}_{i}\sim\mathbf{R}_{j}\mathbf{d}_{i}. \tag{3}\] Let us have 2 calibrated cameras \(\mathbf{P}_{1}=[\mathbf{R}_{1}\ \mathbf{t}_{1}]\), \(\mathbf{P}_{2}=[\mathbf{R}_{2}\ \mathbf{t}_{2}]\). Vanishing points \(\mathbf{v}\) in \(\mathbf{P}_{1}\), \(\mathbf{v}^{\prime}\) in \(\mathbf{P}_{2}\) are related by \[\mathbf{v}^{\prime}\sim\mathbf{R}_{2}\mathbf{R}_{1}^{\mathrm{T}}\mathbf{v}= \mathbf{R}\mathbf{v}, \tag{4}\] where \(\mathbf{R}=\mathbf{R}_{2}\mathbf{R}_{1}^{\mathrm{T}}\in\mathrm{SO}(3)\) is the relative rotation between \(\mathbf{P}_{1}\), \(\mathbf{P}_{2}\). Note that, if \(\mathbf{v}\), \(\mathbf{v}^{\prime}\) are normalized, there must hold: \[\mathbf{v}^{\prime}=\mathbf{R}\mathbf{v}\quad\text{or}\quad\mathbf{v}^{\prime }=-\mathbf{R}\mathbf{v}. \tag{5}\] A single vanishing point correspondence (VC) gives two constraints on rotation \(\mathbf{R}\). Two VCs fully determine \(\mathbf{R}\), and a third VC does not give additional constraints on the calibrated relative pose estimation. ### Possible Configurations Considering the constraints described in the previous section, the number of distinct configurations of points, vanishing points, and lines orthogonal to them for estimating the relative pose is limited. For their summary, see Fig. 6. All other configurations can be traced back to these or do not provide additional constraints for the relative pose. **Discussions on the completeness.** We give here a high-level explanation that the list of configurations is complete. The full proof is provided in Sec. A in the supp. mat. The relative pose has 5 degrees of freedom (DoF) [55]. The possible configurations can only have \(0\), \(1\), or \(2\) VPs. While 1 VP fixes 2 DoF, and 2 VPs fix 3 DoF [65], a third VP does not provide additional constraints. Moreover, one line orthogonal to a VP can create a second VP [21]; in the case of 2 VPs, a line orthogonal to one of them does not provide any new information. A point correspondence fixes 1 DoF [36]. Four coplanar points fix 5 DoF due to the homography constraint [36]. Since \(n<4\) points are always coplanar, their coplanarity does not add any new constraints. We use these facts to obtain the possible configurations of points, vanishing points, and lines orthogonal to them. Overall, the configurations can be clustered into five distinct categories, summarized in Table 10. **Obtaining all configurations.** To obtain more configurations, points can be replaced by lines with the following rules, where we adopt the solver notations of Fig. 6: * 3 PCs can be replaced by _3 coplanar lines_. Configuration 2-3-0 can thus be obtained from 5-0-0, and configuration 0-3-1 from 3-0-1. * If we have 4 coplanar points, we can replace each of them with a line [36]. Thus, the 4-0-0 configuration yields four additional ones: 3-1-0, 2-2-0, 1-3-0, 0-4-0. * One PC can be replaced with an intersection of two lines. We prove in Sec. B of supp. mat. that using constraints implied by coplanar lines is equivalent to using their junctions as corresponding points. * 2-0-1\({}^{\perp}\), 2-1-1\({}^{\perp}\), and 1-2-1\({}^{\perp}\) belong to the same family. In summary, the 5 categories in Table 10 yield the 13 configurations of Fig. 6 that are relevant to the problem. #### 2.2.1 Existing Solvers Some of the previously listed problems have already been discussed in the literature and solved. Such configurations are the 5 PC solver [44, 47, 55, 77], the 4 PC, 4 LC, and combined homography solvers [36], the 2-1-1\({}^{\perp}\) solver [21], and the 2-0-2 solver [65]. Configuration 2-3-0 is solved by 5 PC solver [55] after replacing the junctions with points. Configurations 3-0-1 and 0-3-1 are solved by transforming to the 3 point upright relative pose problem [25, 28, 42, 79]. We use these as off-the-shelf solvers in our experiments. In the next sections, we review the minimal problems that have not been mentioned in the literature, and transform them to previously solved problems. \begin{table} \begin{tabular}{c c c c} \hline \hline VPs & LC\(\downarrow\)VP & PC generic & PC coplanar \\ \hline 0 & N/A & 5 & 0 \\ 0 & N/A & 0 & 4 \\ 1 & 0 & 3 & 0 \\ 1 & 1 & 2 & 0 \\ 2 & 0 & 2 & 0 \\ \hline \hline \end{tabular} \end{table} Table 1: **Overview of relevant configurations** using point correspondences (PC), vanishing points (VP), and line correspondences (LC) orthogonal to them. Each row corresponds to one family of configurations. PC and LC can be used interchangeably under the conditions of Section A. Figure 2: **Overview of the relevant solvers** showing configurations of points, lines, and vanishing points relevant to calibrated relative pose estimation. Configuration X-Y-Z: number of X points, Y lines, and Z vanishing points. #### 2.2.2 Pose from IVC and 3PCs (3-0-1) We show here how to calculate the relative pose \(\mathbf{R}\), \(\mathbf{t}\) from one vanishing point correspondence and three point correspondences. Suppose that we are given a vanishing point match \((\mathbf{v},\mathbf{v}^{\prime})\), and three point correspondences \((\mathbf{p}_{1},\mathbf{p}_{1}^{\prime})\), \((\mathbf{p}_{2},\mathbf{p}_{2}^{\prime})\), \((\mathbf{p}_{3},\mathbf{p}_{3}^{\prime})\), normalized by the camera intrinsics. First, we are going to use the vanishing point to constrain rotation \(\mathbf{R}\). The corresponding vanishing points are related by (5) which provides two systems of equations \(\mathbf{v}_{1}^{\prime}=\mathbf{R}\mathbf{v}_{1}\) and \(-\mathbf{v}_{1}^{\prime}=\mathbf{R}\mathbf{v}_{1}\). If \(\mathbf{R}\in\text{SO}(3)\) is a valid rotation matrix, it satisfies at least one of these systems. Each one is of form \[\mathbf{x}^{\prime}=\mathbf{R}\mathbf{x}, \tag{6}\] where \(\|\mathbf{x}\|=\|\mathbf{x}^{\prime}\|=1\). Estimating \(\mathbf{R}\) and \(\mathbf{t}\) with a constraint in form (6) is similar to estimating the pose with known gravity [25, 42, 79]. We can resolve the sign ambiguity by checking the order of the lines, from which the VP was obtained. Based on (6), we can decompose \(\mathbf{R}\) as: \[\mathbf{R}=\mathbf{R}^{\prime\text{T}}_{\ \mathbf{x}}\mathbf{R}_{y}\mathbf{R} _{\mathbf{x}}, \tag{7}\] where \(\mathbf{R}_{\mathbf{x}}\) is a rotation that brings vector \(\mathbf{x}\) to \(y\)-axis, \(\mathbf{R}_{\mathbf{x}}^{\prime}\) brings \(\mathbf{x}^{\prime}\) to \(y\)-axis, and \(\mathbf{R}_{y}\) is rotation around the \(y\)-axis. Let \(\mathbf{b}_{2}=[0\ 1\ 0]^{\text{T}}\) denote the \(y\)-axis direction. We find rotation \(\mathbf{R}_{\mathbf{x}}\) using the Rodrigues formula as \[\mathbf{R}_{\mathbf{x}}=\mathbf{I}+\sin\alpha_{x}[\mathbf{a}_{\mathbf{x}}]_{ \times}+(1-\cos\alpha_{x})[\mathbf{a}_{\mathbf{x}}]_{\times}^{2},\] where \(\alpha_{x}=\arccos\mathbf{x}^{\text{T}}\mathbf{b}_{2}\) is the angle between vector \(\mathbf{x}\) and the \(y\)-axis, and \(\mathbf{a}_{\mathbf{x}}=(\mathbf{x}\times\mathbf{b}_{2})/\|\mathbf{x}\times \mathbf{b}_{2}\|\) is the normalized cross product of \(\mathbf{x}\) and the \(y\)-axis. We find rotation \(\mathbf{R}_{\mathbf{x}}^{\prime}\) in an analogous way. Now, we are going to find rotation \(\mathbf{R}_{y}\) and translation \(\mathbf{t}\) from the point correspondences. From (1), there holds \[\mathbf{p}_{i}^{\prime\text{T}}[\mathbf{t}]_{\times}\mathbf{R}_{\mathbf{x}}^{ \prime\text{T}}\mathbf{R}_{y}(\varphi)\mathbf{R}_{\mathbf{x}}\mathbf{p}_{i}=0, \ i\in\{1,2,3\}, \tag{8}\] where \[\mathbf{R}_{y}(\varphi)=\begin{bmatrix}\cos\varphi&0&-\sin\varphi\\ 0&1&0\\ \sin\varphi&0&\cos\varphi\end{bmatrix}.\] The essential matrix \([\mathbf{t}]_{\times}\mathbf{R}_{\mathbf{x}}^{\prime\text{T}}\mathbf{R}_{y}( \varphi)\mathbf{R}_{\mathbf{x}}\) from (8) is equal to \(\mathbf{R}^{\prime\text{T}}_{\mathbf{x}}[\mathbf{t}^{\prime}]_{\times}\mathbf{ R}_{y}(\varphi)\mathbf{R}_{\mathbf{x}}\) where \(\mathbf{t}^{\prime}=\mathbf{R}^{\prime}_{\mathbf{x}}\mathbf{t}\). We calculate \(\mathbf{q}_{i}=\mathbf{R}_{\mathbf{x}}\mathbf{p}_{i}\), \(\mathbf{q}_{i}^{\prime}=\mathbf{R}_{\mathbf{x}}^{\prime}\mathbf{p}_{i}^{\prime}\), and convert system (8) to: \[\mathbf{q}_{i}^{\prime\text{T}}[\mathbf{t}^{\prime}]_{\times}\mathbf{R}_{y}( \varphi)\mathbf{q}_{i}^{\prime}=0,\ i\in\{1,2,3\}.\] This is the problem of estimating the relative pose with upright rotation from \(3\) point correspondences. We solve this with the method from [79], which reduces the problem to a \(4\)-degree univariate polynomial. Note that the straightforward approach would yield a \(6\)-degree polynomial. Finally, we compose \(\mathbf{R}=\mathbf{R}_{\mathbf{x}}^{\prime\text{T}}\mathbf{R}_{y}(\varphi) \mathbf{R}_{\mathbf{x}}\), and \(\mathbf{t}=\mathbf{R}_{\mathbf{x}}^{\prime\text{T}}\mathbf{t}^{\prime}\). #### 2.2.3 Pose from IVC and 3LCs (0-3-1) Here, we discuss how to calculate the relative pose \(\mathbf{R},\mathbf{t}\) from one vanishing point correspondence and three correspondences of coplanar lines. Suppose that we are given a corresponding vanishing point pair \((\mathbf{v}_{1},\mathbf{v}_{1}^{\prime})\) and three coplanar line correspondences \((\mathbf{l}_{1},\mathbf{l}_{1}^{\prime})\), \((\mathbf{l}_{2},\mathbf{l}_{2}^{\prime})\), \((\mathbf{l}_{3},\mathbf{l}_{3}^{\prime})\). The line matches can be pairwise intersected to generate three corresponding point pairs as \(\mathbf{p}_{i}=\mathbf{l}_{i}\times\mathbf{l}_{j}\) and \(\mathbf{p}_{i}^{\prime}=\mathbf{l}_{i}^{\prime}\times\mathbf{l}_{j}^{\prime}\), where \((i,j)\in\{(1,2),(1,3),(2,3)\}\). We can use them together with the vanishing point correspondence to calculate the relative pose according to Sec. 2.2.2. #### 2.2.4 Pose from 1VC, 1PC, and 2LCs (1-2-1\({}^{\perp}\)) If one of the lines \(\mathbf{l}_{1}\) is orthogonal to the direction of vanishing point \(\mathbf{v}_{1}\), and the two lines intersect at \(\mathbf{p}_{2}=\mathbf{l}_{1}\times\mathbf{l}_{2}\), we can use line \(\mathbf{l}_{1}\) as the line \(\mathbf{l}\) together with \(\mathbf{v}_{1}\), \(\mathbf{p}_{1}\), \(\mathbf{p}_{2}\), and find the relative pose according to the 2-1-1\({}^{\perp}\) solver [21]. #### 2.2.5 Pose from 1VC, and 2PCs (2-0-1\({}^{\perp}\)) If the line passing through the points \(\mathbf{p}_{1}\), \(\mathbf{p}_{2}\) is orthogonal to the direction of the vanishing point \(\mathbf{v}_{1}\), we can use the line \(\mathbf{l}=\mathbf{p}_{1}\times\mathbf{p}_{2}\) together with \(\mathbf{v}_{1}\), \(\mathbf{p}_{1}\), \(\mathbf{p}_{2}\), and find the relative pose according to the 2-1-1\({}^{\perp}\) solver [21]. ## 3 Joint VP Estimation and Matching In this work, we need to compute the vanishing points of pairs of images \(I\) and \(I^{\prime}\), and to find the association between them. Instead of separately estimating vanishing points in both images and then matching them, we propose to jointly match and detect them at the same time. We first detect and match lines in two images using any existing line matcher [59, 61, 83, 85] and discard all the lines that are left unmatched. Given this association, we apply a multi-model fitting algorithm, _e.g_. [7], to jointly detect the VPs in both images. To do so, we define a minimal solver, used inside [7], that gets \(m=2\) line-to-line correspondences as input, and returns the implied VP match. Given line pairs \((\mathbf{l}_{1},\mathbf{l}_{2})\) in \(I\), and \((\mathbf{l}_{1}^{\prime},\mathbf{l}_{2}^{\prime})\) in \(I^{\prime}\), this means that the corresponding VPs are calculated as \(\mathbf{v}=\mathbf{l}_{1}\times\mathbf{l}_{2}\) and \(\mathbf{v}^{\prime}=\mathbf{l}_{1}^{\prime}\times\mathbf{l}_{2}^{\prime}\). For inlier counting in [7], we use the orthogonal distance in pixel, proposed in [80], to compute the distance from a VP to a line. A line match is considered inlier if its orthogonal distance is smaller than the inlier threshold in both images. Finally, we run the Levenberg-Marquardt numerical optimization [45] on the inliers of each VP pair. ## 4 Hybrid RANSAC on Points and Lines Now, we have a variety of solvers that utilize line, vanishing point and point (or junction) correspondences to estimate the relative pose. However, it is unclear which solver works the best in practice - there may not exist a best solver that works similarly well on all real-world scenarios. The accuracy of a particular solver depends on the structure of the underlying scene and the configurations of geometric entities. For example, point features may be enough to recover relative poses for well-textured image pairs, while they fail completely in case of lack of distinctive texture. Thus, we aim to adaptively employ all solvers covered in this paper within a hybrid RANSAC framework [12] to combine their advantages in a data-dependent manner. As proposed in [12], at each iteration of RANSAC, we first sample a minimal solver with respect to a probability distribution computed from the prior distribution and the inlier ratios of the corresponding geometric entities of each solver. Then, we sample a minimal set corresponding to the selected solver and solve for the relative pose. The termination criterion is adaptively determined for each solver similarly as in [12], depending on the inlier threshold of the corresponding geometric entities and the predefined confidence parameter. As the correctness of the line correspondences cannot be verified from the estimated relative pose, we pre-set the line inlier ratio to be 0.6 for computing the probability distribution to sample the minimal solver for each iteration. In our experiments, we set the prior probability to be uniform across all the solvers. Finally, a Ceres-based nonlinear optimization refines the estimated relative pose, minimizing the reprojection error on the point correspondences and the vanishing point error given the estimated rotation. ## 5 Experiments ### Synthetic tests **Numerical stability.** First, we generate a random rotation matrix \(\mathbf{R}_{\text{GT}}\), and a translation vector \(\mathbf{t}_{\text{GT}}\). To generate a PC, we sample a point \(\mathbf{X}\in\mathbb{R}^{3}\) from a Gaussian distribution with mean \([0,\ 0,\ 5]^{\text{T}}\) and standard deviation \(1\). We project \(\mathbf{X}\) into the first camera as \(\mathbf{p}\) and into the second one as \(\mathbf{q}\). To generate a LC in direction \(\mathbf{d}\), we sample a 3D point \(\mathbf{X}_{A}\) and a parameter \(\lambda\in\mathbb{R}\). We construct the second point as \(\mathbf{X}_{B}=\mathbf{X}_{A}+\lambda\mathbf{d}\). We project these points into both images to get projections of the 3D line. To generate vanishing point \(\mathbf{v}_{i}\), we sample a direction \(\mathbf{d}_{i}\). From \(\mathbf{d}_{i}\), we generate two parallel 3D lines and project them into the images. Vanishing points \(\mathbf{v}_{i}\) and \(\mathbf{v}_{i}^{\prime}\) are obtained as the intersections of the projected 2D lines. To generate a line orthogonal to a VP in direction \(\mathbf{d}_{i}\), we sample a random direction \(\mathbf{d}_{0}\), get direction \(\mathbf{d}=\mathbf{d}_{i}\times\mathbf{d}_{0}\) orthogonal to \(\mathbf{d}_{i}\), and sample a LC in direction \(\mathbf{d}\). To generate a tuple of \(k\) coplanar lines, we generate \(2k\) coplanar 3D points and use them as the endpoints. See the supplementary material for details. Let \(\mathbf{R}_{\text{est}}\), \(t_{\text{est}}\) be the rotation and translation estimated by a solver. We calculate the rotation error as the angle of the rotation represented as \(\mathbf{R}_{\text{est}}{}^{\text{T}}\mathbf{R}_{\text{gt}}\), and the translation error as the angle between vectors \(t_{\text{est}}\) and \(t_{\text{gt}}\). Hence, we generated \(n=100000\) random problem instances and ran the solvers on the noiseless samples. Figure 3 shows histograms of pose errors on a representative subset of the solvers, all of which are stable - there is no peak close to zero. **Tests with noise.** To evaluate the robustness of our solvers with respect to the input noise, we generate minimal problems similarly as in the previous section, and perturb the input with artificial noise. Namely, we set the focal length \(f=1000\), and add noise \(\frac{\sigma}{f}\) to each calibrated endpoint. To simulate the effect of junctions obtained from noisy lines, we generate two directions \(\mathbf{d}_{1}\), \(\mathbf{d}_{2}\), and four parameters \(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\in\mathbb{R}\). Then, we get four endpoints \(\mathbf{X}_{1}=\mathbf{X}+\lambda_{1}\mathbf{d}_{1}\), \(\mathbf{X}_{2}=\mathbf{X}-\lambda_{2}\mathbf{d}_{1}\), \(\mathbf{X}_{3}=\mathbf{X}+\lambda_{3}\mathbf{d}_{2}\), \(\mathbf{X}_{4}=\mathbf{X}-\lambda_{4}\mathbf{d}_{2}\), project them into both cameras, add the noise to the projected endpoints, and take their junction. Errors of a representative subset of the solvers with input noise are shown in Fig. 4 without local optimization, Fig. 5 with local optimization. The omitted solvers are extensions of the representative ones based on the same equations. **Orthogonality test.** Solvers 2-1-1\({}^{\perp}\), 1-2-1\({}^{\perp}\), and 2-0-1\({}^{\perp}\) assume that the angle between directions \(\mathbf{d}_{i}\), \(\mathbf{d}\) is \(90^{\circ}\). We perturb the angle between \(\mathbf{d}_{i}\), \(\mathbf{d}\) and measure the error of those solvers. The result is shown in the supplementary. ### Real-world Experiments **Datasets.** We test our method on a variety of real-world datasets, both indoors and outdoors (see Table 2). The 7Scenes dataset [73] is a RGB-D dataset for visual localization, including 7 indoor scenes. We use the original GT poses provided with the images, and select pairs of images among all test sequences. Since each sequence is densely sampled, we take every 10th image \(i\) and associate it with the image \(i+50\). ScanNet [15] is a large-scale RGB-D indoor dataset. It pictures some hard cases with low texture, where lines are expected to provide better contraints. We use the test set of 1500 images as in Super Figure 3: **Histograms of \(\log_{10}\) rot. and trans. errors in radians of minimal solvers computed from \(100000\) noiseless samples.** Glue [66]. The PhotoTourism dataset [75] is a large-scale outdoor dataset of landmark pictures collected from the Internet, with GT poses from SfM. We reuse the validation pairs of the CVPR Image Matching Workshop 2020 [41] with a total of 9900 pairs. ETH3D is an indoor-outdoor dataset [72]. We use the 13 scenes of the training set of the high resolution multi-view images, and sample all pairs of images with at least 500 GT keypoints in common. The KITTI dataset [30, 31] is an outdoor dataset focused on the driving scenario. We use the 11 sequences of the training split of the Visual Odometry Challenge. For every sequence, we sample every 10th frame, and form pairs of consecutive images. This results in 2319 image pairs. Finally, the LaMAR dataset [67] is an indoor-outdoor dataset focused on augmented reality. We use the images of the validation split on Hololens in the CAB building. We use consecutive images to form pairs, resulting in 1423 pairs. We use lines detected by DeepLSD [60] and matched with GlueStick [61]. While we were experimenting with a number of methods to obtain lines (_e.g._, LSD) and match (_e.g._, SOLD2) them, this combination leads to the best performance on all tested datasets. The vanishing points are calculated from these lines by Prog-X [7]. We run LoFTR [78] to obtain point correspondences. We generate junction correspondences from line segment pairs that actually intersect in both images. This proved to be a good heuristic to obtain accurate junctions. Also, we consider the line endpoints as additional point correspondences. **7Scenes dataset.** Table 3 presents the Area Under Curve (AUC) for the maximum rotation and translation errors, specifically \(\max{(\epsilon_{\mathbf{R}},\epsilon_{\mathbf{t}})}\), at error thresholds of \(5^{\circ}\), \(10^{\circ}\), and \(20^{\circ}\). Additionally, it reports the median pose error in degrees and the average time in milliseconds on the 7Scenes dataset. The _first_ row presents the results of the proposed method when applied to LoFTR [78] point correspondences, effectively acting as MSAC with non-linear final optimization. The _second_ row, labeled 5PC + junc., \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & 7Scenes [73] & ScanNet [15] & PhotoT. [41] & ETH3D [72] & KITTI[30] & LaMAR[67] \\ \hline \# images & 1610 & 1500 & 9900 & 1969 & 2319 & 1423 \\ GT type & Kinect & Kin. + CAD & SMM & LiDAR & Laser & Laser \\ Indoos & ✓ & ✓ & ✗ & ✓ & ✗ & ✓ \\ Quadrons & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 2: **Datasets overview.** We consider a variety of indoor/outdoor datasets with different GT modalities. Figure 4: **Average angular error in deg. of the proposed solvers** (see Fig. 6) over 100000 runs, as a function of the image noise (_top_), and the number of lines used for VP estimation (_bottom_). Image noise (i) and line number per VP (l/VP) are in the titles. Figure 5: **Angular error in deg. of the solvers**, averaged over 25000 runs, run with LO, as a function of (_top_) the image noise (i), (_middle_) the number of lines used for VP estimation (l/VP), and (_bottom_) the number of points used inside LO (pt/LO). The fixed parameters for each test are reported in the titles. extends this method to LoFTR correspondences and those derived from the endpoints of line matches and junctions. The _third_ row, denoted as 5PC + 4PC, employs the 5PC essential matrix and 4PC homography solvers together, representing the scenario where all point-based solvers are used. The _fourth_ row integrates the aforementioned solvers but also includes the line endpoints and junctions. The _fifth_ row introduces our proposed hybrid estimator, incorporating all point and line-based solvers. Finally, the last row also considers the line endpoints and junctions during estimation. From the insights offered by Table 3, the contribution of line endpoints and junctions appears non-essential on this dataset. However, the hybrid estimator proves superior, enhancing the AUC scores by 1-2 points. **ScanNet dataset.** Table 4 presents the AUC scores at error thresholds of \(5^{\circ}\), \(10^{\circ}\), and \(20^{\circ}\), alongside the median pose error in degrees and the average runtime in milliseconds for the ScanNet dataset. Our observations regarding the effectiveness of junctions and endpoints in this dataset align with those from 7Scenes; their advantage in enhancing accuracy remains ambiguous. While jointly employing the 5PC and 4PC solvers offers a noticeable accuracy improvement over solely using 5PC, with an average increase of approximately 1 AUC point, our proposed hybrid estimator realizes the most significant gains. This estimator, which integrates all line-based solvers, improves by 2-3 AUC points compared to point-only strategies. Even with these advancements, the computation remains marginally slower, ensuring real-time performance on this dataset. **PhotoTourism dataset.** Table 4 details the AUC scores at error thresholds of \(5^{\circ}\), \(10^{\circ}\), and \(20^{\circ}\). Additionally, it reports the median pose error in degrees and the average runtime in milliseconds for the PhotoTourism dataset. Line junctions and endpoints appear to be counterproductive on this particular dataset, causing a notable decline in accuracy for point-based estimators. Interestingly, our hybrid method manages to harness these elements, producing the most accurate results. It surpasses the baseline by a margin of 1-2 AUC points while maintaining real-time performance. **ETH3D dataset.** Table 6 details the results on the ETH3D dataset. Our observations from this dataset mirror those from the PhotoTourism dataset. Specifically, point-based solutions experience a decrease in accuracy when the LoFTR correspondences are combined with those derived from line junctions and endpoints. However, our proposed hybrid methodology successfully harnesses this extra data, resulting in a noteworthy enhancement of 2-3 AUC points. **KITTI dataset.** Table 7 outlines the outcomes on the KITTI dataset. In this distinct setting - characterized by a forward-moving camera - line endpoints and junctions enhance the performance of all evaluated methods, often by a significant margin. The proposed hybrid method, integrated with endpoints and junctions showcases top-tier accuracy, and it is on par with 5PC + junc. Notably, in this scenario, the hybrid method is the fastest and it is second fastest when using additional correspondences from the line matches. **LaMAR dataset.** Table 8 presents the results on the LaMAR dataset. Within this dataset, the integration of line endpoints and junctions results in a marked improvement, enhancing accuracy by 3-6 AUC points. The proposed approach, which simultaneously utilizes LoFTR, endpoint, junction, and line matches, stands out. Compared to the baseline, this method manifests a substantial boost in performance, improving by 4-7 AUC points on average. **Vanishing Point Detection and Optimization.** As described in Section 3, we simultaneously detect and match vanishing points in pairs of images, using the matched lines. The detection of VPs itself is done with Progressive-X [7]. The proposed joint estimation runs for \(2.95\) ms per pair on average on 7Scenes. Running Prog-X independently on the images and then matching the VPs takes \(3.67\) ms. After detecting the VPs, we further refine them with a least square \begin{table} \begin{tabular}{l|c c c|c c} \hline \hline \multirow{2}{*}{Solver} & \multicolumn{2}{c|}{Pose Accuracy \(\uparrow\)} & \multicolumn{1}{c}{Med. \(\downarrow\)} & \multicolumn{1}{c}{Time} \\ & AUC@\(5^{\circ}\) & @\(10^{\circ}\) & @\(20^{\circ}\) & err. (\({}^{\circ}\)) & (ms) \\ \hline 5PC & 20.8 & 40.2 & 58.1 & 3.1 & **10.2** \\ 5PC + junc. & 20.9 & 39.8 & 58.3 & 3.2 & 32.6 \\ 5PC + 4PC & 21.7 & 41.0 & 58.7 & 3.0 & 29.9 \\ 5PC + 4PC + junc. & 21.9 & 40.8 & 58.8 & 3.0 & 22.6 \\ \hline Hybrid & **23.1** & **42.5** & **60.0** & **2.9** & 22.6 \\ Hybrid + junc. & 22.3 & 41.6 & 59.4 & 3.0 & 53.0 \\ \hline \hline \end{tabular} \end{table} Table 4: **Relative pose estimation on ScanNet [15].** We report the performance of the proposed method on LoFTR [78] point and DeepLSD + GlueStick [60, 61] line correspondences with the 5PC solver [55], with the 5PC + 4PC solvers [36], and with all line-based solvers (Hybrid) with line junctions and endpoints (+ junc). The best results are in **bold**, and the second bests are underlined. \begin{table} \begin{tabular}{l|c c c|c c} \hline \hline \multirow{2}{*}{Solver} & \multicolumn{2}{c|}{Pose Accuracy \(\uparrow\)} & \multicolumn{1}{c}{Med. \(\downarrow\)} & \multicolumn{1}{c}{Time} \\ & AUC@\(5^{\circ}\) & @\(10^{\circ}\) & @\(20^{\circ}\) & err. (\({}^{\circ}\)) & (ms) \\ \hline 5PC & 16.3 & 36.6 & 57.5 & 3.5 & **77.1** \\ 5PC + junc. & 16.2 & 36.8 & 57.9 & 3.6 & 96.7 \\ 5PC + 4PC & 16.1 & 36.8 & 57.9 & 3.6 & 88.4 \\ 5PC + 4PC + junc. & 16.6 & 37.1 & 57.7 & 3.5 & 98.5 \\ \hline Hybrid & **17.3** & **38.6** & 59.1 & **3.4** & 206.0 \\ Hybrid + junc. & 16.8 & 38.4 & **59.3** & 3.5 & 214.0 \\ \hline \hline \end{tabular} \end{table} Table 3: **Relative pose estimation on 7Scenes [73].** We report the performance of the proposed method on LoFTR [78] point and DeepLSD + GlueStick [60, 61] line correspondences with the 5PC solver [55], with the 5PC + 4PC solvers [36], and with all line-based solvers (Hybrid) with line junctions and endpoints (+ junc). The best results are in **bold**, and the second bests are underlined. optimization using the Ceres solver [2]. For each vanishing point \(\mathbf{v}\), we gather all inlier lines \(\mathbf{l}_{i}\) and re-fit the VP to these inliers, minimizing the sum of squared distances between the VP and the lines: \(v_{\text{refined}}=\arg\min_{\mathbf{v}}\sum_{\mathbf{l}_{i}}\delta(\mathbf{v}, \mathbf{l}_{i})^{2}\), where \(\delta\) is the line-VP distance introduced in [80]. The improvement from the optimization and the joint estimation is reported in Table 9 on the 7Scenes dataset when using the VP-based solvers independently. Both the joint estimation and the optimization improve, and the best results are obtained when both are used to get accurate VPs. ## 6 Conclusion In this paper, we have delved into exploiting 2D point and line correspondences to estimate the calibrated relative pose of two cameras. Our findings underscore that while leveraging line correspondences is not always straightforward, strategic incorporation of their endpoints, junctions, vanishing points, and line-based solvers can lead to a consistent improvement over traditional point-based methods. This nuanced approach improves across a diverse range of six datasets, from indoor to outdoor scenarios to applications in self-driving, mixed reality, and Structure-from-Motion. We believe our findings will serve as a helpful guide for those looking to use line correspondences in relative pose estimation. We will make the code publicly available. ###### Acknowledgements. We thank Marcel Geppert for helping to review this paper. Daniel Barath was supported by the ETH Postdoc Fellowship. \begin{table} \begin{tabular}{l|r r r|r r} \hline \hline \multirow{2}{*}{Solver} & \multicolumn{3}{c|}{Pose Accuracy \(\uparrow\)} & \multicolumn{1}{c}{Med. \(\downarrow\)} & \multicolumn{1}{c}{Time} \\ & AUC@5\({}^{\circ}\) & @10\({}^{\circ}\) & @20\({}^{\circ}\) & err. (\({}^{\circ}\)) & (ms) \\ \hline 5PC & 59.5 & 74.6 & 85.4 & **0.9** & 42.6 \\ 5PC + junc. & 54.0 & 70.2 & 82.7 & 1.1 & 71.7 \\ 5PC + 4PC & 58.8 & 73.9 & 85.0 & 1.0 & **42.3** \\ 5PC + 4PC + junc. & 53.1 & 69.3 & 82.1 & 1.2 & 66.1 \\ \hline Hybrid & **61.3** & **75.9** & 86.1 & **0.9** & 68.4 \\ Hybrid + junc. & 61.1 & **75.9** & **86.2** & **0.9** & 82.7 \\ \hline \hline \end{tabular} \end{table} Table 5: **Relative pose estimation on PhotoTourism [41]. We report the performance of the proposed method on LoFTR [78] point and DeepLSD + GlueStick [60, 61] line matches with the 5PC solver [55], with the 5PC + 4PC solvers [36], and with all line-based solvers (Hybrid) with line junctions and endpoints (+ junc). The best results are in bold, and the second bests are underlined.** \begin{table} \begin{tabular}{l|r r r|r r} \hline \hline \multirow{2}{*}{Solver} & \multicolumn{3}{c|}{Pose Accuracy \(\uparrow\)} & \multicolumn{1}{c}{Med. \(\downarrow\)} & \multicolumn{1}{c}{Time} \\ & AUC@5\({}^{\circ}\) & @10\({}^{\circ}\) & @20\({}^{\circ}\) & err. (\({}^{\circ}\)) & (ms) \\ \hline 5PC & 61.8 & 70.6 & 75.8 & **0.7** & 277.8 \\ 5PC + junc. & **63.0** & **72.2** & _77.7_ & **0.7** & 321.4 \\ 5PC + 4PC & 61.1 & 70.4 & 75.7 & **0.7** & **198.0** \\ 5PC + 4PC + junc. & 61.6 & 71.0 & 77.0 & **0.7** & 238.6 \\ \hline Hybrid & 61.1 & 70.1 & 75.8 & **0.7** & 229.7 \\ Hybrid + junc. & 62.4 & **72.2** & **77.9** & **0.7** & 250.5 \\ \hline \hline \end{tabular} \end{table} Table 7: **Relative pose estimation on KITTI [30, 31]. We report the performance of the proposed method on LoFTR [78] point and DeepLSD + GlueStick [60, 61] line correspondences with the 5PC solver [55], with the 5PC + 4PC solvers [36], and with all line-based solvers (Hybrid) with line junctions and endpoints (+ junc). The best results are in bold, and the second bests are underlined.** \begin{table} \begin{tabular}{l|r r r|r r} \hline \hline \multirow{2}{*}{Solver} & \multicolumn{3}{c|}{Pose Accuracy \(\uparrow\)} & \multicolumn{1}{c}{Med. \(\downarrow\)} & \multicolumn{1}{c}{Time} \\ & AUC@5\({}^{\circ}\) & @10\({}^{\circ}\) & @20\({}^{\circ}\) & err. (\({}^{\circ}\)) & (ms) \\ \hline 5PC & 22.6 & 37.3 & 50.7 & 2.9 & **40.0** \\ 5PC + junc. & 25.9 & 41.6 & 56.0 & 2.4 & 64.3 \\ 5PC + 4PC & 22.2 & 37.2 & 51.1 & 3.0 & 50.0 \\ 5PC + 4PC + junc. & 25.0 & 40.7 & 55.0 & 2.5 & 63.9 \\ \hline Hybrid & 24.7 & 39.8 & 53.0 & 2.7 & 81.2 \\ Hybrid + junc. & **26.9** & **43.3** & **57.9** & **2.3** & 156.4 \\ \hline \hline \end{tabular} \end{table} Table 8: **Relative pose estimation on LaMAR [67]. We report the performance of the proposed method on LoFTR [78] point and DeepLSD + GlueStick [60, 61] line correspondences with the 5PC solver [55], with the 5PC + 4PC solvers [36], and with all line-based solvers (Hybrid) with line junctions and endpoints (+ junc). The best results are in bold, and the second bests are underlined.** \begin{table} \begin{tabular}{l|r r r|r r} \hline \hline \multirow{2}{*}{Solver} & \multicolumn{3}{c|}{Pose Accuracy \(\uparrow\)} & \multicolumn{1}{c}{Med. \(\downarrow\)} & \multicolumn{1}{c}{Time} \\ & AUC@5\({}^{\circ}\) & @10\({}^{\circ}\) & @20\({}^{\circ}\) & err. (\({}^{\circ}\)) & (ms) \\ \hline 5PC & 22.6 & 37.3 & 50.7 & 2.9 & **40.0** \\ 5PC + junc. & 25.9 & 41.6 & 56.0 & 2.4 & 64.3 \\ 5PC + 4PC & 22.2 & 37.2 & 51.1 & 3.0 & 50.0 \\ 5PC + 4PC + junc. & 25.0 & 40.7 & 55.0 & 2.5 & 63.9 \\ \hline Hybrid & 24.7 & 39.8 & 53.0 & 2.7 & 81.2 \\ Hybrid + junc. & **26.9** & **43.3** & **57.9** & **2.3** & 156.4 \\ \hline \hline \end{tabular} \end{table} Table 8: **Relative pose estimation on LaMAR [67]. We report the performance of the proposed method on LoFTR [78] point and DeepLSD + GlueStick [60, 61] line correspondences with the 5PC solver [55], with the 5PC + 4PC solvers [36], and with all line-based solvers (Hybrid) with line junctions and endpoints (+ junc). The best results are in bold, and the second bests are underlined.** \begin{table} \begin{tabular}{l|r r r} \hline \hline & Standard & VP joint & VP opt. & Both \\ \hline 3-0-1 & 19.6 & 21.8 & 21.4 & **23.6** \\ 0-3-1 & 9.1 & 11.0 & 10.2 & **12.2** \\ 2-0-2 & 4.8 & 6.0 & 5.9 & **7.1** \\ 2-1-1\({}^{\perp}\) & 20.4 & 21.8 & 21.5 & **23.1** \\ 1-2-1\({}^{\perp}\) & 17.7 & 19.3 & 18.2 & **20.1** \\ \hline \hline \end{tabular} \end{table} Table 9: **Ablation study of VP estimation on 7Scenes [73]. We report the AUC@10\({}^{\circ}\) score of representative solvers using VPs detected independently in each image and then matched (Standard), VPs detected jointly as proposed in Section 3 (VP joint), VPs after numerical optimization (VP opt.), and when using both (Both).** ## Appendix A Complete List of Configurations In this section, we will provide the complete list of configurations that can be obtained using points, coplanar lines, vanishing points, and lines orthogonal to them. The section is separated into four subsections. In the first subsection, we give a complete list of configurations of points, vanishing points, and lines orthogonal to them. In the second subsection, we show how to extend these configurations with lines. In the last two subsections, we prove two propositions needed to obtain the list. ### Discussion on Completeness Here, we are going to give a complete list of configurations of points, vanishing points, and lines orthogonal to them that can be practically used for the estimation of relative pose between two cameras. To show this, we consider the following rules: * Calibrated relative pose has 5 degrees of freedom [55]. * The considered configurations have 0, 1, or 2 vanishing points (VPs) since a third vanishing point does not provide any new information (Sec. A.3). * One vanishing point fixes 2 degrees of freedom. (Sec. 2.2.2 in the main paper) * Two vanishing points fix 3 degrees of freedom. (Sec. 2.2.4 in the main paper) * A line orthogonal to a vanishing point can create a second VP [21], in the case of 2VPs, it does not add any new constraints (Sec. A.4). * A single point correspondence fixes a single degree of freedom as discussed in [36]. * Four coplanar points determine calibrated relative pose since via a homography [36], which can be decomposed to the relative pose [70]. * Since \(n<4\) points are always coplanar, their coplanarity does not add any new constraints. Using these rules, we can obtain all possible configurations of points, vanishing points, and lines orthogonal to them that can be used for relative pose estimation. There are five such configurations that are summarized in Table 10. ### Obtaining all configurations To obtain more configurations, we can replace points by lines according to the following rules: * Three point correspondences can be replaced by _three coplanar lines_, that intersect in these points. Therefore, configuration 2-3-0 can be obtained from 5-0-0, and 0-3-1 from 3-0-1. * If we have four coplanar points, we can replace each with a line in the same plane [36] since the coplanar points and lines are transformed by the homography. To estimate the homography from a minimal sample, the sum of the number of points and lines must be 4. Therefore, we can obtain four new configurations from 4-0-0 as: 3-1-0, 2-2-0, 1-3-0, 0-4-0. * One point correspondence can be replaced with an intersection of two lines. We prove in Sec. B that the constraints imposed by the coplanarity of the intersecting lines are equivalent to using the junction as a point correspondence, _i.e._, a line junction only gives us one independent constraint. Furthermore, the configuration 2-1-1\({}^{\perp}\) can be modified in the following way: * If one of the points is replaced with a line junction, and one of the lines building the line junction is orthogonal to the vanishing point, we obtain configuration 1-2-1\({}^{\perp}\). * If the line passing through the points is orthogonal to the vanishing point, we obtain configuration 2-0-1\({}^{\perp}\). * There are no other ways to use two points in order to obtain a line. In both cases listed, it is possible to extract the orthogonal line, obtain the second VP from it, use both VPs to calculate rotation, and the points to calculate translation. If we modify the configurations from Table 10 with the rules from this section, we obtain 13 configurations shown in Figure 6. Each of these configurations can be further modified by replacing any of the PCs with line junctions. ### Number of Vanishing Points in Calibrated Minimal Configurations Here, we show that the congifigurations that can be practically used for estimating calibrated relative pose between two views contain 0, 1, or 2 vanishing points. Let us have three generic vanishing point correspondences \((\mathbf{v}_{1},\mathbf{v}^{\prime}_{1})\), \((\mathbf{v}_{2},\mathbf{v}^{\prime}_{2})\), \((\mathbf{v}_{3},\mathbf{v}^{\prime}_{3})\). We want to find all relative poses \(\mathbf{R}\), \(\mathbf{t}\) that are consistent with the vanishing points. We show in the main paper that for a generic configura \begin{table} \begin{tabular}{c c c c c} \hline \hline VPs & LC\(\perp\)VP & PC generic & PC coplanar & Code \\ \hline 0 & N/A & 5 & 0 & 5-0-0 \\ 0 & N/A & 0 & 4 & 4-0-0 \\ 1 & 0 & 3 & 0 & 3-0-1 \\ 1 & 1 & 2 & 0 & 2-1-1\({}^{\perp}\) \\ 2 & 0 & 2 & 0 & 2-0-2 \\ \hline \hline \end{tabular} \end{table} Table 10: **Overview of relevant configurations** using point correspondences (PC), vanishing points (VP), and line correspondences (LC) orthogonal to them. Each row corresponds to one family of configurations. We give a code in format X-Y-Z, where X is the number of points, Y is the number of lines, and Z is the number of vanishing points. tion, the first two vanishing points are consistent with exactly \(4\) rotation matrices \(\mathbf{R}_{a}\), \(\mathbf{R}_{b}\), \(\mathbf{R}_{c}\), \(\mathbf{R}_{d}\). Since the first two vanishing points already fix a finite set of rotations, the third vanishing point can only be used to constrain translation. We want to find a set of all translations \(\mathbf{t}\in\mathbb{R}^{3}\) that satisfy the epipolar constraint as follows: \[\mathbf{v}_{3}^{\prime T}[\mathbf{t}]_{\times}\mathbf{R}\mathbf{v}_{3}=0.\] Because \((\mathbf{v}_{3},\mathbf{v}_{3}^{\prime})\) is a vanishing point correspondence, there holds \(\mathbf{v}_{3}^{\prime}=\mathbf{R}\mathbf{v}_{3}\). Therefore, we can rewrite the epipolar constraint as \[\mathbf{v}_{3}^{\prime T}[\mathbf{t}]_{\times}\mathbf{v}_{3}^{\prime}=0.\] The left side is equal to \(\mathbf{v}_{3}^{\prime\,\mathrm{T}}(\mathbf{t}\times\mathbf{v}_{3}^{\prime})\), which is equal to zero for every \(\mathbf{t}\in\mathbb{R}^{3}\). Therefore, the third vanishing point does not impose any constraints on \(\mathbf{t}\). Thus, practical configurations can only have 0, 1, or 2 vanishing points. ### Use of a Line Orthogonal to a VP Here, we show that using a line orthogonal to a VP only makes sense if there is a single VP. Then, it can be used together with the VP to fix the rotation as shown in [21]. It is clear that if there is no vanishing point, there cannot be any line orthogonal to a vanishing point. Now, we will show that if there are two vanishing points, the line orthogonal to one of them does not fix any degrees of freedom. If there are two vanishing points, the rotation \(\mathbf{R}\) is already fixed by these vanishing points. Therefore, the orthogonal line could only be used for fixing translation. Let us have two vanishing point correspondences \((\mathbf{v}_{1},\mathbf{v}_{1}^{\prime})\) and \((\mathbf{v}_{2},\mathbf{v}_{2}^{\prime})\) that yield rotation matrix \(\mathbf{R}\), and a line correspondence \((\mathbf{l},\mathbf{l}^{\prime})\) that is supposed to be orthogonal to the vanishing point \((\mathbf{v}_{1},\mathbf{v}_{1}^{\prime})\). Let \(\mathbf{L}\) denote the 3D line that can be obtained by backprojecting the 2D line \(\mathbf{l}\). The direction \(\mathbf{d}\) of \(\mathbf{L}\) can be obtained as \(\mathbf{d}=1\times\mathbf{v}_{1}\). Furthermore, the direction can be obtained as \(\mathbf{R}^{\mathrm{T}}(\mathbf{l}^{\prime}\times\mathbf{v}_{1}^{\prime})\). The solution exists if and only if \(\mathbf{l}\times\mathbf{v}_{1}\sim\mathbf{R}^{\mathrm{T}}(\mathbf{l}^{\prime} \times\mathbf{v}_{1}^{\prime})\). If this equation does not hold, there is no solution. Therefore, let us suppose that this equation holds. Then, for every translation \(\mathbf{t}\in\mathbb{R}^{3}\), we can uniquely triangulate the 3D line \(\mathbf{L}\). The direction of this triangulated line \(\mathbf{L}\) has to be \(\mathbf{d}\). Therefore, the orthogonality of line correspondence \((\mathbf{l},\mathbf{l}^{\prime})\) to a vanishing point correspondence \((\mathbf{v}_{1},\mathbf{v}_{1}^{\prime})\) does not impose any constraints on translation \(\mathbf{t}\), which concludes the proof that in the case of 2 VPs, a line orthogonal to a VP does not impose any new constraints on the relative pose. ## Appendix B Relation of Coplanarity and Junctions In the main paper, we design solvers that leverage line junctions to obtain point correspondences from line correspondences. In this section, we give more details on this process, and we prove that the constraints implied by coplanar lines are equivalent to using line junctions. We use the notation from the main paper. **Line junctions.** If two lines \(\mathbf{L}_{1}\), \(\mathbf{L}_{2}\) in space intersect, they share a point \(\mathbf{X}\in\mathbb{R}^{3}\). Let \(\mathbf{l}_{1},\mathbf{l}_{2}\in\mathbb{R}^{3}\) be homogeneous coordinates of the projections of \(\mathbf{L}_{1}\), \(\mathbf{L}_{2}\) into camera \(\mathbf{P}\). Then, the projection \(\mathbf{p}\) of \(\mathbf{X}\) can be obtained as the intersection of \(\mathbf{l}_{1},\mathbf{l}_{2}\) as \(\mathbf{p}=\mathbf{l}_{1}\times\mathbf{l}_{2}\). Let us have two cameras \(\mathbf{P}_{1}\), \(\mathbf{P}_{2}\). Let \(\mathbf{l}_{1},\mathbf{l}_{2}\) be the projections of \(\mathbf{L}_{1}\), \(\mathbf{L}_{2}\) into \(\mathbf{P}_{1}\), and \(\mathbf{l}_{1}^{\prime},\mathbf{l}_{2}^{\prime}\) the projections into \(\mathbf{P}_{2}\). Then, the intersections \(\mathbf{l}_{1}\times\mathbf{l}_{2}\), and \(\mathbf{l}_{1}^{\prime}\times\mathbf{l}_{2}^{\prime}\) present a valid point correspondence between cameras \(\mathbf{P}_{1}\), \(\mathbf{P}_{2}\). According to the epipolar constraint, there holds: \[(\mathbf{l}_{1}^{\prime}\times\mathbf{l}_{2}^{\prime})^{\mathrm{T}}[\mathbf{t} ]_{\times}\mathbf{R}(\mathbf{l}_{1}\times\mathbf{l}_{2}), \tag{9}\] where \(\mathbf{R}\), \(\mathbf{t}\) is the relative pose between cameras \(\mathbf{P}_{1}\), \(\mathbf{P}_{2}\). **Coplanar lines.** If two lines in space \(\mathbf{L}_{1}\), \(\mathbf{L}_{2}\) are coplanar, their projections \(\mathbf{l}_{1}\), \(\mathbf{l}_{2}\) into the first camera \(\mathbf{P}_{1}\), and \(\mathbf{l}_{1}^{\prime}\), \(\mathbf{l}_{2}^{\prime}\) into the second camera \(\mathbf{P}_{2}\) are related by the same homography matrix \(\mathbf{H}\) as follows: \[\mathbf{l}_{1}\sim\mathbf{H}^{\mathrm{T}}\mathbf{l}_{1}^{\prime},\ \ \mathbf{l}_{2}\sim \mathbf{H}^{\mathrm{T}}\mathbf{l}_{2}^{\prime}. \tag{10}\] If the cameras \(\mathbf{P}_{1}\), \(\mathbf{P}_{2}\) are calibrated, the homography has the form \(\mathbf{H}=\mathbf{R}-\mathbf{t}\mathbf{n}^{\mathrm{T}}\), where \(\mathbf{n}\in\mathbb{R}^{3}\) is the normal of the plane defined by lines \(\mathbf{L}_{1}\), \(\mathbf{L}_{2}\). Then, (10) becomes: \[\mathbf{l}_{1}\sim(\mathbf{R}^{\mathrm{T}}-\mathbf{n}\mathbf{t}^{\mathrm{T}}) \mathbf{l}_{1}^{\prime},\ \ \mathbf{l}_{2}\sim(\mathbf{R}^{\mathrm{T}}-\mathbf{n}\mathbf{t}^{\mathrm{T}}) \mathbf{l}_{2}^{\prime}. \tag{11}\] Figure 6: **Overview of the relevant solvers** showing configurations of points, lines, and vanishing points relevant to calibrated relative pose estimation. Configuration X-Y-Z: number of X points, Y lines, and Z vanishing points. ### Proof of Equivalence of Coplanar Lines and Junctions Now, we are going to show that the coplanarity constraint (9), and the common homography constraint (11) are equivalent, _i.e._, for generic \(\mathsf{l}_{1}\), \(\mathsf{l}_{2}\), \(\mathsf{I}_{1}^{\prime}\), \(\mathsf{I}_{2}^{\prime}\), the set of relative poses satisfying (9) is equal to the set of relative poses satisfying constraint (11). #### b.1.1 Coplanarity \(\implies\) Junction First, we are going to show that if \(\mathbf{R}\), \(\mathbf{t}\) satisfies (11) for generic projections \(\mathsf{l}_{1}\), \(\mathsf{l}_{2}\), \(\mathsf{I}_{1}^{\prime}\), \(\mathsf{I}_{2}^{\prime}\), then (9) holds. **Proposition 1**.: _Let \(\mathsf{l}_{1}\), \(\mathsf{l}_{2}\), \(\mathsf{I}_{1}^{\prime}\), \(\mathsf{I}_{2}^{\prime}\) be generic line projections. If \(\mathbf{R}\), \(\mathbf{t}\) satisfies (11) for generic projections \(\mathsf{l}_{1}\), \(\mathsf{l}_{2}\), \(\mathsf{I}_{1}^{\prime}\), \(\mathsf{I}_{2}^{\prime}\), then (9) holds._ Proof.: Let \(\mathbf{R}\), \(\mathbf{t}\) be a relative pose satisfying (11). Then, there is a normal \(\mathbf{n}\in\mathbb{R}^{3}\), for which (11) holds. We get a cross product of the equations in (11) to get: \[\mathsf{l}_{1}\times\mathsf{l}_{2}\sim(\mathbf{R}^{\mathsf{T}}- \mathbf{n}\mathbf{t}^{\mathsf{T}})^{\mathsf{T}}(\mathsf{I}_{1}^{\prime}\times \mathsf{I}_{2}^{\prime}),\] \[(\mathbf{R}-\mathbf{t}\mathbf{n}^{\mathsf{T}})(\mathsf{l}_{1} \times\mathsf{I}_{2})\sim(\mathsf{I}_{1}^{\prime}\times\mathsf{I}_{2}^{\prime }).\] We multiply both sides of the equation by \([\mathbf{t}]_{\times}\) from left and use the fact that \(\mathbf{x}\times\mathbf{x}=0\) to get: \[[\mathbf{t}]_{\times}(\mathbf{R}-\mathbf{t}\mathbf{n}^{\mathsf{T }})(\mathsf{l}_{1}\times\mathsf{I}_{2})\sim[\mathbf{t}]_{\times}(\mathsf{I}_{ 1}^{\prime}\times\mathsf{I}_{2}^{\prime}),\] \[([\mathbf{t}]_{\times}\mathbf{R}-[\mathbf{t}]_{\times}\mathbf{t} \mathbf{n}^{\mathsf{T}})(\mathsf{l}_{1}\times\mathsf{I}_{2})\sim[\mathbf{t}]_{ \times}(\mathsf{I}_{1}^{\prime}\times\mathsf{I}_{2}^{\prime}),\] \[[\mathbf{t}]_{\times}\mathbf{R}(\mathsf{l}_{1}\times\mathsf{I}_{2}) \sim[\mathbf{t}]_{\times}(\mathsf{I}_{1}^{\prime}\times\mathsf{I}_{2}^{ \prime}).\] Now, we multiply both sides of the equation by \((\mathsf{I}_{1}^{\prime}\times\mathsf{I}_{2}^{\prime})^{\mathsf{T}}\) and use the fact that \(\mathbf{y}^{\mathsf{T}}(\mathbf{x}\times\mathbf{y})=0\) to get: \[(\mathsf{I}_{1}^{\prime}\times\mathsf{I}_{2}^{\prime})^{\mathsf{ T}}[\mathbf{t}]_{\times}\mathbf{R}(\mathsf{l}_{1}\times\mathsf{I}_{2})\sim( \mathsf{I}_{1}^{\prime}\times\mathsf{I}_{2}^{\prime})^{\mathsf{T}}[\mathbf{t}] _{\times}(\mathsf{I}_{1}^{\prime}\times\mathsf{I}_{2}^{\prime}),\] \[(\mathsf{I}_{1}^{\prime}\times\mathsf{I}_{2}^{\prime})^{\mathsf{ T}}[\mathbf{t}]_{\times}\mathbf{R}(\mathsf{l}_{1}\times\mathsf{I}_{2})=0.\] This is exactly the epipolar constraint (9). #### b.1.2 Junction \(\implies\) Coplanarity Now, we are going to show that if \(\mathbf{R}\), \(\mathbf{t}\) satisfies (9) for a generic configuration of \(\mathsf{l}_{1}\), \(\mathsf{l}_{2}\), \(\mathsf{I}_{1}^{\prime}\), \(\mathsf{I}_{2}^{\prime}\), then (11) holds. **Proposition 2**.: _Let \(\mathsf{l}_{1}\), \(\mathsf{l}_{2}\), \(\mathsf{I}_{1}^{\prime}\), \(\mathsf{I}_{2}^{\prime}\) be a generic configuration of line projections. If \(\mathbf{R}\), \(\mathbf{t}\) satisfies (9) for a generic configuration of \(\mathsf{l}_{1}\), \(\mathsf{l}_{2}\), \(\mathsf{I}_{1}^{\prime}\), \(\mathsf{I}_{2}^{\prime}\), then (11) holds._ Proof.: Let \(\mathbf{R}\), \(\mathbf{t}\) be a relative pose satisfying (9). Then, there is a point \(\mathbf{X}\in\mathbb{R}^{3}\) that is projected by \(\mathbf{P}_{1}\) to the intersection \(\mathsf{l}_{1}\times\mathsf{I}_{2}\), and by \(\mathbf{P}_{2}\) to the intersection \(\mathsf{I}_{1}^{\prime}\times\mathsf{I}_{2}^{\prime}\)[36]. Let \(\mathbf{\Pi}_{1}\) be the plane obtained by backprojecting \(\mathsf{l}_{1}\). Then, \(\mathbf{X}\) lies on \(\mathbf{\Pi}_{1}^{\prime}\) obtained by backprojecting \(\mathsf{I}_{1}^{\prime}\). Therefore, \(\mathbf{X}\) lies on the intersection \(\mathbf{\Pi}_{1}\cap\mathbf{\Pi}_{1}^{\prime}\) of \(\mathbf{\Pi}_{1}\), \(\mathbf{\Pi}_{1}^{\prime}\). Since we assume a generic configuration, the intersection \(\mathbf{\Pi}_{1}\cap\mathbf{\Pi}_{1}^{\prime}\) is a unique line [36], which equals to \(\mathbf{L}_{1}\). Therefore, \(\mathbf{X}\) lies on \(\mathbf{L}_{1}\). We use the same argument to show that \(\mathbf{X}\) lies on \(\mathbf{L}_{2}\). Therefore, lines \(\mathbf{L}_{1}\), \(\mathbf{L}_{2}\) are coplanar, and their projections are related by homography [36]. For the sake of completeness, we also show the implication for the degenerate configurations of lines. In the degenerate configuration, the junctions \(\mathsf{l}_{1}\times\mathsf{l}_{2}\), \(\mathsf{I}_{1}^{\prime}\times\mathsf{I}_{2}^{\prime}\) are the vanishing points. Then, the lines \(\mathbf{L}_{1}\), \(\mathbf{L}_{2}\) are parallel, and therefore, they are coplanar. #### b.1.3 Junction \(\iff\) Coplanarity Here, we combine the results from Section B.1.1, and from Section B.1.2 to conclude the proof. **Proposition 3**.: _Let \(\mathsf{l}_{1}\), \(\mathsf{l}_{2}\), \(\mathsf{I}_{1}^{\prime}\), \(\mathsf{I}_{2}^{\prime}\) be a set of generic line projections. Then, the constraint (11) imposed on the relative pose \(\mathbf{R}\), \(\mathbf{t}\) by the coplanarity of the lines is equivalent to the epipolar constraint (9) imposed by the junction pair \((\mathsf{l}_{1}\times\mathsf{l}_{2},\mathsf{I}_{1}^{\prime}\times\mathsf{I}_{2}^{ \prime})\)._ Proof.: We have shown in Section B.1.1 that if the relative pose \(\mathbf{R}\), \(\mathbf{t}\) satisfies constraint (11), then it satisfies constraint (9). In Section B.1.2, we have shown that if the relative pose \(\mathbf{R}\), \(\mathbf{t}\) satisfies (9), then it satisfies (11). Therefore, the constraints imposed on the relative pose \(\mathbf{R}\), \(\mathbf{t}\) by (9), and by (11) are equivalent. ### Alternative Solver Formulation Based on Coplanar Lines Here, we give an alternative way to eliminate \(\mathbf{n}\) from system (11), and propose an alternative formulation of solvers 3-0-1, and 2-0-2 that use this constraint. We compare these alternative solvers with the proposed ones. Constraint (11) gives \(4\) constraints that are linear in the elements of \(\mathbf{n}\). Therefore, we can write the constraints as: \[\mathbf{A}(\mathbf{R},\mathbf{t})\begin{bmatrix}\mathbf{n}\\ 1\end{bmatrix}=0,\] where \(\mathbf{A}(\mathbf{R},\mathbf{t})\in\mathbb{R}^{4,4}\) is matrix whose elements are functions of the relative pose \(\mathbf{R}\), \(\mathbf{t}\). We eliminate \(\mathbf{n}\) to get: \[\det(\mathbf{A}(\mathbf{R},\mathbf{t}))=0, \tag{12}\] which gives one constraint on the relative pose \(\mathbf{R}\), \(\mathbf{t}\). We tried to use the constraints in this form (instead of epipolar constraint on junctions) to solve the 3-0-1 and 2-0-2 problems from the main paper. Namely, in the 3-0-1 case, we have a rotation in the form \(\mathbf{R}^{\prime\mathsf{T}}_{\mathbf{x}}\mathbf{R}_{y}(\varphi)\mathbf{R}_{ \mathbf{x}}\), and we need 3 constraints in the form (12) to solve for \(\varphi\), \(\mathbf{t}\). In the 2-0-2 case, the rotation \(\mathbf{R}\) is fixed by the vanishing points, and we need 2 constraints (12) to solve for \(\mathbf{t}\). In both cases, we employ the automatic minimal solver generator [46] to get the minimal solvers. Table 11 shows the time comparison of this approach with the proposed junction-based solvers. Figure 7 shows the numerical stability comparison. We can see that this alternative approach is inferior compared to the proposed ones in terms of both stability and time. ## Appendix C Details on Experiments ### Sampling of Synthetic Scenes Here, we describe in detail, how we sample the synthetic scenes, which we then use in the numerical stability and noise robustness tests. **Relative pose and point correspondences (PC).** First, we generate a random rotation matrix \(\mathbf{R}_{\text{GT}}\), and a camera center \(\mathbf{C}_{\text{GT}}\) from a Gaussian distribution with zero mean and unit standard deviation. We calculate translation \(\mathbf{t}_{\text{GT}}=-\mathbf{R}_{\text{GT}}\mathbf{C}_{\text{GT}}\). To generate a PC, we sample a point \(\mathbf{X}\in\mathbb{R}^{3}\) from a Gaussian distribution with mean \([0~{}0~{}5]^{\text{T}}\), and standard deviation \(1\). Then, we project the point to the first camera as \(\mathbf{p}=\mathbf{X}\) and into the second one as \(\mathbf{q}=\mathbf{R}_{\text{GT}}\mathbf{X}_{jA}+\mathbf{t}_{\text{GT}}\). **Line correspondences (LC).** To generate a LC in direction \(\mathbf{d}\), we sample a 3D point \(\mathbf{X}_{A}\) and a parameter \(\lambda\in\mathbb{R}\). We construct the second point as \(\mathbf{X}_{B}=\mathbf{X}_{A}+\lambda\mathbf{d}\). Then, we get the projections \(\mathbf{p}_{A}\), \(\mathbf{p}_{B}\) of both endpoints in the first camera, and \(\mathbf{q}_{A}\), \(\mathbf{q}_{B}\) in the second one. Then, we obtain the homogeneous coordinates of the line projections as \(\mathbf{l}=\mathbf{p}_{A}\times\mathbf{p}_{B}\), \(\mathbf{l}^{\prime}=\mathbf{q}_{A}\times\mathbf{q}_{B}\). **Vanishing points** To generate vanishing point correspondence \((\mathbf{v}_{i},\mathbf{v}_{j}^{\prime})\), we first sample a direction \(\mathbf{d}_{i}\). In the **numerical stability** tests, we generate two line correspondences \((\mathbf{l}_{1},\mathbf{l}_{1}^{\prime})\), \((\mathbf{l}_{2},\mathbf{l}_{2}^{\prime})\) in direction \(\mathbf{d}_{i}\) according to the previous paragraph, and obtain vanishing points \(\mathbf{v}_{i}\) and \(\mathbf{v}_{i}^{\prime}\) as the intersections of the projected 2D lines: \(\mathbf{v}_{i}=\mathbf{l}_{1}\times\mathbf{l}_{2}\), \(\mathbf{v}_{i}^{\prime}=\mathbf{l}_{1}^{\prime}\times\mathbf{l}_{2}^{\prime}\). In the **noise robustness tests**, we generate \(l\geq 3\) line correspondences \((\mathbf{l}_{j},\mathbf{l}_{j}^{\prime}),j\in\{1,...,l\}\) in direction \(\mathbf{d}_{i}\), and add the noise \(\frac{\sigma}{f}\) to the endpoints \(\mathbf{p}_{A}\times\mathbf{p}_{B}\), \(\mathbf{m}=\mathbf{q}_{A}\). Then, we construct matrix \(\mathbf{A}\in\mathbb{R}^{l,3}\) with rows \(\mathbf{l}_{j},j\in\{1,...,l\}\), and find \(\mathbf{v}_{i}\) as the least-squares solution to system \(\mathbf{A}\mathbf{v}_{i}=0\). We find vanishing point \(\mathbf{v}_{i}^{\prime}\) analogously from lines \(\mathbf{l}_{j}^{\prime},j\in\{1,...,l\}\). **Line orthogonal to VP.** To generate a line orthogonal to a VP in direction \(\mathbf{d}_{i}\), we sample a random direction \(\mathbf{d}_{0}\), get direction \(\mathbf{d}=\mathbf{d}_{i}\times\mathbf{d}_{0}\) orthogonal to \(\mathbf{d}_{i}\), and sample a LC in direction \(\mathbf{d}\). **Tuple of coplanar lines.** To generate a tuple of \(k\) coplanar lines, we first generate three points \(\mathbf{X}_{1},\mathbf{X}_{2},\mathbf{X}_{3}\in\mathbb{R}^{3}\), and for every line, we sample 4 parameters \(\lambda_{1},...,\lambda_{4}\in\mathbb{R}\) from a normalized Gaussian distribution. Then, we get the endpoints of the line as \(\mathbf{X}_{A}=\mathbf{X}_{1}+\lambda_{1}\mathbf{X}_{2}+\lambda_{2}\mathbf{X}_ {3}\), \(\mathbf{X}_{B}=\mathbf{X}_{1}+\lambda_{3}\mathbf{X}_{2}+\lambda_{4}\mathbf{X}_ {3}\). Then, we project the endpoints into both cameras and join them to get a line correspondence. For every solver, we generate the entities that are needed to compute the pose. ### Additional Synthetic Tests Here, we give additional synthetic tests to evaluate the solvers 2-1-1\({}^{\perp}\), 1-2-1\({}^{\perp}\), and 2-0-1\({}^{\perp}\) from the main paper. These solvers assume that the angle, between the directions \(\mathbf{d}_{i}\) of the vanishing point and \(\mathbf{d}\) of the line, is \(90^{\circ}\). We perturb this angle and measure the error of these solvers. The result are shown in Figure 8. We consider both the case without local optimization, and after the local optimization is applied. The plot shows that the solvers 2-1-1\({}^{\perp}\), 1-2-1\({}^{\perp}\), and 2-0-1\({}^{\perp}\) are very robust to the deviation from the orthogonal direction, especially if they are combined with the \begin{table} \begin{tabular}{c|c|c} & Junctions (9) & Alternative (12) \\ \hline 3-0-1 & **8.67859**\(\mu s\) & 1633.69 \(\mu s\) \\ 2-0-2 & **0.13719**\(\mu s\) & 24.6418 \(\mu s\) \\ \end{tabular} \end{table} Table 11: Average time in \(\mu s\) of minimal solvers using the junctions (9) vs. the alternative approach (12). Figure 7: **Histogram of \(\log_{10}\) pose errors** in radians of minimal solvers 3-0-1, and 2-0-2 computed from \(100000\) noiseless samples. We compare the Junctions (9) and Coplanarity (12) formulations. local optimization. Even with deviation \(10^{\circ}\), the average rotation error of the solvers does not exceed \(1.1^{\circ}\), and the average translation error reaches about \(4^{\circ}\) for 2-1-1\({}^{\perp}\), and \(6^{\circ}\) for 1-2-1\({}^{\perp}\) and 2-0-1\({}^{\perp}\).
2309.09160
Axion detection via superfluid $^3$He ferromagnetic phase and quantum measurement techniques
We propose to use the nuclear spin excitation in the ferromagnetic A1 phase of the superfluid $^3$He for the axion dark matter detection. This approach is striking in that it is sensitive to the axion-nucleon coupling, one of the most important features of the QCD axion introduced to solve the strong CP problem. We review a quantum mechanical description of the nuclear spin excitation and apply it to the estimation of the axion-induced spin excitation rate. We also describe a possible detection method of the spin excitation in detail and show that the combination of the squeezing of the final state with the Josephson parametric amplifier and the homodyne measurement can enhance the sensitivity. It turns out that this approach gives good sensitivity to the axion dark matter with the mass of $O(1) \, \mu \mathrm{eV}$ depending on the size of the external magnetic field. We estimate the parameters of experimental setups, e.g., the detector volume and the amplitude of squeezing, required to reach the QCD axion parameter space.
So Chigusa, Dan Kondo, Hitoshi Murayama, Risshin Okabe, Hiroyuki Sudo
2023-09-17T05:16:28Z
http://arxiv.org/abs/2309.09160v2
# Axion detection via superfluid \({}^{3}\)He ferromagnetic phase and quantum measurement techniques ###### Abstract We propose to use the nuclear spin excitation in the ferromagnetic A\({}_{1}\) phase of the superfluid \({}^{3}\)He for the axion dark matter detection. This approach is striking in that it is sensitive to the axion-nucleon coupling, one of the most important features of the QCD axion introduced to solve the strong CP problem. We review a quantum mechanical description of the nuclear spin excitation and apply it to the estimation of the axion-induced spin excitation rate. We also describe a possible detection method of the spin excitation in detail and show that the combination of the squeezing of the final state with the Josephson parametric amplifier and the homodyne measurement can enhance the sensitivity. It turns out that this approach gives good sensitivity to the axion dark matter with the mass of \(\mathcal{O}(1)\,\mathrm{\SIUnitSymbolMicro eV}\) depending on the size of the external magnetic field. We estimate the parameters of experimental setups, e.g., the detector volume and the amplitude of squeezing, required to reach the QCD axion parameter space. ###### Contents * 1 Introduction * 2 Understanding \({}^{3}\)He via spinor BEC * 2.1 Phases of superfluid \({}^{3}\)He * 2.2 Spinor BEC description of magnetism in the A, A\({}_{1}\), and A\({}_{2}\) phases * 2.3 Nuclear magnons in the ferromagnetic A\({}_{1}\) phase * 3 Axion detection * 3.1 Axion-magnon conversion * 3.2 Mixing between magnon and cavity modes * 3.3 Quantum measurement techniques * 3.3.1 Squeezing of states * 3.3.2 Homodyne measurement * 4 Sensitivity * 5 Conclusion and Discussion * A Statistical treatment of noise * A.1 Formulation * A.2 Creation Rate of Magnons * A.3 Test Statistic * B Josephson parametric amplifier (JPA) * B.1 Effective description * B.2 Flux-driven Josephson parametric amplifier * B.3 Resonator equation ## 1 Introduction Axion [1] is a proposed solution to the strong CP problem, namely to explain why the quantum chromodynamics (QCD) does not violate the time-reversal symmetry. The experimental upper limit on the neutron electric dipole moment \(d_{n}<1.8\times 10^{-26}\,e\,\)cm [2] implies that the so-called vacuum angle of QCD to be extremely small \(\bar{\theta}<10^{-10}\). The theory assumes a new global U(1) Peccei-Quinn symmetry broken spontaneously at the energy scale called the axion decay constant \(f_{a}\) as well as explicitly by the QCD anomaly. The effective operator of the axion coupling to gluons is \[\mathcal{L}_{a}=\frac{g_{s}^{2}}{64\pi^{2}}\left(\bar{\theta}+\frac{a}{f_{a}} \right)\epsilon^{\mu\nu\rho\sigma}G^{b}_{\mu\nu}G^{b}_{\rho\sigma}. \tag{1}\] Switching to the chiral Lagrangian, it can be shown that the axion settles to the ground state where \(\bar{\theta}\) is dynamically canceled. Interestingly, it was pointed out that the axion can also comprise the dark matter of the Universe from misalignment mechanism or emission from axion strings [3]. The initial version of the theory assumed \(f_{a}=v\) (electroweak scale) and was excluded by beam dump experiments. It was later proposed to take \(f_{a}\gg v\) dubbed "invisible axion" [4; 5; 6; 7]. The axion abundance is higher for higher \(f_{a}\), and \(f_{a}\simeq 10^{12}\,\mathrm{GeV}\) is typically regarded as a preferred range. It translates to \(m_{a}\simeq\mathrm{\SIUnitSymbolMicro}\) scale. Many direct detection experiments for the dark matter axion, such as [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25], rely on the axion coupling to photons \(aF_{\mu\nu}\tilde{F}^{\mu\nu}\). Their prospect in the near future is becoming exciting. Yet the axion coupling to photons is highly model-dependent. To fully verify that the axion solves the strong CP problem, measuring its coupling to hadrons would be crucial. In particular, the axion couples to the nucleon spins \(\vec{\nabla}a\cdot\vec{s}_{N}\) with relatively little model dependence. Search for dark matter axion using the nuclear spins, or confirming detected axion signal with nuclear signs, would be crucial to enhance our understanding of both the strong CP problem as well as the nature of dark matter. In spite of its importance, there are relatively few experiments and proposals including [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37] in this direction. In this paper, we propose a new experimental technique to detect dark matter axions using their coupling to nuclear spins. Interactions among the nuclear spins are very weak because their magnetic moments are suppressed by the nucleon mass \(\mu_{N}=e/m_{N}\) rather than the electron mass \(\mu_{B}=e/m_{e}\). One needs to identify material where nuclear spins play a major role at very low temperatures. We point out that the A\({}_{1}\) phase of superfluid \({}^{3}\)He is a unique material that has an ordering of nuclear spins without relying on their coupling to electron spins. This is because the Cooper pairs of \({}^{3}\)He atoms are in the \(p\)-wave (anti-symmetric) with total spin \(S=1\) (symmetric) as required by Fermi statistics. In a high magnetic field, it becomes basically a ferromagnet of nuclear spins. The corresponding nuclear magnon is gapped due to the external magnetic field and the gap can be tuned to the axion mass. It is quite remarkable that the gap happens to be in the range of the preferred axion mass for dark matter with an achievable magnetic field. Then the magnon can be converted to a cavity photon resonantly due to the polariton mixing between the magnon and photon. Again the size of the cavity is such that it can be fitted in a laboratory. Note that our setup is distinct from other proposals to use superfluid \({}^{3}\)He for axion dark matter search [33; 34] in the superfluid phase used and/or the signal detection method. Because our experiments are performed at such low temperatures \(T\lesssim 3\,\mathrm{mK}\) that the target \({}^{3}\)He shows superfluidity, the quantum noise [38] becomes non-negligible. These days several applications of quantum measurement techniques to axion detections have been studied in order to circumvent the quantum noise [39; 40; 41; 42; 43; 44; 45; 46; 13]. In this paper, we apply the squeezing technique, which has been discussed in [39; 13], and evaluate the improvement in the sensitivity of our experiment. This paper is organized as follows. In section 2, we review the properties of \({}^{3}\)He. We analyze superfluid phases of \({}^{3}\)He using the spinor BEC formalism and understand the properties of nuclear magnons in the ferromagnetic A\({}_{1}\) phase. In section 3, we discuss how the axion dark matter signal can be detected using superfluid \({}^{3}\)He; we use a nuclear magnon mode, which is converted into a cavity photon through the polariton mixing. We also discuss how noise reduction is realized by using squeezing and the homodyne measurement. We show sensitivities for several different setups in section 4 and conclude in section 5. A detailed description of our noise estimate and statistical treatment is summarized in appendix A. Finally, we review the Josephson parametric amplifier (JPA), which is a representative apparatus for squeezing, in appendix B. ## 2 Understanding \({}^{3}\)He via spinor BEC In this section, we will describe the phase structure of the superfluid \({}^{3}\)He using Ginzburg-Landau formalism and simplified spinor BEC formalism. We summarize the phase structure in the following tree diagram. The important thing in this paper is that we will use the fact that the A\({}_{1}\) phase has a ferromagnetic property. Superfluid phases of \({}^{3}\)He No external magnetic field A phase Anti-Ferromagnetic B phase Spin-Orbit Coupling In section 2.1, we will sketch the phase structure of \({}^{3}\)He with Ginzburg-Landau formalism. In section 2.2, we will describe the A phase of \({}^{3}\)He with an external magnetic field with spinor BEC formalism. In section 2.3, we will study how to utilize the ferromagnetic property of the A\({}_{1}\) phase of \({}^{3}\)He as magnon. ### Phases of superfluid \({}^{3}\)He Historically, after the success of the BCS theory [47], people tried to look for the description of the superfluid \({}^{3}\)He because it is liquid and has no lattice structure inside. Some people thought about the pairing states which are not \(s\)-wave. One is about the general anisotropic case by Anderson and Morel [48]. This model has a peculiar feature that the nodes exist on the Fermi surface for the axial \(p\)-wave state (ABM state named after Anderson-Brinkman-Morel). It turned out that this theory describes what is called the A phase nowadays. The number of the substates for \(p\)-wave Cooper pair is three. Later, it was shown that the mixing of all these states is favored energetically [49], which turned out to be the B phase nowadays. At early stages, the thermodynamic properties are explored by Leggett using the framework of renormalized quasi-particles [50; 51]. Experimentally, the A and B phases were discovered at \(2.6\,\)mK and \(1.8\,\)mK respectively [52], which confirmed the existence of the phase structure of the superfluid \({}^{3}\)He. The nucleus of a \({}^{3}\)He atom consists of two protons and one neutron. The proton spins are aligned anti-parallel with each other, while the neutron spin is isolated, making the total spin angular momentum to be \(I=1/2\). In the superfluid phase, two \({}^{3}\)He atoms form a Cooper pair, whose ground state is a spin-triplet \(p\)-wave condensate [53]. The corresponding order parameter is expressed in terms of annihilation operators of nuclei \(\hat{a}_{\vec{k}\alpha}\) as \[\Big{\langle}\hat{a}_{-\vec{k}\beta}\hat{a}_{\vec{k}\alpha}\Big{\rangle}\propto \Delta_{\vec{k}\alpha\beta}\equiv\sum_{\mu=1}^{3}d_{\mu}(\vec{k})(\sigma_{\mu} i\sigma_{2})_{\alpha\beta}\,, \tag{1}\] where \(\vec{k}\) and \(\alpha\) (\(\beta\)) are the momentum and the spin of a \({}^{3}\)He nucleus, respectively, and \(\sigma_{\mu}\) is the Pauli matrix. Since a Cooper pair forms a spin-triplet \(L=1\) relative angular momentum state, the vector \(d_{\mu}(\vec{k})\) can be represented as a linear combination of spherical harmonics \(Y_{1m}(\vec{k}/|\vec{k}|)\propto\vec{k}/|\vec{k}|\), \[d_{\mu}(\vec{k})=\sqrt{3}\sum_{j=1}^{3}A_{\mu j}\frac{\vec{k}_{j}}{|\vec{k}_{ j}|}. \tag{2}\] The phenomenological Lagrangian of the Cooper pairs, i.e., Ginzburg-Landau Lagrangian, can be expressed in terms of the \(3\times 3\) order parameter matrix \(A_{\mu j}\)[54; 55]. The index \(\mu=1,2,3\) refers to the \(S=1\) states while \(j=1,2,3\) to the \(L=1\) state both in the Cartesian basis. Namely \(A_{\mu j}\) transforms as a bi-vector under \(\text{SO}(3)_{L}\times\text{SO}(3)_{S}\). Note that \(A_{\mu j}\) is complex as its phase U(1)\({}_{\phi}\) corresponds to the conserved number operator of the Cooper pairs. Because the Lagrangian has to be Hermitian and invariant under the global \(\text{SO}(3)_{L}\times\text{SO}(3)_{S}\times\text{U}(1)_{\phi}\) symmetry, we have only one second-order term of \(A_{\mu j}\) \[I_{0}=\text{tr}\Big{(}AA^{\dagger}\Big{)}, \tag{3}\] and five fourth-order terms \[I_{1} =\big{|}\text{tr}\big{(}AA^{T}\big{)}\big{|}^{2}, \tag{4}\] \[I_{2} =\Big{[}\text{tr}\big{(}AA^{\dagger}\Big{)}\Big{]}^{2},\] (5) \[I_{3} =\text{tr}\big{[}(AA^{T})(AA^{T})^{*}\big{]},\] (6) \[I_{4} =\text{tr}\Big{[}(AA^{\dagger})^{2}\Big{]},\] (7) \[I_{5} =\text{tr}\Big{[}(AA^{\dagger})(AA^{\dagger})^{*}\Big{]}, \tag{8}\] in the effective potential. As a result, in the absence of any external fields, the effective potential per volume is given by \[V_{0}=\alpha(T)I_{0}+\frac{1}{2}\sum_{i=1}^{5}\beta_{i}I_{i}\,, \tag{9}\] where we neglect higher-order terms of \(A_{\mu j}\), which can be justified when we consider the phenomenology of a system sufficiently close to the phase transition, and the numerical values of \(|A_{\mu j}|\) are small. The coefficients \(\alpha\) and \(\beta_{i}\) are determined by the microscopic theory. For example, they have been calculated in the weak-coupling theory [53], and their numerical values are \[\alpha(T)\sim-10^{-3}\bigg{(}1-\frac{T}{T_{c}}\bigg{)}\ \mathrm{\mu eV^{-1}\AA^{-3}}\,, \tag{10}\] \[(\beta_{1}^{\mathrm{WC}},\beta_{2}^{\mathrm{WC}},\beta_{3}^{ \mathrm{WC}},\beta_{4}^{\mathrm{WC}},\beta_{5}^{\mathrm{WC}})=\frac{6}{5} \beta_{0}\bigg{(}-\frac{1}{2},1,1,1,-1\bigg{)}\,,\] (11) \[\beta_{0}\sim 10^{-3}\ \mathrm{\mu eV^{-3}\AA^{-3}}\,, \tag{12}\] where \(T_{c}\) is the transition temperature \(\sim 2.6\,\mathrm{mK}\) in the absence of external magnetic fields. The values of \(\beta_{i}\) can differ from those of \(\beta_{i}^{\mathrm{WC}}\) depending on pressure. Nevertheless, we will use the numerical values in eqs. (11) and (12) for \(\beta_{i}\) below since the experimentally measured values differ from \(\beta_{i}^{\mathrm{WC}}\) by only \(\mathcal{O}(1)\) factors, \((\beta_{i}-\beta_{i}^{\mathrm{WC}})/\beta_{0}\simeq\mathcal{O}(1)\)[56]. As noted above, the effective Lagrangian has a global symmetry \(\mathrm{SO}(3)_{L}\times\mathrm{SO}(3)_{S}\times\mathrm{U}(1)_{\phi}\), which corresponds to the rotation in the momentum space, the rotation in the spin space, and the overall phase rotation, respectively. It is known that, depending on the values of coefficients in eq. (9), the matrix \(A\) acquires a non-zero expectation value in the ground state, which spontaneously breaks the global symmetry and leads to different phases. Without an external magnetic field, there are two superfluid phases for \({}^{3}\)He, the A and B phases. Their expectation values are expressed as \[\text{A phase: }A_{\mu j}\propto\frac{1}{\sqrt{2}}\begin{pmatrix}0&0&0 \\ 0&0&0\\ 1&i&0\end{pmatrix}, \tag{13}\] \[\text{B phase: }A_{\mu j}\propto\frac{1}{\sqrt{3}}e^{i\phi}R_{\mu j}( \vec{n},\ \theta)\,, \tag{14}\] where \(\phi\) is an overall phase, and \(R_{\mu j}\) is a relative rotation of the spin and orbital spaces, represented by a rotation axis \(\vec{n}\) and a rotation angle \(\theta\). Note that there are more than one choice of the order parameter in the A phase corresponding to the choices of particular directions of spin and orbital spaces, both of which are assumed to be the \(z\)-axis in the above expression. When we turn on an external magnetic field \(\vec{B}\), the potential \(V\) has two more invariant terms \[F^{(1)} =i\eta\sum_{\mu\nu\lambda j}\epsilon_{\mu\nu\lambda}B_{\mu}A_{ \nu j}^{*}A_{\lambda j}\,, \tag{15}\] \[F^{(2)} \propto\sum_{\mu\nu j}B_{\mu}A_{\mu j}B_{\nu}A_{\nu j}^{*}\,. \tag{16}\] The term \(F^{(1)}\) is there only for the indices \(\mu,\nu\) because the \({}^{3}\)He atoms are electrical neutral and their orbital angular momentum does not have a magnetic moment, while their spins do. Assuming that \(\vec{B}\) is along the \(z\)-direction, one can see that \(F^{(1)}\) and \(F^{(2)}\) break the global symmetry to \(\mathrm{SO(3)}_{L}\times\mathrm{U(1)}_{S_{z}}\times\mathrm{U(1)}_{\phi}\). Because these interaction terms \(F^{(1)}\) and \(F^{(2)}\) bring three types of spontaneous symmetry breaking depending on the coefficients, there are three corresponding phases: \[\text{A}_{1}\ \text{phase:}\ A_{\mu j}\propto\frac{1}{2}\begin{pmatrix}1 &i&0\\ i&-1&0\\ 0&0&0\end{pmatrix}, \tag{17}\] \[\text{A}_{2}\ \text{phase:}\ A_{\mu j}\propto\frac{1}{\sqrt{2( \left|p_{1}\right|^{2}+\left|p_{2}\right|^{2})}}\begin{pmatrix}p_{1}&ip_{1}&0 \\ ip_{2}&-p_{2}&0\\ 0&0&0\end{pmatrix},\] (18) \[\text{B}_{1}\ \text{phase:}\ A_{\mu j}\propto\frac{e^{i\phi}}{\sqrt{2 \Big{[}2(\left|p_{1}\right|^{2}+\left|p_{2}\right|^{2})+\left|p_{3}\right|^{2} \Big{]}}}\begin{pmatrix}p_{1}&p_{2}&0\\ \pm p_{2}&\mp p_{1}&0\\ 0&0&p_{3}\end{pmatrix}, \tag{19}\] where the real parameters \(p_{1}\), \(p_{2}\), and \(p_{3}\) are uniquely determined as functions of the coefficients \(\alpha(T)\) and \(\beta_{i}\), as demonstrated in the next section. In fig. 1, we show the phase diagram of the superfluid \({}^{3}\)He as a function of the temperature \(T\), the pressure \(P\), and the external magnetic field \(H\) taken from [53]. We can see from the figure that the A\({}_{1}\) phase of our interest can be realized by carefully setting pressure \(P\), temperature \(T\), and the external magnetic field \(H\); for example, \(P\sim\mathcal{O}(1)\) bars, \(T\sim\mathcal{O}(1)\) mK, and \(H\gtrsim\mathcal{O}(1)\) T. In the next subsection, we will see in more detail the criteria for which phase is realized, focusing on the A, A\({}_{1}\), and A\({}_{2}\) phases. Figure 1: The phase diagram of the superfluid \({}^{3}\)He taken from [53]. ### Spinor BEC description of magnetism in the A, A\({}_{1}\), and A\({}_{2}\) phases Hereafter, we focus on the A, A\({}_{1}\), and A\({}_{2}\) phases, which have a unified description with the so-called spinor BEC formalism by keeping only the spin degree of freedom. The spinor BEC refers to a Bose-Einstein condensate of atoms with integer spin, see e.g., for a review [57; 58]. This procedure is appropriate partly because the unbroken symmetries of these phases do not mix the rotations in spin and orbital spaces unlike the \(B\) phases. Thus, if we do not consider excitation of the orbital angular momentum of \({}^{3}\)He, we can focus only on the spin space. For this purpose, we define a _spinor_ order parameter \(\vec{c}\) by fixing \(L_{z}=+1\) as \[A_{\mu j}=\frac{1}{\sqrt{2}}(\vec{c},\ i\vec{c},\ 0)\,. \tag{20}\] We can rewrite invariant terms \(I_{i}\) and \(F^{(1)}\) in terms of \(\vec{c}\) as \[I_{0} =\vec{c}\,^{*}\cdot\vec{c}\,, \tag{21}\] \[I_{2} =(\vec{c}\,^{*}\cdot\vec{c})^{2}\,,\] (22) \[I_{4} =(\vec{c}\,^{*}\cdot\vec{c})^{2}\,,\] (23) \[I_{5} =|\vec{c}\cdot\vec{c}|^{2}=\left[(\vec{c}\,^{*}\times\vec{c})^{2 }+(\vec{c}\,^{*}\cdot\vec{c})^{2}\right],\] (24) \[F^{(1)} =i\eta\vec{B}\cdot(\vec{c}\,^{*}\times\vec{c})\,. \tag{25}\] Here, we do not consider the invariants \(I_{1}\), \(I_{3}\), and \(F^{(2)}\) because these terms vanish for the A\({}_{1}\) and A\({}_{2}\) phases. Finally, we get a simplified effective potential with the external magnetic field \[V=\alpha(T)(\vec{c}\,^{*}\cdot\vec{c})+\frac{\beta_{245}}{2}(\vec{c}\,^{*} \cdot\vec{c})^{2}+\frac{\beta_{5}}{2}(\vec{c}\,^{*}\times\vec{c})^{2}+i\eta \vec{B}\cdot(\vec{c}\,^{*}\times\vec{c})\,. \tag{26}\] Here, we have defined a new parameter, \[\beta_{245}\equiv\beta_{2}+\beta_{4}+\beta_{5}\,. \tag{27}\] Note that \(\beta_{245}>0\) and \(\beta_{5}<0\) according to eq. (11). In the following, we discuss the magnetism of the A\({}_{1}\) and A\({}_{2}\) phases with this potential. Using the simplified effective potential, we can easily analyze the potential form as a function of parameters.1 In the absence of an external magnetic field, only the temperature plays an important role. For \(T>T_{c}\), since \(\alpha(T)>0\) according to eq. (10), the potential \(V\) has a global minimum at \(\vec{c}=\vec{0}\), while for \(T<T_{c}\) or \(\alpha(T)<0\), there is a global minimum at \(\vec{c}\propto(0,0,1)^{T}\) with the potential energy \(-\alpha^{2}/(2\beta_{245})<0\). The former corresponds to the normal liquid phase, while the latter is consistent with the matrix structure of the A-phase order parameter (13). Footnote 1: Note that there can be a deeper minimum of the potential, which cannot be described by the spinor BEC formalism. Such a phase may correspond to the B or B\({}_{1}\) phase due to the spin-orbit couplings which originate from a long-distance dipole-dipole interaction among magnetic moments. However this effect is small and can be ignored in the presence of a strong magnetic field. It is worth noting, however, that any of the A, A\({}_{1}\), and A\({}_{2}\) phases can be a global minimum of \(V\) for reasonable choices of temperature, external magnetic field, and pressure, such as \(T\simeq T_{c}\) and \(B_{z}\sim\mathcal{O}(1)\,\mathrm{T}\) under the standard atmosphere. Next, we turn on the external magnetic field \(\vec{B}=(0,0,B_{z})^{T}\) with \(\eta B_{z}>0\). Restricting the form of \(\vec{c}\) to be \((p_{1},ip_{2},0)^{T}\) with \(p_{1},p_{2}\in\mathbb{R}\), we obtain local minima of \(V\) expressed as \[V=0 \text{at}\quad\vec{c}=\vec{0}\,, \tag{28}\] \[V=V_{1}\equiv-\frac{\alpha(T)^{2}}{2\beta_{245}}\frac{(x+y)^{2} }{y(1+y)} \text{at}\quad\vec{c}=\vec{c}_{1}\equiv\sqrt{\frac{-\alpha(T)(x+y)}{2 \beta_{245}(1+y)}}\begin{pmatrix}1\\ i\\ 0\end{pmatrix},\] (29) \[V=V_{2}\equiv-\frac{\alpha(T)^{2}}{2\beta_{245}}\frac{x^{2}+y}{y} \text{at}\quad\vec{c}=\vec{c}_{2}\equiv\sqrt{\frac{-\alpha(T)}{2 \beta_{245}}}\begin{pmatrix}\sqrt{1+\sqrt{1-x^{2}}}\\ i\sqrt{1-\sqrt{1-x^{2}}}\\ 0\end{pmatrix}, \tag{30}\] where we defined dimensionless variables \[x\equiv\frac{\beta_{245}\eta B_{z}}{\alpha(T)\beta_{5}}\propto B _{z}\bigg{(}1-\frac{T}{T_{c}}\bigg{)}^{-1}\,, \tag{31}\] \[y\equiv-\frac{\beta_{245}}{\beta_{5}}\,>0\,. \tag{32}\] The value of \(x\) determines which of the local minima is the global minimum of \(V\) as shown in fig. 2. Note that for \(p_{1}\in\mathbb{R}\), the local minimum \(\vec{c}=\vec{c}_{1}\) exists only when \(x<-y\) or \(x>0\). Similarly, the local minimum \(\vec{c}=\vec{c}_{2}\) exists when \(0<x<1\). When \(0<x<1\), we have \(V_{1}\geq V_{2}\), and this region corresponds to the A\({}_{2}\) phase (the blue region of fig. 2). When \(x>1\) or \(x<-y\), we have \(V_{2}\geq V_{1}\), which corresponds to the A\({}_{1}\) phase (the red region). When \(-y<x<0\), we obtain the normal liquid phase (the gray region). For later convenience, we define a normalized order parameter \[\vec{\phi}\equiv\frac{\sqrt{n}}{\Delta}\vec{c}\,, \tag{33}\] where \(\Delta\) is a normalization factor with a dimension of energy defined as \[\Delta\equiv\sqrt{\vec{c}\,^{*}\cdot\vec{c}}\,, \tag{34}\] so that \(\vec{\phi}^{*}\cdot\vec{\phi}=n\) with \(n\) being the number density of Cooper pairs. Given that the typical interatomic spacing is \(\sim 3.5\,\text{\AA}\) in the superfluid \({}^{3}\)He, we can estimate \(2n\simeq 0.023\,\text{\AA}^{-3}\). This is obtained by two independent ways [59] that agree with each other; one is to use the Green function Monte-Carlo method to solve the many-body Schrodinger equation to calculate ground state properties [60; 61], while the other is the variational method, which is constructed by the Slator determinant for the correlation pairs [62; 63]. The effective potential is now given by \[V=-\mu\vec{\phi}^{*}\cdot\vec{\phi}+\frac{\mu}{2v^{2}}(\vec{\phi}^{*}\cdot\vec {\phi})^{2}-\lambda(\vec{\phi}^{*}\times\vec{\phi})^{2}+ig\mu_{N}\vec{B}\cdot( \vec{\phi}^{*}\times\vec{\phi}), \tag{35}\] with some new parameters \(\mu\equiv-\alpha(T)F\), \(v^{2}\equiv-\alpha(T)/(\beta_{245}F)\), \(\lambda\equiv-\beta_{5}F^{2}/2\), and \(g\mu_{N}\equiv\eta F\) with \(F\equiv\Delta^{2}/n\). Typical sizes of parameters are estimated as \(\mu\sim\text{neV}\), \(v\sim\text{\AA}^{-3/2}\), and \(\lambda\sim\text{neVA}^{3}\). In the last term of the potential, \(g\simeq-4.3\) is the \(g\)-factor of the \({}^{3}\)He nucleus [64], while \(\mu_{N}\simeq 3.2\times 10^{-8}\,\mathrm{eV\,T^{-1}}\) is the nuclear magneton. This choice of the coefficient is justified by the fact that the spin density is expressed as \(\vec{s}\equiv-i(\vec{\phi}^{*}\times\vec{\phi})\). Indeed, the last term describes the interaction between the magnetic field and the spin of the form \(g\mu_{N}\vec{B}\cdot\vec{s}\). We can now study the ordering of nuclear spins using \(\vec{\phi}\) and its expectation values in different phases. In the A\({}_{2}\) phase, the spin per Cooper pair is calculated as \[\vec{S}\equiv\frac{\vec{s}}{n}=\begin{pmatrix}0\\ 0\\ x\end{pmatrix}. \tag{36}\] In the limit of \(B_{z}\to 0\) or \(x\to 0\), this phase is smoothly connected to the A phase, which has an anti-ferromagnetic ordering with \(\vec{S}=\vec{0}\). In the A\({}_{1}\) phase, the spin per Cooper pair is \[\vec{S}=\begin{pmatrix}0\\ 0\\ 1\end{pmatrix}, \tag{37}\] which shows that the spins of Cooper pairs are completely aligned along the direction of \(g\mu_{N}\vec{B}\). Therefore, we conclude that the A\({}_{1}\) phase has a ferromagnetic ordering. Figure 2: The schematics of the phase diagram focusing on A, A\({}_{1}\), and A\({}_{2}\) phases of the superfluid \({}^{3}\)He. Here we fix the orbital angular momentum at \(L_{z}=1\), so the B and B\({}_{1}\) phases do not appear in this phase diagram. The white box in each phase schematically represents the spin configuration of the Cooper pairs with the magnetic field \(\vec{B}\) pointing _down_ because of the negative g-factor. Note that the spins are not equally spaced as shown in this figure since the \({}^{3}\)He is not a solid in our setup. ### Nuclear magnons in the ferromagnetic A\({}_{1}\) phase Depending on the symmetry-breaking patterns in different phases, there appear several gapless modes, the so-called Nambu-Goldstone (NG) modes. These modes are classified as type-A and type-B modes with characteristic dispersion relations at the long-wavelength limit [65; 66]. For example, in the ferromagnetic A\({}_{1}\) phase, the coset space is given by \[\mathbb{R}P^{3}=\frac{\text{SO}(3)_{S}\times\text{U}(1)_{\phi}}{ \text{SO}(2)_{S_{z}-\phi}}, \tag{38}\] which corresponds to one type-A NG mode with a linear dispersion and one type-B NG mode with a quadratic dispersion. The type-B mode is identified as an acoustic magnon mode, whose gap can be generated by the soft symmetry-breaking effect, including the external magnetic field. On the other hand, in the anti-ferromagnetic A\({}_{2}\) phase, the coset space is given by \[S^{2}\times\text{U}(1)_{\phi}=\frac{\text{SO}(3)_{S}\times\text{ U}(1)_{\phi}}{\text{SO}(2)_{S_{z}}}, \tag{39}\] which corresponds to three type-A NG modes, two of which are identified as magnon modes with \(S_{x}\) and \(S_{y}\). Since the magnon modes in the ferromagnetically-ordered phase have the strongest interaction with the spatially uniform magnetic field, such as the one induced by the axion dark matter, we will focus on the type-B magnon mode in the A\({}_{1}\) phase. The excitation modes in the superfluid \({}^{3}\)He can be studied by treating the normalized order parameter \(\vec{\tilde{\phi}}\) as a dynamical field. The field theory Lagrangian is given by \[\mathcal{L}=i\vec{\tilde{\phi}}^{\dagger}\cdot\partial_{t}\vec{ \tilde{\phi}}-\frac{1}{2m^{\star}}\sum_{i}(\partial_{i}\vec{\tilde{\phi}}^{ \dagger})\cdot(\partial_{i}\vec{\tilde{\phi}})-V, \tag{40}\] where \(i=x,y,z\) are the space coordinates, and the potential \(V\) is given by eq. (35). The effective mass \(m^{\star}\) depends on the pressure imposed on \({}^{3}\)He and can be experimentally determined through measurements of the specific heat. The typical value of \(m^{\star}\) is about 3 to 6 times larger than the \({}^{3}\)He atomic mass [67]. In order to study the magnon excitation mode in the A\({}_{1}\) phase, we add a quantum fluctuation \(\hat{\psi}\) to the expectation value \(\langle\vec{\tilde{\phi}}\rangle=\sqrt{n/2}(1,i,0)\) as \[\vec{\tilde{\phi}}=\frac{\sqrt{n}}{2\sqrt{2}}\begin{pmatrix}2- \hat{\psi}^{\dagger}\hat{\psi}-\hat{\psi}^{2}\\ i(2-\hat{\psi}^{\dagger}\hat{\psi}+\hat{\psi}^{2})\\ -2\sqrt{2-\hat{\psi}^{\dagger}\hat{\psi}}\,\hat{\psi}\end{pmatrix}. \tag{41}\] We also consider the fluctuation of the magnetic field as \(\vec{B}=(0,0,-B_{z})^{T}+\delta\vec{B}\) with \(B_{z}>0\). For simplicity, we assume that both \(\hat{\psi}\) and \(\delta\vec{B}\) do not depend on the space coordinate. By substituting the expansion in the Lagrangian (40) and picking up only the leading-order terms of the fluctuation \(\hat{\psi}\) and \(\delta\vec{B}\), we obtain the following terms \[\delta\mathcal{L}=g\mu_{N}nB_{z}\hat{\psi}^{\dagger}\hat{\psi}+ \frac{1}{\sqrt{2}}g\mu_{N}n\Big{(}\delta B_{x}(\hat{\psi}+\hat{\psi}^{\dagger} )-i\delta B_{y}(\hat{\psi}-\hat{\psi}^{\dagger})\Big{)}, \tag{42}\] which originally come from the last term of the potential (35). It is convenient to discuss in terms of the non-relativistic Hamiltonian described with the magnon operators. For this purpose, we first obtain the relationship of the spin density \[\hat{s}_{+} \equiv\hat{s}_{x}+i\hat{s}_{y}=n\sqrt{2-\hat{\psi}^{\dagger}\hat{ \psi}}\ \hat{\psi}, \tag{43}\] \[\hat{s}_{-} \equiv\hat{s}_{x}-i\hat{s}_{y}=n\hat{\psi}^{\dagger}\sqrt{2-\hat{ \psi}^{\dagger}\hat{\psi}},\] (44) \[\hat{s}_{z} =n(1-\hat{\psi}^{\dagger}\hat{\psi}). \tag{45}\] On the other hand, using the Holstein-Primakoff transformation with the spin size \(s=1\), we can relate the spin operator of each Cooper pair labeled by \(\ell\) to the magnon annihilation and creation operators as \[\hat{S}_{\ell}^{+} =\sqrt{2-\hat{b}_{\ell}^{\dagger}\hat{b}_{\ell}}\,\hat{b}_{\ell}, \tag{46}\] \[\hat{S}_{\ell}^{-} =\hat{b}_{\ell}^{\dagger}\sqrt{2-\hat{b}_{\ell}^{\dagger}\hat{b }_{\ell}},\] (47) \[\hat{S}_{\ell}^{z} =1-\hat{b}_{\ell}^{\dagger}\hat{b}_{\ell}, \tag{48}\] with the canonical commutation relation of bosonic operators \([\hat{b}_{\ell},\hat{b}_{\ell^{\prime}}^{\dagger}]=\delta_{\ell\ell^{\prime}}\). We are only interested in the spatially uniform mode obtained by the Fourier transformation \(\hat{d}\equiv\sum_{\ell=1}^{N}\hat{b}_{\ell}/\sqrt{N}\), where \(N\equiv nV_{3\text{He}}\) is the total number of Cooper pairs with \(V_{3\text{He}}\) being the volume of the superfluid \({}^{3}\)He. We find that this mode is related to the spatially uniform fluctuation \(\hat{\psi}\) as \[\hat{d}=\sqrt{N}\hat{\psi}. \tag{49}\] Note that eq. (49) is consistent when \(\hat{\psi}\) obeys a bosonic commutation relation, which is the case for the spinor BEC formalism. Finally, substituting the magnon operator (49) in the Lagrangian (42), we obtain the relevant part of the Hamiltonian \[H=\omega_{L}\hat{d}^{\dagger}\hat{d}-\sqrt{\frac{N}{2}}g\mu_{N} \left(\delta B_{x}(\hat{d}+\hat{d}^{\dagger})-i\delta B_{y}(\hat{d}-\hat{d}^{ \dagger})\right)+\cdots, \tag{50}\] where \(\omega_{L}\equiv-g\mu_{N}B_{z}\) is the Larmor frequency. As we will see below, the second term causes the magnon excitation by the axion-induced effective magnetic field. ## 3 Axion detection In this section, we discuss the method to obtain the cavity photon signal from the axion. In section 3.1, we will see how the axion is converted into the magnon. In section 3.2, we will look at the mixing between the magnon and the cavity photon. This cavity photon is used as a signal. In section 3.3, we will discuss how to catch the signal with quantum measurement techniques. ### Axion-magnon conversion As is mentioned above, the spin angular momentum of a \({}^{3}\)He nucleus originates from the neutron spin. As a result, the axion-proton coupling can be neglected in our discussion, which generally has a different value from the axion-neutron coupling. The axion-neutron dynamics is described by the Lagrangian \[\mathcal{L}=\frac{1}{2}(\partial_{\mu}a)^{2}-\frac{1}{2}m_{a}a^{2}+\bar{n}(i \partial\hskip-5.0pt/-m_{n})n+C_{ann}\frac{\partial_{\mu}a}{2f_{a}}\bar{n} \gamma^{\mu}\gamma_{5}n, \tag{10}\] where \(a\) and \(n\) are the axion and the neutron fields with masses \(m_{a}\) and \(m_{n}\), respectively, \(C_{ann}\) is a model-dependent \(\mathcal{O}(1)\) coupling coefficient, and \(f_{a}\) is the axion decay constant. For the QCD axion, there is a relationship between \(m_{a}\) and \(f_{a}\)[68]: \[m_{a}\simeq 5.7\,\mathrm{\SIUnitSymbolMicro eV}\times\left(\frac{10^{12}\, \mathrm{GeV}}{f_{a}}\right). \tag{11}\] We assume that the axion field explains all of the dark matter abundance through the misalignment mechanism [3; 70; 69]; accordingly, the axion field can be treated as a classical field with coherent oscillation \[a(t,\vec{x})\simeq a_{0}\sin(m_{a}t-m_{a}\vec{v}_{a}\cdot\vec{x}+\varphi), \tag{12}\] where \(v_{a}\) is the velocity of axion, while \(\varphi\) is a random phase. Here, we utilize the fact that the axion is non-relativistic to approximate the axion energy to be \(m_{a}\). Using these variables, the local dark matter density \(\rho_{a}\sim 0.45\,\mathrm{GeV/cm^{3}}\) can be expressed as \(\rho_{a}=(m_{a}a_{0})^{2}/2\). The expression of \(a(t,\vec{x})\) tells us that the coherent length of the axion field is given by \(\lambda_{a}\equiv 1/(m_{a}v_{a})\). Since \(\lambda_{a}\sim 100\,\mathrm{m}\) for \(m_{a}\sim 1\,\mathrm{\SIUnitSymbolMicro eV}\) and \(v_{a}\sim 10^{-3}\)[71], the axion field can be regarded as a spatially uniform field within an experimental apparatus, which allows us to neglect the second argument of the sine function. Also, the coherence time of the axion field is \(\tau_{a}\simeq 1/(m_{a}v_{a}^{2})\sim 1\,\mathrm{ms}\) for \(m_{a}\sim 1\,\mathrm{\SIUnitSymbolMicro eV}\), during which the velocity \(\vec{v}_{a}\) and the phase \(\varphi\) can be treated as constant. In the non-relativistic limit, we obtain the following effective Hamiltonian density describing the axion-nucleus interaction: \[\mathcal{H}_{\mathrm{eff}}\simeq-C_{ann}\frac{m_{a}a_{0}}{f_{a}}\vec{v}_{a} \cdot\vec{s}_{N}\sin(m_{a}t+\varphi), \tag{13}\] where \(\vec{s}_{N}\) is the spin density operator of \({}^{3}\)He nuclei, which can be identified as the spin operator of neutrons in the \({}^{3}\)He. Note that the interaction strength is proportional to \(m_{a}a_{0}=\sqrt{2\rho_{a}}\) and independent of \(m_{a}\). The interaction term can be rewritten in the form of the ordinary spin-magnetic field coupling, \(\mathcal{H}=\gamma_{N}\vec{B}_{a}\cdot\vec{s}_{N}\sin(m_{a}t+\varphi)\), where \(\gamma_{N}=g\mu_{N}\) is the gyromagnetic ratio of a nucleus. The effective axion magnetic field that exclusively couples to the neutron spins is given by \[\gamma_{N}\vec{B}_{a}(t)=-C_{ann}\frac{\sqrt{2\rho_{a}}}{f_{a}}\vec{v}_{a}\sin (m_{a}t+\varphi). \tag{14}\] Thus, by substituting \(\delta\vec{B}\) by \(\vec{B}_{a}\) in eq. (50), we obtain the Hamiltonian of the axion-nuclear magnon coupled system \[H(t) =H_{0}+H_{\rm int}(t), \tag{66}\] \[H_{0} =\omega_{L}\hat{d}^{\dagger}\hat{d},\] (67) \[H_{\rm int}(t) =\frac{C_{ann}}{f_{a}}\sqrt{\rho_{a}N}\left(v_{a}^{+}\hat{d}^{ \dagger}+{\rm h.c.}\right)\sin(m_{a}t+\varphi), \tag{68}\] where \(v_{a}^{+}\equiv v_{a}^{x}+iv_{a}^{y}\). We define the ground state \(\left|0\right\rangle\) and the one-magnon state \(\left|1\right\rangle\) of \({}^{3}\)He with \(\hat{d}\left|0\right\rangle=0\) and \(\left|1\right\rangle\equiv\hat{d}^{\dagger}\left|0\right\rangle\).2 Then, the magnon production amplitude is calculated as Footnote 2: States with more than one magnon can be safely neglected due to the smallness of the magnon excitation rate for the axion parameter region of our interest. \[-i\mathcal{M}=\left\langle 1\right|U(t)\left|0\right\rangle=-i\int_{0}^{t} \mathrm{d}t^{\prime}\,\left\langle 1\right|H_{\rm int}(t^{\prime})\left|0 \right\rangle e^{-i\omega_{L}t^{\prime}}, \tag{69}\] where \(t<\tau_{a}\) is the observation time and the evolution matrix is defined as \[U(t)\equiv\exp\left[-i\int_{0}^{t}\mathrm{d}t^{\prime}\,H(t^{ \prime})\right]. \tag{70}\] Since the axion spectrum is approximately monochromatic with energy \(m_{a}\), the magnon production rate is resonantly enhanced when \(m_{a}=\omega_{L}\). In this limit, the amplitude is evaluated as \[\mathcal{M}\simeq-i\frac{C_{ann}}{2f_{a}}\sqrt{\rho_{a}N}v_{a}^{+ }e^{i\varphi}t, \tag{71}\] where we assumed \(t\gg\omega_{L}^{-1}\) so that the oscillatory term can be dropped. Then the transition probability is \[P=|\mathcal{M}|^{2}=\left(\frac{C_{ann}}{2f_{a}}\right)^{2}\rho_ {a}Nt^{2}v_{a}^{2}\sin^{2}\theta_{a}, \tag{72}\] where \(\theta_{a}\) is the relative angle between the external magnetic field and axion wind. This result is consistent with [72] where the spatially uniform mode (the Kittel mode) of the electronic magnons is considered. The transition probability grows as \(P\propto t^{2}\) as far as the coherence of the signal is maintained. The typical coherence time \(\tau\) can be estimated as \[\tau\sim\min\left(\tau_{a},\tau_{\rm mag},\tau_{\rm exp}\right), \tag{73}\] where \(\tau_{\rm mag}\) is the lifetime of magnon, and \(\tau_{\rm exp}\) denotes the minimum relaxation time scale of excitation modes used for the magnon detection. Since we propose to use the mixing between a nuclear magnon and a cavity photon as is discussed in section 3.2, the cavity quality factor \(Q\) is an important component that determines \(\tau_{\rm exp}\). In this paper, we can use the relationship \(\tau_{\rm mag}<\tau_{a}\) since \(\tau_{\rm mag}\sim 1.2\,\mathrm{\SIUnitSymbolMicro s}\)[73], while \(\tau_{a}\sim 1\,\mathrm{ms}\) for the parameter region of our interest. Also, we assume \(\tau_{\rm mag}<\tau_{\rm exp}\) and use \(\tau=\tau_{\rm mag}\) for the following calculation, which is reasonable for \(Q\gtrsim 10^{3}\). Finally, the signal rate for the total observation time \(t\gg\tau\) is evaluated as \[\frac{{\rm d}N_{\rm sig}}{{\rm d}t}=\frac{N}{4}C_{ann}^{2}\frac{ \rho_{a}v_{a}^{2}\sin^{2}\theta_{a}}{f_{a}^{2}}\tau, \tag{3.14}\] where \(\sin^{2}\theta_{a}\) should be replaced by the averaged value if \(t\gg\tau_{a}\). Hereafter, we assume this is the case and simply average out the directional dependence, though it might be interesting to study it further in light of the modulation of the axion signal. Note that the total number of Cooper pairs for \({}^{3}\)He of mass \(M\) is calculated as \(N=M/(2m_{{}^{3}{\rm He}})\sim 1.0\times 10^{25}(M/100\,{\rm g})\). For the QCD axion, for example, the external magnetic field \(B_{z}=10\,{\rm T}\) corresponds to the Larmor frequency \(\omega_{L}=m_{a}\simeq 1.3\,{\rm\mu eV}\) and \(f_{a}\simeq 4.3\times 10^{12}\,{\rm GeV}\), which result in \[\frac{{\rm d}N_{\rm sig}}{{\rm d}t}=8.7\times 10^{-6}\,{\rm s}^{-1} \times C_{ann}^{2}\left(\frac{M}{100\,{\rm g}}\right)\left(\frac{v_{a}}{10^{- 3}}\right)^{2}. \tag{3.15}\] For later convenience, we also show the expression of the signal power: \[P_{\rm sig}=1.9\times 10^{-31}\,{\rm W}\times C_{ann}^{2} \left(\frac{M}{100\,{\rm g}}\right)\left(\frac{v_{a}}{10^{-3}}\right)^{2}\sin ^{2}\theta_{a}. \tag{3.16}\] ### Mixing between magnon and cavity modes When one of the cavity modes has the same frequency as the magnons of our interest, \(\omega_{\rm cavity}=\omega_{L}\), there is a large mixing between these modes. This can be understood similarly as the formation of the magnon-polariton of electron spins [74; 75; 76]. Let \(\hat{c}\) (\(\hat{c}^{\dagger}\)) be the annihilation (creation) operator of the cavity mode. Assuming that all the other cavity modes have frequencies largely deviated from \(\omega_{L}\), we can safely neglect them and write down the relevant part of the Hamiltonian \[H=\omega_{L}\hat{d}^{\dagger}\hat{d}+\omega_{\rm cavity}\hat{c}^{ \dagger}\hat{c}+H_{\rm mix}. \tag{3.17}\] The mixing term is sourced from the interaction between nucleon spin and the magnetic field of the cavity mode and is given by \[H_{\rm mix}=ig\mu_{N}\int_{{}^{3}{\rm He}}{\rm d}V\left(\vec{ \phi}^{*}(\vec{r})\times\vec{\phi}(\vec{r})\right)\cdot\vec{B}_{0}(\vec{r})( \hat{c}+\hat{c}^{\dagger}), \tag{3.18}\] where the volume integral is performed over the volume of the superfluid \({}^{3}\)He, while \(\vec{B}_{0}(\vec{r})\) is the profile of the magnetic field of the cavity mode. If we consider as an example the cavity mode with \(\vec{B}_{0}(\vec{r})=B_{0}(\vec{r})\vec{u}_{x}\) with \(\vec{u}_{x}\) being the unit vector along the \(x\)-axis, terms linear in the magnon mode is obtained similarly to eq. (2.50) as \[H_{\rm mix}\simeq\sqrt{\frac{N}{2}}g\mu_{N}\vec{B}_{0}(\hat{d}+ \hat{d}^{\dagger})(\hat{c}+\hat{c}^{\dagger}), \tag{3.19}\] where the averaged magnetic field over the superfluid \({}^{3}\)He is defined as \[\overline{B}_{0}\equiv\frac{1}{V_{{}^{3}\text{He}}}\int_{{}^{3}\text{He}}\text{d}V \,B_{0}(\vec{r}). \tag{3.20}\] We finally find the quadratic part of the Hamiltonian \[H \simeq\omega_{L}\hat{d}^{\dagger}\hat{d}+\omega_{\text{cavity}} \hat{c}^{\dagger}\hat{c}+g_{\text{eff}}(\hat{c}\hat{d}^{\dagger}+\hat{c}^{ \dagger}\hat{d}), \tag{3.21}\] \[g_{\text{eff}} =\sqrt{\frac{N}{2}}\,g\mu_{N}\overline{B}_{0}, \tag{3.22}\] where we used the rotating wave approximation to neglect the fast oscillation terms. Note that the typical size of the magnetic field can be estimated by matching the electromagnetic energy with a cavity mode frequency. Defining \(\left\langle B_{0}^{2}\right\rangle\equiv\frac{1}{V_{\text{cavity}}}\int_{ \text{cavity}}\text{d}V\,B_{0}^{2}(\vec{r})\) with integration over the cavity volume, we obtain \[\sqrt{\left\langle B_{0}^{2}\right\rangle}\sim 4\,\text{f}\text{T}\left( \frac{\omega_{\text{cavity}}}{10^{2}\,\text{MHz}}\right)^{1/2}\left(\frac{10 ^{3}\,\text{cm}^{3}}{V_{\text{cavity}}}\right)^{1/2}. \tag{3.23}\] For the order estimation of the physics scales, we can approximate that \(\overline{B}_{0}\sim\sqrt{\left\langle B_{0}^{2}\right\rangle}\), though there can be an \(\mathcal{O}(1)\) geometry factor difference. Indeed, this estimation is consistent with [76], which shows that \(\overline{B}_{0}\sim 5\,\text{p}\text{T}\) in one of the figures, while a rough estimation gives \(\sqrt{\left\langle B_{0}^{2}\right\rangle}\sim 1\,\text{p}\text{T}\). By diagonalizing the Hamiltonian (3.21), we obtain the energy eigenstates. In particular, the maximal mixing is realized when \(\omega_{L}=\omega_{\text{cavity}}\) with the corresponding energy eigenvalues \(\left|\omega_{L}\pm g_{\text{eff}}\right|\). Compared with the magnon-polariton of electron spins, the energy scale of the system is smaller by a factor of \(\mu_{N}/\mu_{B}\sim 10^{-3}\) with \(\mu_{B}\) being the Bohr magneton. This affects the time scale of the conversion of the magnon mode into the cavity mode. The time scale can be estimated by evaluating the energy gap \(\Delta E=2\min\left(\omega_{L}=\omega_{\text{cavity}},g_{\text{eff}}\right)\) between two energy eigenstates. Assuming \(V_{{}^{3}\text{He}}\sim V_{\text{cavity}}\) for simplicity, and the above estimation of \(\overline{B}_{0}\), we have \[\frac{g_{\text{eff}}}{2\pi}\sim 0.3\,\text{MHz}\,\left(\frac{M}{100\,\text{g} }\right)^{1/2}\left(\frac{\omega_{L}}{10^{2}\,\text{MHz}}\right)^{1/2}, \tag{3.24}\] where the \(\omega_{L}=\omega_{\text{cavity}}\) dependence comes from that of \(\sqrt{\left\langle B_{0}^{2}\right\rangle}\). This expression, together with \(\omega_{L}\sim 200\,\text{MHz}\) for \(B=1\,\text{T}\), shows that the conversion time scale, which is usually fixed by \(g_{\text{eff}}<\omega_{L}\), can be comparable to or shorter than the magnon lifetime \(\tau_{\text{mag}}\sim 1.2\,\text{\SIUnitSymbolMicro s}\). Thus, it is expected that a sizable fraction of magnons excited by the axion DM is converted to and detected as cavity modes. In appendix A, we show the detailed calculation of the dynamics of the magnon-cavity mixed system including various loss factors and quantum measurement techniques briefly introduced in the next subsection. ### Quantum measurement techniques In general, if one wants to detect weak signals originating from axions, it is helpful to amplify those signals with some amplifier. In such a case, the noise consists of two main contributions: the thermal noise and the noise from the amplifier. By expressing these noises in terms of temperatures, the total noise \(k_{B}T_{N}\) is written as [77] \[k_{B}T_{N}=\omega\bigg{(}n_{T}+\frac{1}{2}\bigg{)}+k_{B}T_{A}\,, \tag{3.25}\] where \(k_{B}T_{A}\) is the noise from the amplifier, and \(n_{T}\) is the occupation number of thermal photons under the physical temperature \(T\) (the temperature of the cavity in our setup), \[n_{T}=\frac{1}{\exp\!\left(\frac{\omega}{k_{B}T}\right)-1}\,. \tag{3.26}\] Even at zero temperature, the noise has a nonzero value, which is known as the standard quantum limit (SQL), \[k_{B}T_{\rm SQL}=\omega\,. \tag{3.27}\] Half of \(k_{B}T_{\rm SQL}\) comes from the thermal noise, and the other half comes from \(k_{B}T_{A}\). For instance, \(T_{\rm SQL}\sim 48\,\)mK at \(\omega\sim 1\,\)GHz. This SQL can be circumvented by using quantum measurement techniques (see [78] for a review). We use the squeezing of states and the homodyne measurement as described in [39]. We will summarize these techniques in this subsection. #### 3.3.1 Squeezing of states The starting point is introducing _quadratures_\(\hat{X}\) and \(\hat{Y}\) defined in terms of the annihilation (creation) operator of photons \(\hat{a}\) (\(\hat{a}^{\dagger}\)) as \[\hat{X}\equiv\frac{\hat{a}+\hat{a}^{\dagger}}{\sqrt{2}}\,,\quad\hat{Y}\equiv \frac{\hat{a}-\hat{a}^{\dagger}}{\sqrt{2}i}\,. \tag{3.28}\] Because of the commutation relation \([\hat{a},\hat{a}^{\dagger}]=1\), quadratures satisfy \([\hat{X},\hat{Y}]=i\,.\) This commutation relation results in the uncertainty relation of quadratures \[(\Delta\hat{X})^{2}(\Delta\hat{Y})^{2}\geq\frac{1}{4}\,. \tag{3.29}\] Since many of the ordinary measurement techniques measure both quadratures of the input signal at each time, the quantum noise \(\Delta\hat{X}\sim\Delta\hat{Y}\sim 1/2\) must appear and contribute to the SQL. However, quantum measurement techniques can decrease this quantum noise by focusing on only one of the quadratures. For example, a larger part of the uncertainty can be imposed on \(\hat{Y}\), as \(\Delta\hat{X}\sim 1/(2\sqrt{G})\) and \(\Delta\hat{Y}\sim\sqrt{G}/2\) with \(G\gg 1\), which reduces the uncertainty on the observable \(\hat{X}\) and remains consistent with eq. (3.29). This operation is called squeezing. Squeezing can be performed by, e.g., phase-sensitive amplifiers such as Josephson parametric amplifiers (JPAs); see appendix B for details. A possible experimental setup, which is similar to the setup of the HAYSTAC experiment [12; 13; 14], is schematically shown in fig. 3. We also summarize in fig. 4 how the state is squeezed in the \(XY\) plane. In this setup, squeezing is performed twice by JPAs. First, we assume that the input vacuum state \((\hat{X}_{\text{in},m},\hat{Y}_{\text{in},m})\)3 is the coherent state, which is the case for the thermal photons with a Gaussian distribution in the \(XY\) plane. The first JPA called SQ in fig. 3 squeezes the vacuum state along, e.g., the \(X\) direction. When we define the squeezing parameter of the SQ JPA as \(G_{s}\), the squeezed state \((\hat{X}_{s,m},\hat{Y}_{s,m})\) becomes Footnote 3: The meaning of subscript \(m\) is described in appendix A. \[\hat{X}_{s,m}=\frac{1}{\sqrt{G_{s}}}\hat{X}_{\text{in},m}\,,\quad\hat{Y}_{s,m} =\sqrt{G_{s}}\hat{Y}_{\text{in},m}\,. \tag{3.30}\] This squeezing reduces the noise \(\Delta\hat{X}\). Figure 4: Distribution of the four states in the \(XY\) plane. The subscripts of quadratures correspond to those in fig. 3. The input state \((\hat{X}_{\text{in},m},\hat{Y}_{\text{in},m})\) is Gaussian, which is the distribution of thermal photons. This state will be squeezed by the SQ JPA and becomes the squeezed state \((\hat{X}_{s,m},\hat{Y}_{s,m})\). The third state \((\hat{X}_{o,m},\hat{Y}_{o,m})\) is the state after the signal from the cavity is received. Finally, we get the output state \((\hat{X}_{\text{out},m},\hat{Y}_{\text{out},m})\) after squeezing by the AMP JPA. Figure 3: Schematic of our experimental setup for axion detection with superfluid \({}^{3}\)He. The operators \(\hat{c},\hat{d},\hat{B},\cdots\) correspond to the annihilation operators used in our paper. When this squeezed state receives the signal photon from the cavity, the state is displaced in the phase of the signal photon (from the second figure to the third figure in fig. 4). Because the noise has been suppressed by a factor \(1/\sqrt{G_{s}}\), the signal-to-noise ratio is enhanced by a factor \(\sqrt{G_{s}}\). The second JPA called AMP squeezes the displaced state \((\hat{X}_{o,m},\hat{Y}_{o,m})\). This JPA amplifies the state in the \(X\) direction, the opposite direction to the SQ, and we get the output state \((\hat{X}_{\text{out},m},\hat{Y}_{\text{out},m})\). Defining the squeezing parameter of the AMP JPA as \(G_{a}\), we get \[\hat{X}_{\text{out},m}=\sqrt{G_{a}}\hat{X}_{o,m}\,,\quad\hat{Y}_{\text{out},m}= \frac{1}{\sqrt{G_{a}}}\hat{Y}_{o,m}\,. \tag{3.31}\] Note that this second squeezing does not affect the signal-to-noise ratio because it amplifies both the signal and noise at the same time. Instead, the AMP JPA plays a role in overwhelming the noise added by the following circuits, including the amplifier. Technically, the direction of squeezing by JPAs is determined by the phase of the AC power input to them. In order to give a difference to the direction of amplification by the SQ and AMP JPA, the phase shifter between the microwave generator and the SQ JPA shifts the phase of the microwaves by \(\pi/2\). #### 3.3.2 Homodyne measurement Now we need to measure the \(\hat{X}\) quadrature exclusively to obtain a high signal-to-noise ratio beyond the SQL. This is possible by using another quantum measurement technique, the homodyne measurement. We will briefly review the theory of the homodyne measurement. A schematic of the homodyne measurement is shown in the lower right part of fig. 3. First, let \(\ket{\psi}\) be the signal state of our setup, i.e., the squeezed state output from the AMP JPA. Also, in this subsection, we use the abbreviation for notation of the corresponding annihilation operator and quadratures, \(\hat{a}\), \(\hat{X}\), and \(\hat{Y}\), representing \(\hat{a}_{\text{out},m}\), \(\hat{X}_{\text{out},m}\), and \(\hat{Y}_{\text{out},m}\), respectively. The homodyne measurement requires a local oscillator that has the same mode as that of the signal photons. We write the annihilation operator of the local oscillator by \(\hat{B}\), and set the initial state of the local oscillator to a coherent state \[\ket{\beta}\equiv e^{-\ket{\beta}^{2}/2}\sum_{n=0}^{\infty}\frac{\beta^{n}}{ \sqrt{n!}}\ket{n}\,. \tag{3.32}\] Here, \(\beta\equiv\ket{\beta}e^{i\theta}\) and \(\ket{n}\) is the Fock state of \(n\) photons. The initial state of the total system is defined as \(\ket{\Psi}\equiv\ket{\psi}\ket{\beta}\). The signal photons and the local oscillator are split in half and mixed by a beam splitter. As a result, we obtain two beams whose annihilation operators are \[\hat{a}^{\prime}=\frac{\hat{a}-\hat{B}}{\sqrt{2}}\,,\quad\hat{B}^{\prime}= \frac{\hat{a}+\hat{B}}{\sqrt{2}}\,. \tag{3.33}\] Next, we observe the difference \(\hat{R}\) between the amplitudes of those two beams by a differential amplifier: \[\hat{R} \equiv\hat{B}^{\prime\dagger}\hat{B}^{\prime}-\hat{a}^{\prime \dagger}\hat{a}^{\prime}\] \[=\frac{\hat{a}+\hat{a}^{\dagger}}{\sqrt{2}}\ \frac{\hat{B}+\hat{B}^{ \dagger}}{\sqrt{2}}+\frac{\hat{a}-\hat{a}^{\dagger}}{\sqrt{2}i}\ \frac{\hat{B}-\hat{B}^{ \dagger}}{\sqrt{2}i}\,. \tag{3.34}\] The expectation value of \(\hat{R}\) is calculated as \[\langle\Psi|\hat{R}|\Psi\rangle = \langle\psi|\bigg{(}\frac{\hat{a}+\hat{a}^{\dagger}}{\sqrt{2}}\ \frac{\beta+\beta^{*}}{\sqrt{2}}+\frac{\hat{a}-\hat{a}^{\dagger}}{\sqrt{2}i}\ \frac{\beta-\beta^{*}}{\sqrt{2}i}\bigg{)}|\psi\rangle \tag{3.35}\] \[= \sqrt{2}|\beta|\,\langle\psi|(\hat{X}\cos\theta+\hat{Y}\sin \theta)|\psi\rangle\.\] This equation means that we can measure only one component of quadratures by observing \(\hat{R}\). For example, if \(\theta=0\), we can measure only the \(\hat{X}\) quadrature. If we tune the phase \(\theta\) to be the same as the phase of amplification by the AMP JPA, we can measure only the amplified quadrature. This tuning is possible by using the same microwave generator for the AMP JPA and the local oscillator of the homodyne measurement; see fig. 3. Thus, the expectation value of the normalized observable \(\hat{R}^{\prime}\equiv\hat{R}/\sqrt{2}|\beta|\) becomes \[\langle\Psi|\hat{R}^{\prime}|\Psi\rangle=\,\langle\psi|\hat{X}| \psi\rangle. \tag{3.36}\] Furthermore, the measurement error of the operator \(\hat{R}^{\prime}\) is \[\langle\Psi|(\hat{R}^{\prime}-\hat{X})^{2}|\Psi\rangle=\frac{ \langle\psi|\hat{a}^{\dagger}\hat{a}|\psi\rangle}{2|\beta|^{2}}\,, \tag{3.37}\] which converges to zero in the limit of \(|\beta|\to\infty\). Therefore, \(\hat{X}\) can be accurately measured through the homodyne measurement using the local oscillator with a large number of photons. ## 4 Sensitivity We determine the sensitivity of our setup using a similar test statistic as the one used in [34] in a limit where the signal power is considerably smaller than the noise power. Here, we briefly outline our statistical analysis, leaving details of calculations to appendix A. In order to determine the 95% exclusion limits by a log-likelihood ratio test statistic, we calculate the following quantity \[q\equiv-\frac{T_{\rm int}}{2\pi}\int_{0}^{\infty}\mathrm{d} \omega\,\frac{S(\Delta\omega)^{2}}{B(\Delta\omega)^{2}}, \tag{4.1}\] where \(T_{\rm int}\) is the experimental integration time, \(\Delta\omega\equiv\omega-\omega_{L}\), and \(S(\Delta\omega)\) and \(B(\Delta\omega)\) are the signal and the noise spectral density, respectively. The 95% exclusion limits are obtained by solving \(q=-2.71\). According to the calculation in appendix A, we obtain \[q\simeq\left\{\begin{aligned} &-6.9\times 10^{58}g_{ann}^{4} \bigg{(}\frac{T_{\rm int}}{1\,{\rm day}}\bigg{)}\bigg{(}\frac{m_{a}}{1\,{\rm ne }{\rm V}}\bigg{)}^{-7/2}\bigg{(}\frac{G_{s}}{10^{2}}\bigg{)}^{1/2}\bigg{(} \frac{M}{100\,{\rm g}}\bigg{)}^{2}\bigg{(}\frac{Q}{10^{11}}\bigg{)}^{3/2}\\ &\qquad\qquad{\rm for}\quad Q/G_{s}\gtrsim 1.1\times 10^{8}\\ &-2.5\times 10^{49}g_{ann}^{4}\bigg{(}\frac{T_{\rm int}}{1\,{\rm day}} \bigg{)}\bigg{(}\frac{m_{a}}{1\,{\rm ne}{\rm V}}\bigg{)}^{-3}\bigg{(}\frac{M} {100\,{\rm g}}\bigg{)}^{2}\bigg{(}\frac{Q}{10^{6}}\bigg{)}^{2}\\ &\qquad\qquad{\rm for}\quad Q/G_{s}\lesssim 1.1\times 10^{8} \end{aligned}\right.\,, \tag{4.2}\] where we defined the dimensionless coupling \(g_{ann}\equiv C_{ann}m_{n}/f_{a}\). Equation (4.2) shows that \(q\) does not depend on \(G_{s}\) when \(Q/G_{s}\lesssim 1.1\times 10^{8}\), i.e., the sensitivity cannot be improved by the squeezing unless we achieve high \(Q/G_{s}\). Note that since the cavity is placed under a low temperature \(T<T_{c}\sim\mathcal{O}(1)\) mK, the sensitivity does not depend on \(T\) but is limited by the quantum fluctuation. Solving \(q=-2.71\), we estimate the expected exclusion limits on the axion-neutron coupling as \[g_{ann}\simeq\begin{cases}2.5\times 10^{-15}\bigg{(}\frac{T_{\text{ int}}}{1\,\text{day}}\bigg{)}^{-1/4}\bigg{(}\frac{m_{a}}{1\,\text{\mu eV}} \bigg{)}^{7/8}\bigg{(}\frac{G_{s}}{10^{2}}\bigg{)}^{-1/8}\bigg{(}\frac{M}{100 \,\text{g}}\bigg{)}^{-1/2}\bigg{(}\frac{Q}{10^{11}}\bigg{)}^{-3/8}\\ \qquad\qquad\text{for}\quad Q/G_{s}\gtrsim 1.1\times 10^{8}\\ 5.7\times 10^{-13}\bigg{(}\frac{T_{\text{int}}}{1\,\text{day}}\bigg{)}^{-1/4} \bigg{(}\frac{m_{a}}{1\,\text{\mu eV}}\bigg{)}^{3/4}\bigg{(}\frac{M}{100\, \text{g}}\bigg{)}^{-1/2}\bigg{(}\frac{Q}{10^{6}}\bigg{)}^{-1/2}\\ \qquad\qquad\text{for}\quad Q/G_{s}\lesssim 1.1\times 10^{8}\end{cases}. \tag{4.3}\] In our setup, we scan the magnetic field \(B_{z}\) and the cavity size so that the axion dark matter with mass \(m_{a}\simeq\omega_{L}=\omega_{\text{cavity}}\) can be searched for. Each scan step has a sensitivity on the axion mass width \(\sim 1/\tau\) around the Larmor frequency \[m_{a}\sim 0.13\,\text{\mu eV}\left(\frac{B_{z}}{1\,\text{T}}\right). \tag{4.4}\] For simplicity, we approximate the sensitivity curve for each scan by a rectangle with width \(1/\tau_{\text{mag}}\) instead of using a Breit-Wigner shape. The typical size of the cavity \(L_{\text{cavity}}\) is estimated by evaluating the corresponding Compton length as \[L_{\text{cavity}}\sim 1.2\,\text{m}\left(\frac{1\,\text{\mu eV}}{m_{a}} \right). \tag{4.5}\] The upper limit of the axion mass that can be searched by our experiment is determined by the upper limit of the magnetic field \(B_{z}\). We adopt \(25\,\text{T}\) as the maximum of \(B_{z}\), which can be regarded as realistic as planned for example in CAPP25T by IBS/BNL [79].4 Footnote 4: As a more optimistic option, \(\sim 45\,\text{T}\) is also planned to be developed [80]. The squeezing level \(G_{s}\) is also crucial for sensitivity estimation. Here, we summarize the current status of the squeezing level in various experiments including the gravitational wave telescope. The squeezing levels are usually represented in the unit of dB, and \(x\,\text{dB}\) of squeezing corresponds to \(G_{s}=10^{x/10}\) in our setup. In the context of the gravitational wave detection, \(6\,\text{dB}\) quantum noise reduction (corresponding to \(G_{s}=10^{0.6}\)) has already been reported [81], while the HAYSTAC experiment of the axion dark matter detection has achieved \(4\,\text{dB}\)[14]. Even larger values have already been achieved for the squeezed state production of light, such as \(8\,\text{dB}\) for the microwave and the terahertz range [82; 83], and \(15\,\text{dB}\) for the megahertz range [84]. It is notable, however, that a hindrance to using the squeezing state for the quantum measurement is the optical loss, which is one of the main obstacles that we have to tackle to improve the sensitivity further (see the discussion in section 5). In fig. 5, We show the 95% exclusion limits on the axion-neutron coupling \(g_{ann}\) with three setups under the total integration time \(T_{\rm tot}=2\,\)years. The green region shows the sensitivity of a realistic setup for the axion search; \(T_{\rm int}=1\,\)day, \(M=100\,\)g, and \(Q=10^{6}\), and quantum measurement techniques are not applied (\(G_{s}=0\,\)dB). This setup corresponds to the case of \(Q/G_{s}<1.1\times 10^{8}\). The blue region represents an example of the high-\(Q/G_{s}\) setup; \(T_{\rm int}=1\,\)day, \(M=100\,\)g, \(G_{s}=0\,\)dB, and \(Q=10^{9}\). In both examples, we scan the magnetic field within the range \(6.0\,{\rm T}\lesssim B_{z}\lesssim 25\,\)T with a fixed scan step width Figure 5: The 95% exclusion limit on the axion-neutron coupling \(g_{ann}\). We plotted the sensitivities for three setups, all of which are operated with the mass of the superfluid \({}^{3}\)He target \(M=100\,\)g, the scan width \(1/\tau=1/\tau_{\rm mag}\simeq 0.8\,\)MHz, and the total integration time \(T_{\rm tot}=2\,\)years. The green plot shows the sensitivity for a realistic setup with the integration time for each scan \(T_{\rm int}=1\,\)day, and the cavity’s quality factor \(Q=10^{6}\), without quantum measurements. The blue plot shows the same setup as the green one but with a higher-quality cavity, \(Q=10^{9}\). The feasible upper limit of the magnetic field \(B_{z}=25\,\)T corresponds to the upper limits of the axion mass \(m_{a}\simeq 3.25\,\)ueV for both cases. The magenta plot is for an ideal setup with \(T_{\rm int}=30\,\)days, and \(Q=10^{11}\) in addition to the quantum measurement techniques with the squeezing factor \(G_{s}=20\,\)dB. The upper limit of the axion mass for this case is set to \(m_{a}=1\,\)ueV. We also plotted the region already constrained by stellar physics [85; 86], the prospects of the CASPEr-gradient experiment [26] and the proposal with the homogeneous precession domain [33], the prediction of the DFSZ model with \(0.28\lesssim\tan\beta\lesssim 140\), and that of the KSVZ model. This figure is made by using the public code [87]. \(\Delta B_{z}\simeq 2.6\times 10^{-2}\,\mathrm{T}\) corresponding to the axion mass width \(1/\tau=1/\tau_{\mathrm{mag}}\). If future experiments succeed in discovering axion through its interaction with photons and/or electrons and its mass is already known, our experiment will also help to investigate the axion interaction with neutrons. For this purpose, we do not need to scan a wide mass range, but instead, we can spend a long integration time with a fixed size of magnetic field corresponding to the discovered axion mass. We show the sensitivity of a setup aimed at this purpose in fig. 5 with a longer integration time \(T_{\mathrm{int}}=30\,\mathrm{days}\), a larger amount of \({}^{3}\mathrm{He}\) (\(M=100\,\mathrm{g}\)), the use of squeezing (\(G_{s}=20\,\mathrm{dB}\)), and a higher-quality cavity (\(Q=10^{11}\)). Note that such a high-\(Q\) cavity has been achieved using nitrogen doping for superconducting niobium cavities [88]. These values should be understood as an ultimate goal to be achieved to detect the QCD axion-like interaction with neutrons of the DFSZ axion with \(\sin\beta\sim\mathcal{O}(1)\) and of the KSVZ axion. ## 5 Conclusion and Discussion In this paper, we proposed to use the nuclear magnon modes in the ferromagnetic A\({}_{1}\) phase of the superfluid \({}^{3}\mathrm{He}\) for the axion dark matter detection. We stressed the importance of this approach as a way to detect the axion-nucleon coupling, which is one of the most important features of the QCD axion. As a detection method of the nuclear magnon, we proposed to use the mixing between the magnon and the cavity photon modes, which then allows us to use quantum measurement technologies such as squeezing and the homodyne measurement to enhance the dark matter-induced signal. We showed the quantum mechanical description of our approach and derived the corresponding sensitivity on the axion-neutron coupling \(g_{ann}\). As shown in section 4, quantum measurement technologies turned out to be useful for enhancing the sensitivity to weak signals induced by the axion dark matter when we have a high-quality cavity. As a result, we obtained a sensitivity to axion with \(\mathcal{O}(1)\,\mathrm{neV}\) mass under a realistic setup, which exceeds the current best constraints by stellar physics. Our ideal setup in section 4 relies on the high-squeezing technology that has potential to enhance the signal significance by orders of magnitude. Currently, the squeezing parameter \(G_{s}\) has reached \(15\,\mathrm{dB}\)[84], and it is expected that the practical use of \(20\,\mathrm{dB}\) will be achieved in the near future [89]. However, the squeezing effect is suppressed when there is a transmission loss. For example, when we identify the transmission efficiency between the SQ JPA and the cavity and that between the cavity and the AMP JPA as \(\lambda\), the effective squeezing parameter \(S\) becomes \[S\simeq\left(1-\lambda+\frac{\lambda}{G_{s}}\right)^{-1}, \tag{10}\] as shown in [39]. Thus, even in the limit of \(G_{s}\to\infty\), \(S\) plateaus to \((1-\lambda)^{-1}\). Therefore, almost lossless transmission (\(\lambda\simeq 1\)) should be developed in order to make full use of the squeezing technology and enhance the sensitivity further. Finally, we comment that there are several other quantum measurement technologies that can also be applied to the dark matter detection. These technologies include two mode squeezing and state swapping interactions [40; 45]. For a higher frequency range, single photon counting is also a viable approach [46]. There is an attempt using superconducting qubits, which have reduced the noise to 15.7 dB below the standard quantum limit through the repeated quantum non-demolition measurements [44]. ## Acknowledgements We thank Yoji Ohashi and Masahito Ueda for giving advice on the superfluid \({}^{3}\)He, and Yuta Michimura for information about the quantum technology of gravitational wave detectors and the likely achievements in squeezing. RO also thanks Satoshi Shirai and Keiichi Watanabe for pointing out a typo in the sensitivity calculation. SC and HM are supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under the Contract No. DE-AC02-05CH1123. RO and HS are supported by Forefront Physics and Mathematics Program to Drive Transformation (FoPM), a World-leading Innovative Graduate Study (WINGS) Program, the University of Tokyo. DK and HM are supported by the Beyond AI Institute, the University of Tokyo. HM is also supported in part by the NSF grant PHY-2210390, by the JSPS Grant-in-Aid for Scientific Research JP20K03942, MEXT Grant-in-Aid for Transformative Research Areas (A) JP20H05850, JP20A203, by WPI, MEXT, Japan, and Hamamatsu Photonics, K.K. ## Appendix A Statistical treatment of noise ### Formulation In this section, we derive the expression eq. (4.2) of our test statistic \(q\). This quantity has been introduced as a parameter for a log-likelihood ratio test in [34]. We consider a quantum formulation of our system including the magnon and the cavity modes and apparatuses for squeezing and the homodyne measurement, and use it to evaluate the signal and the background spectral densities. We start with the following Hamiltonian for the cavity mode \(\hat{c}\) and background modes interacting with the cavity mode: \[H_{\rm tot} =H_{\rm sys}+H_{\rm int}+H_{\rm B}\,,\] (A.1) \[H_{\rm sys} =\omega_{L}\hat{c}^{\dagger}\hat{c}+\omega_{L}\hat{d}^{\dagger} \hat{d}-i\frac{\Gamma_{\rm mag}}{2}\hat{d}^{\dagger}\hat{d}+ig_{\rm eff}(\hat{ c}^{\dagger}\hat{d}-\hat{c}\hat{d}^{\dagger}),\] (A.2) \[H_{\rm int} =i\sum_{j=m,l}\sqrt{\frac{\kappa_{j}}{2\pi}}\int{\rm d}\omega \left[\hat{c}^{\dagger}\hat{a}_{j}(\omega)-\hat{c}\hat{a}^{\dagger}_{j}( \omega)\right]+i\sqrt{\frac{\kappa_{a}}{2\pi}}\int{\rm d}\omega\left[\hat{d}^ {\dagger}\hat{a}_{a}(\omega)-\hat{d}\hat{a}^{\dagger}_{a}(\omega)\right]\!,\] (A.3) \[H_{\rm B} =\sum_{j=m,l,a}\int{\rm d}\omega\,\omega\,\hat{a}^{\dagger}_{j}( \omega)\hat{a}_{j}(\omega),\] (A.4) where \(\hat{d}\) is the annihilation operator of magnon defined by eq. (2.49) and \(\Gamma_{\rm mag}\equiv\tau_{\rm mag}^{-1}\) is the bandwidth of magnon, and we used the rotating wave approximation. The last term of eq. (A.2) describes the mixing of magnons and cavity modes, and the magnon field \(\hat{d}\) has been redefined in comparison to eq. (3.21) for later convenience. Equation (A.3) represents the measurement of the cavity mode, the loss of the cavity electromagnetic field, and the magnon excitation by axions as interactions with three ports: the measurement port \(\hat{a}_{m}\), the loss port \(\hat{a}_{l}\), and the axion port \(\hat{a}_{a}\), respectively. The coupling constant for the loss port \(\kappa_{l}\) is determined with the cavity quality factor \(Q\) as \(\kappa_{l}=m_{a}/Q\). In Heisenberg picture, the equations of motion for \(\hat{c}(t)\), \(\hat{d}(t)\), and \(\hat{a}_{j}(\omega,t)\) are \[\frac{\mathrm{d}\hat{c}(t)}{\mathrm{d}t} =-i\omega_{L}\hat{c}(t)+g_{\mathrm{eff}}\hat{d}(t)+\sum_{j=m,l} \sqrt{\frac{\kappa_{j}}{2\pi}}\int\mathrm{d}\omega\,\hat{a}_{j}(\omega), \tag{100}\] \[\frac{\mathrm{d}\hat{d}(t)}{\mathrm{d}t} =-i\omega_{L}\hat{d}(t)-g_{\mathrm{eff}}\hat{c}(t)-\frac{\Gamma_{ \mathrm{mag}}}{2}\hat{d}(t)+\sqrt{\frac{\kappa_{a}}{2\pi}}\int\mathrm{d}\omega \,\hat{a}_{a}(\omega),\] (101) \[\frac{\mathrm{d}\hat{a}_{j}(\omega,t)}{\mathrm{d}t} =-i\omega\hat{a}_{j}(\omega,t)-\sqrt{\frac{\kappa_{j}}{2\pi}} \begin{cases}\hat{c}(t)&(j=m,l)\\ \hat{d}(t)&(j=a)\end{cases}. \tag{102}\] The formal solution of eq. (102) is written with an initial time \(t_{\mathrm{in}}\) (\(<t\)) as \[\hat{a}_{j}(\omega,t)=e^{-i\omega(t-t_{\mathrm{in}})}\hat{a}_{j}(\omega,t_{ \mathrm{in}})-\sqrt{\frac{\kappa_{j}}{2\pi}}\int_{t_{\mathrm{in}}}^{t}\mathrm{ d}t^{\prime}\ e^{-i\omega(t-t^{\prime})}\begin{cases}\hat{c}(t^{\prime})&(j=m,l)\\ \hat{d}(t^{\prime})&(j=a)\end{cases}. \tag{103}\] Substituting eq. (103) into eqs. (100) and (101), we get the Heisenberg-Langevin equations, \[\frac{\mathrm{d}\hat{c}(t)}{\mathrm{d}t} =-i\omega_{L}\hat{c}(t)-\frac{\kappa_{c}}{2}\hat{c}(t)+g_{ \mathrm{eff}}\hat{d}(t)+\sum_{j=m,l}\sqrt{\frac{\kappa_{j}}{2\pi}}\int \mathrm{d}\omega\ e^{-i\omega(t-t_{\mathrm{in}})}\hat{a}_{j}(\omega,t_{ \mathrm{in}}), \tag{104}\] \[\frac{\mathrm{d}\hat{d}(t)}{\mathrm{d}t} =-i\omega_{L}\hat{d}(t)-\frac{\kappa_{d}}{2}\hat{d}(t)-g_{ \mathrm{eff}}\hat{c}(t)+\sqrt{\frac{\kappa_{a}}{2\pi}}\int\mathrm{d}\omega\ e^{-i \omega(t-t_{\mathrm{in}})}\hat{a}_{j}(\omega,t_{\mathrm{in}}), \tag{105}\] where \(\kappa_{c}\equiv\kappa_{m}+\kappa_{l}\) and \(\kappa_{d}\equiv\kappa_{a}+\Gamma_{\mathrm{mag}}\). We define the so-called input field as \[\hat{a}_{s,j}(t)\equiv\frac{1}{\sqrt{2\pi}}\int\mathrm{d}\omega\ e^{-i\omega( t-t_{\mathrm{in}})}\hat{a}_{j}(\omega,t_{\mathrm{in}}). \tag{106}\] The input field of the measurement port is shown in fig. 3. In terms of the input fields, eqs. (104) and (105) are rewritten as \[\frac{\mathrm{d}\hat{c}(t)}{\mathrm{d}t} =-i\omega_{L}\hat{c}(t)-\frac{\kappa_{c}}{2}\hat{c}(t)+g_{ \mathrm{eff}}\hat{d}(t)+\sum_{j=m,l}\sqrt{\kappa_{j}}\hat{a}_{s,j}(t), \tag{107}\] \[\frac{\mathrm{d}\hat{d}(t)}{\mathrm{d}t} =-i\omega_{L}\hat{d}(t)-\frac{\kappa_{d}}{2}\hat{d}(t)-g_{ \mathrm{eff}}\hat{c}(t)+\sqrt{\kappa_{a}}\hat{a}_{s,a}(t). \tag{108}\] We can formally solve eq. (102) with another time \(t_{\mathrm{out}}\) (\(>t>t_{\mathrm{in}}\)) as \[\hat{a}_{j}(\omega,t)=e^{-i\omega(t-t_{\mathrm{out}})}\hat{a}_{j}(\omega,t_{ \mathrm{out}})-\sqrt{\frac{\kappa_{j}}{2\pi}}\int_{t_{\mathrm{out}}}^{t}\mathrm{ d}t^{\prime}\ e^{-i\omega(t-t^{\prime})}\begin{cases}\hat{c}(t^{\prime})&(j=m,l)\\ \hat{d}(t^{\prime})&(j=a)\end{cases}, \tag{109}\] and we can also define the output field as \[\hat{a}_{o,j}(t)\equiv\frac{1}{\sqrt{2\pi}}\int\mathrm{d}\omega\ e^{-i\omega( t-t_{\mathrm{out}})}\hat{a}_{j}(\omega,t_{\mathrm{out}}). \tag{110}\] Noting that the right-hand sides of eqs. (104) and (105) have the same form, we find the input-output relation by integrating them by \(\omega\): \[\hat{a}_{o,j}(t)=\hat{a}_{s,j}(t)-\sqrt{\kappa_{j}}\begin{cases}\hat{c}(t)&(j=m, l)\\ \hat{d}(t)&(j=a)\end{cases}. \tag{106}\] The output field of the measurement port is also shown in fig. 3. Transforming into the rotating frame, i.e., \(\hat{A}(t)\to\hat{A}(t)e^{-i\omega_{L}t}\) for all annihilation operators, we can eliminate the first terms in eqs. (102) and (103). Thus, in the Fourier domain, we can solve eq. (103) as \[\hat{d}(\Delta\omega)=\frac{1}{\kappa_{d}/2-i\Delta\omega}[-g_{\rm eff}\hat{c }(\omega)+\sqrt{\kappa_{a}}\hat{a}_{s,a}(\Delta\omega)], \tag{107}\] where \(\Delta\omega\equiv\omega-\omega_{L}\). Substituting this into eq. (102) and using the input-output relation for the measurement port, we get \[\hat{a}_{o,m}(\Delta\omega)=\sum_{j=m,l,a}\chi_{j}(\Delta\omega)\hat{a}_{s,j}( \Delta\omega), \tag{108}\] where \[\chi_{j}(\Delta\omega)=\delta_{mj}-\left(\frac{\kappa_{c}}{2}+ \frac{g_{\rm eff}^{2}}{\kappa_{d}/2-i\Delta\omega}-i\Delta\omega\right)^{-1}\] \[\qquad\qquad\times\sqrt{\kappa_{m}\kappa_{j}}\begin{cases}1&(j=m,l)\\ \frac{g_{\rm eff}}{\kappa_{d}/2-i\Delta\omega}&(j=a)\end{cases}. \tag{109}\] Note that the susceptibility \(\chi_{j}(\Delta\omega)\) satisfies \(\chi_{j}^{*}(-\Delta\omega)=\chi_{j}(\Delta\omega)\). We move to the quadrature basis by \[\begin{pmatrix}\hat{X}(\Delta\omega)\\ \hat{Y}(\Delta\omega)\end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ -i&i\end{pmatrix}\begin{pmatrix}\hat{a}(\Delta\omega)\\ \hat{a}^{\dagger}(-\Delta\omega)\end{pmatrix}\equiv\mathbf{P}\begin{pmatrix}\hat{a }(\Delta\omega)\\ \hat{a}^{\dagger}(-\Delta\omega)\end{pmatrix}. \tag{110}\] We would like to relate the output operator \(\hat{X}_{\rm out,m}\), which goes out from the AMP JPA, with the input operator \(\hat{X}_{\rm in,m}\), which comes into the SQ JPA, by an SSR/cavity susceptibility \(\Xi_{j}\) as \(\hat{X}_{\rm out,m}=\sum_{j}\Xi_{j}\hat{X}_{\rm in,j}\). First, the SQ JPA squeezes \(\hat{X}_{\rm in,m}\): \[\vec{\hat{X}}_{s,m}(\Delta\omega)=\frac{1}{\sqrt{G_{s}}}\vec{\hat{X}}_{\rm in,m}(\Delta\omega), \tag{111}\] while leaving the operators at other ports unaffected. The SQ JPA amplifies the other quadrature at the measurement port \(\hat{Y}_{\rm in,m}\) by \(\sqrt{G_{s}}\), but we do not track the \(\hat{Y}\) quadratures because we will only measure the \(\hat{X}\) quadrature. Using eq. (108), we find that the susceptibility in the quadrature basis is the same as that in the original basis \(\chi_{j}(\Delta\omega)\): \[\begin{pmatrix}\hat{X}_{o,m}(\Delta\omega)\\ \hat{Y}_{o,m}(\Delta\omega)\end{pmatrix} =\mathbf{P}\begin{pmatrix}\sum_{j}\chi_{j}(\Delta\omega)&0\\ 0&\sum_{j}\chi_{j}^{*}(-\Delta\omega)\end{pmatrix}\mathbf{P}^{-1}\begin{pmatrix} \hat{X}_{s,j}(\Delta\omega)\\ \hat{Y}_{s,j}(\Delta\omega)\end{pmatrix}\] \[=\sum_{j}\begin{pmatrix}\chi_{j}(\Delta\omega)&0\\ 0&\chi_{j}(\Delta\omega)\end{pmatrix}\begin{pmatrix}\hat{X}_{s,j}(\Delta\omega )\\ \hat{Y}_{s,j}(\Delta\omega)\end{pmatrix}. \tag{112}\] Hence, \(\hat{X}_{o,m}(\Delta\omega)=\sum_{j}\chi_{j}(\Delta\omega)\hat{X}_{s,j}(\Delta\omega)\). Finally, the AMP JPA performs amplification with a squeezing parameter \(G_{a}\) as \[\hat{X}_{\text{out},m}(\Delta\omega)=\sqrt{G_{a}}\hat{X}_{o,m}(\Delta\omega). \tag{101}\] As a result, we get the SSR/cavity susceptibility, \[\Xi_{j}(\Delta\omega)=\begin{cases}\sqrt{\frac{G_{a}}{G_{s}}}\chi_{j}(\Delta \omega)&(j=m)\\ \sqrt{G_{a}}\chi_{j}(\Delta\omega)&(j=l,a)\end{cases}, \tag{102}\] and accordingly, \[\hat{X}_{\text{out},m}(\Delta\omega) =\sum_{j=m,l,a}\Xi_{j}(\Delta\omega)\hat{X}_{\text{in},j}(\Delta\omega)\] \[=\sqrt{G_{a}}\Bigg{[}\frac{\chi_{m}(\Delta\omega)}{\sqrt{G_{s}}} \hat{X}_{\text{in},m}(\Delta\omega)+\sum_{j=l,a}\chi_{j}(\Delta\omega)\hat{X} _{\text{in},j}(\Delta\omega)\Bigg{]}. \tag{103}\] Next, we calculate the output power spectral density (PSD), \(P(\Delta\omega)\). Equation (103) leads to \[P(\Delta\omega) \equiv\frac{1}{T_{\text{int}}}\left\langle\hat{X}_{\text{out},m}^ {\dagger}(\Delta\omega)\hat{X}_{\text{out},m}(\Delta\omega)\right\rangle\] \[=\frac{G_{a}}{T_{\text{int}}}\Bigg{[}\frac{\left|\chi_{mm}(\Delta \omega)\right|^{2}}{G_{s}}\left\langle\hat{X}_{\text{in},m}^{\dagger}(\Delta \omega)\hat{X}_{\text{in},m}(\Delta\omega)\right\rangle\] \[\qquad+\sum_{j=l,a}\left|\chi_{mj}(\Delta\omega)\right|^{2} \left\langle\hat{X}_{\text{in},j}^{\dagger}(\Delta\omega)\hat{X}_{\text{in}, j}(\Delta\omega)\right\rangle\Bigg{]}, \tag{104}\] where \(T_{\text{int}}\) is the integration time for each scan. The input spectral densities are \[\frac{1}{T_{\text{int}}}\left\langle\hat{X}_{\text{in},m}^{ \dagger}(\Delta\omega)\hat{X}_{\text{in},m}(\Delta\omega)\right\rangle=\frac{ 1}{T_{\text{int}}}\left\langle\hat{X}_{\text{in},l}^{\dagger}(\Delta\omega) \hat{X}_{\text{in},l}(\Delta\omega)\right\rangle=n_{T}+\frac{1}{2}, \tag{105}\] \[\frac{1}{T_{\text{int}}}\left\langle\hat{X}_{\text{in},a}^{ \dagger}(\Delta\omega)\hat{X}_{\text{in},a}(\Delta\omega)\right\rangle=n_{a}+ \frac{1}{2}, \tag{106}\] where \(n_{T}\) and \(n_{a}\) are the numbers of the input thermal photon and the axion per unit time per unit bandwidth, respectively. We assumed that the thermal noise dominates the input noise for the measurement port and the loss port. Note that \(n_{T}\) and \(1/2\)s in the spectral density matrix correspond to the thermal and the quantum noises, respectively. We also assume \(n_{T}\ll 1/2\) since our experiment is operated under a low temperature \(T<T_{c}\simeq\mathcal{O}(1)\) mK, which is lower than the SQL temperature \(T_{\text{SQL}}=m_{a}/k_{B}\sim\mathcal{O}(10)\) mK. Decomposing the PSD into the signal and the noise part, we get the signal and the noise spectral densities \(S(\Delta\omega)\) and \(B(\Delta\omega)\), \[S(\Delta\omega) =\frac{G_{a}}{T_{\rm int}}\bigg{|}\frac{\kappa_{c}}{2}+\frac{g_{\rm eff }^{2}}{\kappa_{d}/2-i\Delta\omega}-i\Delta\omega\bigg{|}^{-2}\frac{g_{\rm eff}^ {2}\kappa_{m}}{(\kappa_{d}/2)^{2}+(\Delta\omega)^{2}}\kappa_{a}n_{a},\] (A.29) \[B(\Delta\omega) =\frac{G_{a}}{2T_{\rm int}}\bigg{|}\frac{\kappa_{c}}{2}+\frac{g_{ \rm eff}^{2}}{\kappa_{d}/2-i\Delta\omega}-i\Delta\omega\bigg{|}^{-2}\] \[\quad\times\Bigg{[}\frac{1}{G_{s}}\Bigg{\{}\bigg{(}\frac{-\kappa_ {m}+\kappa_{l}}{2}+\frac{g_{\rm eff}^{2}}{(\kappa_{d}/2)^{2}+(\Delta\omega)^{ 2}}\frac{\kappa_{d}}{2}\bigg{)}^{2}+\bigg{(}\frac{g_{\rm eff}^{2}}{(\kappa_{d} /2)^{2}+(\Delta\omega)^{2}}-1\bigg{)}^{2}(\Delta\omega)^{2}\bigg{\}}\] \[\qquad\qquad+\kappa_{m}\kappa_{l}+\frac{g_{\rm eff}^{2}\kappa_{m }\kappa_{a}}{(\kappa_{d}/2)^{2}+(\Delta\omega)^{2}}\Bigg{]}.\] (A.30) ### Creation Rate of Magnons Let us compute the creation rate of magnons in order to estimate \(\kappa_{a}n_{a}\). We start with the equation of motion for the magnon operator, \[\frac{{\rm d}\hat{d}(t)}{{\rm d}t}=-i\omega_{L}\hat{d}(t)-\frac{\Gamma_{\rm mag }}{2}\hat{d}(t)-i\frac{C_{ann}}{f_{a}}\sqrt{\rho_{a}N}v_{a}^{+}(t)\sin[\omega_{ L}t+\varphi(t)],\] (A.31) where the last term comes from the axion-magnon interaction derived in eq. (3.8). Under the assumption \(\hat{d}(0)=0\), the formal solution is \[\hat{d}(t)=-i\frac{C_{ann}}{f_{a}}\sqrt{\rho_{a}N}\int_{0}^{t}{\rm d}t^{\prime }\ e^{(-i\omega_{L}-\Gamma_{\rm mag}/2)(t-t^{\prime})}v_{a}^{+}(t^{\prime}) \sin[\omega_{L}t^{\prime}+\varphi(t^{\prime})].\] (A.32) It is convenient to introduce the autocorrelation function \(C(t,t^{\prime})\equiv\left\langle\hat{d}^{\dagger}(t)\hat{d}(t^{\prime})\right\rangle\), where the expectation value is taken for the stochastic values: the axion velocity \(v_{a}(t)\) and the phase \(\varphi(t)\). For \(t,t^{\prime}\gg\tau_{a}\), where \(\tau_{a}\) is the axion coherence time \(\tau_{a}\simeq(m_{a}v_{a}^{2})^{-1}\), we can compute \(C(t,t^{\prime})\) as \[C(t,t^{\prime}) =\bigg{(}\frac{C_{ann}}{f_{a}}\bigg{)}^{2}\rho_{a}N\int_{0}^{t}{ \rm d}\bar{t}\int_{0}^{t^{\prime}}{\rm d}\bar{t}^{\prime}\ e^{(+i\omega_{L}- \Gamma_{\rm mag}/2)(t-\bar{t})}e^{(-i\omega_{L}-\Gamma_{\rm mag}/2)(t^{\prime}- \bar{t}^{\prime})}\] \[\qquad\times\left\langle v_{a}^{-}(\bar{t})v_{a}^{+}(\bar{t}^{ \prime})\sin[\omega_{L}\bar{t}+\varphi(\bar{t})]\sin[\omega_{L}\bar{t}^{ \prime}+\varphi(\bar{t}^{\prime})]\right\rangle\] \[\simeq\bigg{(}\frac{C_{ann}}{f_{a}}\bigg{)}^{2}\rho_{a}N\int_{0}^{ t}{\rm d}\bar{t}\int_{0}^{t^{\prime}}{\rm d}\bar{t}^{\prime}\ e^{(+i\omega_{L}-\Gamma_{\rm mag}/2)(t-\bar{t})}e^{(-i \omega_{L}-\Gamma_{\rm mag}/2)(t^{\prime}-\bar{t}^{\prime})}\] \[\qquad\times\frac{1}{3}v_{a}^{2}\cos[\omega_{L}(\bar{t}-\bar{t}^{ \prime})]\Theta(\tau_{a}-\left|\bar{t}-\bar{t}^{\prime}\right|)\] \[\simeq\frac{2}{3}\bigg{(}\frac{C_{ann}}{f_{a}}\bigg{)}^{2}\rho_{a }Nv_{a}^{2}\tau_{a}e^{i\omega_{L}(t-t^{\prime})}e^{-(\Gamma_{\rm mag}/2)(t+t^ {\prime})}\Gamma_{\rm mag}^{-1}\Big{[}e^{\Gamma_{\rm mag}\min[t,t^{\prime}]}-1 \Big{]}.\] (A.33) In order to get the second line, we used an assumption that stochastic quantities do not correlate unless \(|\bar{t}-\bar{t}^{\prime}|<\tau_{a}\). The spectral density is obtained by Fourier-transforming \(C(t,t^{\prime})\), \[\frac{1}{T_{\rm int}}\left\langle\hat{d}^{\dagger}(\omega)\hat{d} (\omega)\right\rangle =\frac{1}{T_{\rm int}}\int_{0}^{T_{\rm int}}{\rm d}t\int_{0}^{T_{ \rm int}}{\rm d}t^{\prime}\ e^{-i\omega(t-t^{\prime})}C(t,t^{\prime})\] \[\simeq\frac{8}{3}\bigg{(}\frac{C_{ann}}{f_{a}}\bigg{)}^{2}\rho_{a }Nv_{a}^{2}\tau_{a}\frac{1}{\Gamma_{\rm mag}^{2}+4\Delta\omega^{2}}.\] (A.34) Here, we used \(T_{\rm int}\gg\Gamma_{\rm mag}^{-1}\). Next, we will estimate \(\kappa_{a}n_{a}\). The solution of eq. (A.13) in the Fourier domain with \(\hat{c}=0\) leads to \[\hat{a}_{s,a}(\Delta\omega)=\frac{\kappa_{d}/2-i\Delta\omega}{\sqrt{\kappa_{a}}} \hat{d}^{\dagger}(\Delta\omega)\simeq\frac{\Gamma_{\rm mag}/2-i\Delta\omega}{ \sqrt{\kappa_{a}}}\hat{d}^{\dagger}(\Delta\omega),\] (A.35) where we assumed \(\kappa_{a}\ll\Gamma_{\rm mag}\). Thus, \(n_{a}\) is estimated as \[n_{a} =\frac{1}{T_{\rm int}}\left\langle\hat{a}_{{\rm in},a}^{\dagger}( \Delta\omega)\hat{a}_{{\rm in},a}(\Delta\omega)\right\rangle\] \[\simeq\frac{(\Gamma_{\rm mag}/2)^{2}+\Delta\omega^{2}}{\kappa_{a} }\frac{1}{T_{\rm int}}\left\langle\hat{d}^{\dagger}(\Delta\omega)\hat{d}( \Delta\omega)\right\rangle\] \[=\frac{2}{3\kappa_{a}}\bigg{(}\frac{C_{ann}}{f_{a}}\bigg{)}^{2} \rho_{a}Nv_{a}^{2}\tau_{a},\] (A.36) and hence, \[\kappa_{a}n_{a}\simeq\frac{2}{3}\bigg{(}\frac{C_{ann}}{f_{a}}\bigg{)}^{2} \rho_{a}Nv_{a}^{2}\tau_{a}.\] (A.37) Considering that \(n_{a}\) has a bandwidth \(\Delta_{a}\simeq m_{a}v_{a}^{2}\), which was neglected in eq. (A.34) because \(\Delta_{a}\gg\Gamma_{\rm mag}\), we should modify eq. (A.37) as \[\kappa_{a}n_{a}\simeq\frac{2}{3}\bigg{(}\frac{C_{ann}}{f_{a}}\bigg{)}^{2}\rho _{a}Nv_{a}^{2}\tau_{a}\Theta(\Delta_{a}/2-|\Delta\omega|).\] (A.38) ### Test Statistic In order to determine the 95% exclusion limit, we introduce a log-likelihood ratio test statistic \(q\)[34; 90]. It is computed in the limits of \(T_{\rm int}\gg\tau_{\rm mag},\tau_{a}\) and \(S(\Delta\omega)\ll B(\Delta\omega)\) as \[q=-\frac{T_{\rm int}}{2\pi}\int_{0}^{\infty}{\rm d}\omega\ \bigg{(}\frac{S(\Delta\omega)}{B(\Delta\omega)}\bigg{)}^{2}.\] (A.39) When we assume \(\kappa_{a}\ll\kappa_{l}\ll\{g_{\rm eff},\Gamma_{\rm mag},\kappa_{m}\}\), we can approximate \(q\) as \[q\simeq-\frac{8g_{\rm eff}^{4}\kappa_{m}^{2}N^{2}g_{ann}^{4}\rho _{a}^{2}v_{a}^{4}\tau_{a}^{2}T_{\rm int}}{9\pi m_{n}^{4}}G_{s}^{2}\] \[\qquad\qquad\times\int_{-\Delta_{a}/2}^{\Delta_{a}/2}{\rm d}( \Delta\omega)\ \bigg{[}(\Delta\omega)^{4}+\Bigg{\{}\Big{(}\frac{\kappa_{m}}{2}\Big{)}^{2}+ \bigg{(}\frac{\Gamma_{\rm mag}}{2}\bigg{)}^{2}-2g_{\rm eff}^{2}+G_{s}\kappa_{ m}\kappa_{l}\Bigg{\}}(\Delta\omega)^{2}\] \[\qquad\qquad\qquad+\bigg{(}\frac{\kappa_{m}}{2}\frac{\Gamma_{ \rm mag}}{2}-g_{\rm eff}^{2}\bigg{)}^{2}+G_{s}\kappa_{m}\kappa_{l}\bigg{(} \frac{\Gamma_{\rm mag}}{2}\bigg{)}^{2}\Bigg{]}^{-2}.\] (A.40) The 95% exclusion limit corresponds to the point \(q\simeq-2.71\). The parameter \(\kappa_{m}\) determines the speed of the signal readout, and we can choose the optimal coupling \(\kappa_{m}\) so that the size of the test statistic \(|q|\) is maximized. For this purpose, we maximize the following integral, \[I(\kappa_{m}) =\kappa_{m}^{2}\int_{-\Delta_{a}/2}^{\Delta_{a}/2}\mathrm{d}(\Delta \omega)\,\left[(\Delta\omega)^{4}+\left\{\left(\frac{\kappa_{m}}{2}\right)^{2}+ \left(\frac{\Gamma_{\mathrm{mag}}}{2}\right)^{2}-2g_{\mathrm{eff}}^{2}+G_{s} \kappa_{m}\kappa_{l}\right\}(\Delta\omega)^{2}\right.\] \[\qquad\qquad\left.+\left(\frac{\kappa_{m}}{2}\frac{\Gamma_{ \mathrm{mag}}}{2}-g_{\mathrm{eff}}^{2}\right)^{2}+G_{s}\kappa_{m}\kappa_{l} \bigg{(}\frac{\Gamma_{\mathrm{mag}}}{2}\bigg{)}^{2}\right]^{-2}\] \[\equiv\kappa_{m}^{2}\int_{-\Delta_{a}/2}^{\Delta_{a}/2}\mathrm{d }(\Delta\omega)\,\,\frac{1}{[\Delta\omega^{4}+\xi(\kappa_{m})\Delta\omega^{2} +\zeta(\kappa_{m})]^{2}}. \tag{100}\] The approximate width of the integrand is \[\delta\equiv 2\sqrt{\frac{-|\xi|+\sqrt{\xi^{2}+4\zeta}}{2}}=\begin{cases}2 \zeta^{1/4}&\mathrm{for}\quad\xi^{2}\ll\zeta\\ 2\sqrt{\zeta/|\xi|}&\mathrm{for}\quad\xi^{2}\gg\zeta\end{cases}, \tag{101}\] which is equal to the full width at half maximum of the integrand when \(\xi>0\). As \(\xi\) increases, the peak width of the integrand narrows but the height remains the same. On the other hand, as \(\zeta\) decreases, the peak becomes narrower and higher. We discuss the maximization of \(I(\kappa_{m})\) in two cases: \(\delta\ll\Delta_{a}\) and \(\delta\gg\Delta_{a}\). In what follows, we will fix the value of \(g_{\mathrm{eff}}\) as \(g_{\mathrm{eff}}/2\pi\equiv 0.3\,\mathrm{MHz}\) for the purpose of order estimation, following eq. (38). Note that the precise value of \(g_{\mathrm{eff}}\) highly depends on the shape of the cavity and thus should be evaluated after fixing the experimental setups. 1. \(\delta\ll\Delta_{a}\) It is necessary to fine-tune \(\kappa_{m}\) to \(\kappa_{m}^{*}\simeq 4g_{\mathrm{eff}}^{2}/\Gamma_{\mathrm{mag}}\) in order to have \(\delta\ll\Delta_{a}\). With this optimal coupling, \(\xi\) and \(\zeta\) become \[\xi(\kappa_{m}^{*}) =\left(\frac{\Gamma_{\mathrm{mag}}}{2}\right)^{2}\Biggl{[}\left( \frac{2g_{\mathrm{eff}}}{\Gamma_{\mathrm{mag}}}\right)^{2}-1\Biggr{]}^{2}+4G_{ s}\kappa_{l}\frac{g_{\mathrm{eff}}^{2}}{\Gamma_{\mathrm{mag}}}\] \[\simeq 6.5\times 10^{1}\,\mathrm{MHz}^{2}+2.6\times 10^{4} \,\mathrm{MHz}^{2}\biggl{(}\frac{G_{s}}{Q}\biggr{)}\biggl{(}\frac{m_{a}}{1\, \mathrm{\mathrm{\mathrm{\mu eV}}}}\biggr{)},\] (102) \[\zeta(\kappa_{m}^{*}) =G_{s}\kappa_{l}g_{\mathrm{eff}}^{2}\Gamma_{\mathrm{mag}}\] \[\simeq 4.4\times 10^{3}\,\mathrm{MHz}^{4}\biggl{(}\frac{G_{s}}{Q} \biggr{)}\biggl{(}\frac{m_{a}}{1\,\mathrm{\mathrm{\mathrm{\mu eV}}}}\biggr{)}.\] (103) If \(Q/G_{s}\gtrsim 4\times 10^{3}\), the first term in eq. (102) dominates \(\xi\), while if \(Q/G_{s}\lesssim 4\times 10^{3}\), the second term in eq. (102) dominates \(\xi\). Thus, \(\delta\) reads \[\delta =2\sqrt{\zeta/\xi}\] \[\simeq\begin{cases}4\sqrt{\frac{G_{s}\kappa_{l}}{\Gamma_{\mathrm{ mag}}}}g_{\mathrm{eff}}\Biggl{|}\biggl{(}\frac{2g_{\mathrm{eff}}}{\Gamma_{\mathrm{mag}}} \biggr{)}^{2}-1\Biggr{|}^{-1}\\ \qquad\simeq 1.6\times 10^{1}\,\mathrm{MHz}\,\left(\frac{G_{s}}{Q} \right)^{1/2}\biggl{(}\frac{m_{a}}{1\,\mathrm{\mathrm{\mu eV}}}\biggr{)}^{1/2 }&\mathrm{for}\quad Q/G_{s}\gtrsim 4\times 10^{3}\\ \Gamma_{\mathrm{mag}}\simeq 8.3\times 10^{-1}\,\mathrm{MHz}&\mathrm{for} \quad Q/G_{s}\lesssim 4\times 10^{3}\end{cases}.\] (104) In order to achieve \(\delta\ll\Delta_{a}\) where \[\Delta_{a}\simeq m_{a}v_{a}^{2}\simeq 1.5\,\text{kHz}\,\left(\frac{m_{a}}{1\, \text{\rm{\mu eV}}}\right), \tag{101}\] we further need to take \(Q/G_{s}\gtrsim 1.1\times 10^{8}\). Under this condition, \(I(\kappa_{m}^{*})\) is calculated as \[I(\kappa_{m}^{*}) \simeq(\kappa_{m}^{*})^{2}\int_{-\infty}^{\infty}\text{d}(\Delta \omega)\ \frac{1}{[\Delta\omega^{4}+\xi(\kappa_{m}^{*})\Delta\omega^{2}+\zeta(\kappa_{m} ^{*})]^{2}}\] \[=(\kappa_{m}^{*})^{2}\frac{\pi}{2\sqrt{\xi(\kappa_{m}^{*})\zeta( \kappa_{m}^{*})^{3}}}. \tag{102}\] Therefore, we get \[q \simeq-\frac{8g_{\text{eff}}^{4}N^{2}g_{ann}^{4}\rho_{a}^{2}v_{a}^ {4}\tau_{a}^{2}T_{\text{int}}}{9\pi m_{n}^{4}}G_{s}^{2}\cdot 16\frac{g_{ \text{eff}}^{4}}{\Gamma_{\text{mag}}^{2}}\frac{\pi}{2}\frac{2}{\Gamma_{\text {mag}}}\Bigg{|}\bigg{(}\frac{2g_{\text{eff}}}{\Gamma_{\text{mag}}}\bigg{)}^{2 }-1\Bigg{|}^{-1}\big{(}G_{s}\kappa_{l}g_{\text{eff}}^{2}\Gamma_{\text{mag}} \big{)}^{-3/2}\] \[=-\frac{128g_{\text{eff}}^{5}N^{2}g_{ann}^{4}\rho_{a}^{2}T_{\text {int}}G_{s}^{1/2}Q^{3/2}}{9m_{n}^{4}\Gamma_{\text{mag}}^{9/2}m_{a}^{7/2}}\Bigg{|} \bigg{(}\frac{2g_{\text{eff}}}{\Gamma_{\text{mag}}}\bigg{)}^{2}-1\Bigg{|}^{-1}\] \[\simeq-6.9\times 10^{58}g_{ann}^{4}\bigg{(}\frac{T_{\text{int}}}{1 \,\text{day}}\bigg{)}\bigg{(}\frac{m_{a}}{1\,\text{\rm{\mu eV}}}\bigg{)}^{-7/ 2}\bigg{(}\frac{G_{s}}{10^{2}}\bigg{)}^{1/2}\bigg{(}\frac{M}{100\,\text{g}} \bigg{)}^{2}\bigg{(}\frac{Q}{10^{11}}\bigg{)}^{3/2}. \tag{103}\] Case 2. \(\delta\gg\Delta_{a}\) When \(Q/G_{s}\lesssim 1.1\times 10^{8}\), we have \(\delta\gg\Delta_{a}\), and thus \(I(\kappa_{m})\) reads \[I(\kappa_{m})\simeq\kappa_{m}^{2}\frac{\Delta_{a}}{\zeta(\kappa_{m})^{2}}. \tag{104}\] This is maximized by taking the optimal coupling \(\kappa_{m}=\kappa_{m}^{*}=4g_{\text{eff}}^{2}/\Gamma_{\text{mag}}\). Then \(q\) becomes \[q =-\frac{8g_{\text{eff}}^{4}N^{2}g_{ann}^{4}\rho_{a}^{2}v_{a}^{4} \tau_{a}^{2}T_{\text{int}}}{9\pi m_{n}^{4}}\Delta_{a}\Bigg{[}\kappa_{l}\bigg{(} \frac{\Gamma_{\text{mag}}}{2}\bigg{)}^{2}\Bigg{]}^{-2}\] \[=-\frac{128g_{\text{eff}}^{4}N^{2}g_{ann}^{4}\rho_{a}^{2}v_{a}^{2} T_{\text{int}}Q^{2}}{9\pi m_{n}^{4}\Gamma_{\text{mag}}^{4}m_{a}^{3}}\] \[\simeq-2.5\times 10^{49}g_{ann}^{4}\bigg{(}\frac{T_{\text{int}}}{1 \,\text{day}}\bigg{)}\bigg{(}\frac{m_{a}}{1\,\text{\rm{\mu eV}}}\bigg{)}^{-3} \bigg{(}\frac{G_{s}}{10^{2}}\bigg{)}^{0}\bigg{(}\frac{M}{100\,\text{g}}\bigg{)} ^{2}\bigg{(}\frac{Q}{10^{6}}\bigg{)}^{2}. \tag{105}\] One can see that the sensitivity cannot be improved by squeezing since \(q\) does not depend on \(G_{s}\) in this case. ## Appendix B Josephson parametric amplifier (JPA) Here, we will review about Josephson parametric amplifier (JPA). ### Effective description Let us imagine that the circuit model has two junctions at \(x_{1}\) and \(x_{2}\). The effective Hamiltonian is given by \[H =\int\mathrm{d}^{3}x\left[\frac{1}{2m}|\vec{D}\Psi_{1}|^{2}+V(\Psi_ {1})+\frac{1}{2m}|\vec{D}\Psi_{2}|^{2}+V(\Psi_{2})\right] \tag{114}\] \[\qquad+\alpha\Psi_{2}^{*}\Psi_{1}\left[\delta(x-x_{1})+\delta(x- x_{2})\right]+\mathrm{h.c.}, \tag{115}\] where \(x_{1},x_{2}\) represent the places of the junctions, and \(\Psi_{1},\Psi_{2}\) are the wave functions of Cooper pairs, where \(\Psi_{1}(x)\) lives within the interval \(x_{1}<x<x_{2}\), while \(\Psi_{2}(x)\) within \(x_{2}<x<x_{1}\) (note that this is a loop). In the case of a circuit without any junctions, energy minimization requires \[\vec{0}=\vec{D}\Psi=\vec{\nabla}\Psi-i2e\vec{A}\Psi=i|\Psi|(\vec{ \nabla}\theta-2e\vec{A}), \tag{116}\] where \(\theta\) is the phase of \(\Psi\) and \(\vec{A}\) is the photon field. Integrating along the circuit, we get \[2\pi n=\oint\vec{\nabla}\theta=\oint 2e\vec{A}=2e\Phi, \tag{117}\] with an integer \(n\). We used the single-valuedness of the wave function for the left equation, and \(\Phi\) is the magnetic flux. For the Josephson junction, however, \(\vec{\nabla}\theta_{1,2}\) need to be treated independently and hence the magnetic flux does not need to be quantized, \[2e\Phi=\oint\vec{\nabla}\theta=\int_{x_{1}}^{x_{2}}\vec{\nabla} \theta_{1}+\int_{x_{2}}^{x_{1}}\vec{\nabla}\theta_{2}=\theta_{1}(x_{2})-\theta _{1}(x_{1})+\theta_{2}(x_{1})-\theta_{2}(x_{2}). \tag{118}\] For our purposes, we are not interested in the dynamics in the bulk of the superconductor where all excitations are gapped but rather only in the junctions where the phase degrees of freedom can have much smaller excitation energies. Noting the canonical commutation relation \[[\Psi(x),\Psi^{\dagger}(y)]=\delta(x-y), \tag{119}\] and rewriting it as \(\Psi(x)=\sqrt{N(x)}e^{i\theta}(x)\), we can derive5 Footnote 5: Considering the periodicity of \(\theta\), the right commutation relation of these variables is \[[e^{i\theta(x)},\,N(y)]=e^{i\theta(x)}\delta(x-y).\] \[[\theta(x),N(y)]=i\delta(x-y). \tag{120}\] In addition, we are only interested in the phase differences across the junction. Therefore we reduce the degrees of freedom down to \(\vartheta_{1}=\theta_{2}(x_{1})-\theta_{1}(x_{1})\) and \(\vartheta_{2}=\theta_{1}(x_{2})-\theta_{2}(x_{2})\) subject to the constraint \(\vartheta_{1}+\vartheta_{2}=2e\Phi\). On the other hand, across the junctions we expect a capacitance \(C\) so that the Hamiltonian contains \[\frac{Q(x_{1,2})^{2}}{2C}=\frac{(2e)^{2}}{2C}n_{1,2}^{2}, \tag{121}\] where we defined \(n_{1}=\frac{1}{2}N_{2}(x_{1})-\frac{1}{2}N_{1}(x_{1})\) and \(n_{2}=\frac{1}{2}N_{1}(x_{2})-\frac{1}{2}N_{2}(x_{2})\). Here we made a simplification that the capacitance is the same for both junctions. Combining them together, we find the simplified Hamiltonian \[H=\frac{2e^{2}}{C}(n_{1}^{2}+n_{2}^{2})+2\alpha(\sqrt{N_{1}(x_{1})N_{2}(x_{1})} \cos\vartheta_{1}+\sqrt{N_{1}(x_{2})N_{2}(x_{2})}\cos\vartheta_{2}). \tag{111}\] We define \(\vartheta\equiv(\vartheta_{1}-\vartheta_{2})/2\) and its canonical conjugate \(n=n_{1}-n_{2}\). Assuming that two Josephson energies are same, \(2\alpha\sqrt{N_{1}(x_{1})N_{2}(x_{1})}=2\alpha\sqrt{N_{1}(x_{2})N_{2}(x_{2})}= -E_{J}\), we get \[H=\frac{e^{2}}{C}n^{2}-2E_{J}\cos(e\Phi)\cos(\vartheta). \tag{112}\] Here we neglected the term proportional to \((n_{1}+n_{2})^{2}\) since the value of \(n_{1}+n_{2}\) is conserved. ### Flux-driven Josephson parametric amplifier In the following, we explain how the flux-driven Josephson parametric amplifiers (FJPA) work in squeezing and amplifying the signal. We consider the simplified theoretical model of the FJPA (fig. 6). The FJPA consists of a SQUID biased by the external flux \(\Phi_{\rm ext}\) and a shunting capacitance \(C_{t}\) and is connected to the input/output port. Similarly to eq. (112), The Hamiltonian describing the resonator part of FJPA is \[H_{\rm sys} =\frac{(2en)^{2}}{2C_{t}}-E_{J}\cos\vartheta_{1}-E_{J}\cos \vartheta_{2}\] \[=4E_{C}n^{2}-E_{J}^{\rm eff}(\Phi_{\rm ext})\cos\vartheta, \tag{113}\] where \(E_{C}=e^{2}/(2C_{t})\), \(E_{J}^{\rm eff}(\Phi_{\rm ext})=2E_{J}\cos(e\Phi_{\rm ext})=2E_{J}\cos(\pi \Phi_{\rm ext}/\Phi_{0})\). We set the DC part of \(\Phi_{\rm ext}\) to the quarter of magnetic flux quantum, i.e., bias the amplifier at \(\Phi_{DC}=\Phi_{0}/4\). Figure 6: Schematic of a flux-driven Josephson parametric amplifier (FJPA). It consists of a SQUID biased by the external flux \(\Phi_{\rm ext}=\Phi_{\rm DC}+\Phi_{\rm AC}\cos(\alpha\omega_{c}t)\) and a shunting capacitance \(C_{t}\), and is connected to the input/output port. We assume two junctions in the SQUID have the same Josephson energies \(E_{J}\). In the absence of a pump, \[H_{\rm sys}=4E_{C}n^{2}-\sqrt{2}E_{J}\cos\vartheta. \tag{114}\] Expanding \(\cos\vartheta\) to order \(\vartheta^{2}\), we can write the Hamiltonian by the ladder operator. \[H_{\rm sys}=\omega_{c}\hat{a}^{\dagger}\hat{a}, \tag{115}\] where \[\vartheta =\left(\frac{\sqrt{2}E_{C}}{E_{J}}\right)^{1/4}(\hat{a}^{\dagger }+\hat{a}), \tag{116}\] \[n =\frac{i}{2}\left(\frac{E_{J}}{\sqrt{2}E_{C}}\right)^{1/4}(\hat{a }^{\dagger}-\hat{a}),\] (117) \[\omega_{c} =2\sqrt{2\sqrt{2}E_{C}E_{J}}. \tag{118}\] Next, we consider including the AC part of the external field \(\Phi_{\rm ext}\) due to the pumping \[\Phi_{\rm ext}=\Phi_{\rm DC}+\Phi_{\rm AC}\cos(\alpha\omega_{c}t). \tag{119}\] We set the AC part of \(\Phi_{\rm ext}\) smaller than the DC part \(\Phi_{\rm AC}\ll\Phi_{\rm DC}\), and evaluate \(E_{J}^{\rm eff}(\Phi_{\rm ext})\) as \[E_{J}^{\rm eff}(\Phi_{\rm ext}) \simeq E_{J}^{\rm eff}(\Phi_{\rm DC})+\left.\frac{\partial E_{J}^{ \rm eff}(\Phi)}{\partial\Phi}\right|_{\Phi=\Phi_{\rm DC}}\Phi_{\rm AC}\cos( \alpha\omega_{c}t)\] \[=\sqrt{2}E_{J}-\left(\frac{\pi\Phi_{\rm AC}}{\Phi_{0}}\right) \sqrt{2}E_{J}\cos(\alpha\omega_{c}t). \tag{120}\] Thus, Hamiltonian \(H_{\rm sys}\) becomes \[H_{\rm sys}\simeq\omega_{c}\hat{a}^{\dagger}\hat{a}+\mu_{r}\cos(\alpha\omega_ {c}t)(\hat{a}^{\dagger}+\hat{a})^{2}, \tag{121}\] where \(\mu_{r}=\frac{\pi\Phi_{\rm AC}}{\Phi_{0}}\left(\frac{E_{C}E_{J}}{\sqrt{2}} \right)^{\frac{1}{2}}\). We focus on the parametric amplifier region (\(\alpha\simeq 2\)). Applying rotating wave approximation, we can estimate \(H_{\rm sys}\) as \[H_{\rm sys}\simeq\omega_{c}\hat{a}^{\dagger}\hat{a}+\frac{\mu_{r}}{2}e^{i \alpha\omega_{c}t}\hat{a}^{2}+\frac{\mu_{r}}{2}e^{-i\alpha\omega_{c}t}\hat{a} ^{\dagger 2}. \tag{122}\] We assume the resonator has a semi-infinite waveguide mode (the annihilation operator of which is denoted as \(\hat{b}_{k}\)) connected as an input/output port and also has internal losses in the resonator (the annihilation operator of which is denoted as \(\hat{c}_{k}\)). The schematic of this parametric amplifier using opto-mechanical analogy is fig. 7. The total Hamiltonian describing this is \[H_{\rm tot} =H_{\rm sys}+H_{\rm sig}+H_{\rm loss}, \tag{123}\] \[H_{\rm sys} =\omega_{c}\hat{a}^{\dagger}\hat{a}+\frac{\mu_{r}}{2}e^{i\alpha \omega_{c}t}\hat{a}^{2}+\frac{\mu_{r}}{2}e^{-i\alpha\omega_{c}t}\hat{a}^{ \dagger 2},\] (124) \[H_{\rm sig} =\int\mathrm{d}\omega\left[\omega\hat{b}^{\dagger}(\omega)\hat{b} (\omega)+i\sqrt{\frac{\kappa_{e}}{2\pi}}(\hat{a}^{\dagger}\hat{b}(\omega)- \hat{b}^{\dagger}(\omega)\hat{a})\right],\] (125) \[H_{\rm loss} =\int\mathrm{d}\omega\left[\omega\hat{c}^{\dagger}(\omega)\hat{c }(\omega)+i\sqrt{\frac{\kappa_{i}}{2\pi}}(\hat{a}^{\dagger}\hat{c}(\omega)- \hat{c}^{\dagger}(\omega)\hat{a})\right]. \tag{126}\] ere, \(\kappa_{e}\) is the external loss rate of the resonator, and \(\kappa_{i}\) is the internal loss rate of the resonator. As we did in appendix A, we get Heisenberg equations for the resonator mode \(\hat{a}\) and the input-output relationship of the waveguide: \[\frac{\mathrm{d}\hat{a}(t)}{\mathrm{d}t} =\left(-i\omega_{c}-\frac{\kappa}{2}\right)\hat{a}(t)-i\mu_{r}e^{- i\alpha\omega_{c}t}\hat{a}^{\dagger}(t)+\sqrt{\kappa_{e}}\hat{b}_{\mathrm{in}}(t)+ \sqrt{\kappa_{i}}\hat{c}_{\mathrm{in}}(t), \tag{112}\] \[\hat{b}_{\mathrm{out}}(t) =\hat{b}_{\mathrm{in}}(t)-\sqrt{\kappa_{e}}\hat{a}(t), \tag{113}\] where \(\kappa\equiv\kappa_{i}+\kappa_{e}\). ### Resonator equation In this subsection, we neglect the internal loss (\(\kappa=\kappa_{e}\)) and switch to a frame rotating at the angular frequency \(\alpha\omega_{c}/2\), and define the following operators: \[\hat{A}(t) =e^{i\frac{\alpha}{2}\omega_{c}t}\hat{a}(t), \tag{114}\] \[\hat{B}_{\mathrm{in}\,(\mathrm{out})}(t) =e^{i\frac{\alpha}{2}\omega_{c}t}\hat{b}_{\mathrm{in}\,( \mathrm{out})}(t). \tag{115}\] Assuming \(\alpha=2\) for simplicity, the resonator equation (112) and the input-output relation become \[\frac{\mathrm{d}\hat{A}(t)}{\mathrm{d}t} =-\frac{\kappa}{2}\hat{A}(t)-i\mu_{r}\hat{A}^{\dagger}(t)+\sqrt{ \kappa}\hat{B}_{\mathrm{in}}(t),\] \[\hat{B}_{\mathrm{out}}(t) =\hat{B}_{\mathrm{in}}(t)-\sqrt{\kappa}\hat{A}(t), \tag{116}\] We consider the case with monochromatic incident light, i.e., \[\hat{B}_{\mathrm{in}}(t)=\hat{B}_{\mathrm{in}}(0)e^{-i\Delta\omega}, \tag{117}\] where \(\Delta\omega\equiv\omega-\omega_{c}\). In this case, the stationary solution of \(\hat{A}(t)\) has only two Fourier components \(e^{\pm i\Delta\omega t}\). The resonator equations for these components are \[-i\Delta\omega\Bigg{(}\begin{matrix}\hat{A}(\Delta\omega)\\ \hat{A}^{\dagger}(-\Delta\omega)\end{matrix}\Bigg{)}=\begin{pmatrix}-\kappa/ 2&-i\mu_{r}\\ +i\mu_{r}&-\kappa/2\end{pmatrix}\Bigg{(}\begin{matrix}\hat{A}(\Delta\omega)\\ \hat{A}^{\dagger}(-\Delta\omega)\end{matrix}\Bigg{)}+\sqrt{\kappa}\Bigg{(} \begin{matrix}\hat{B}_{\mathrm{in}}(0)\\ 0\end{matrix}\Bigg{)}, \tag{118}\] Figure 7: Schematic of the parametric amplifier. Here we use opto-mechanical analogy (resonator consisting of the cavity) instead of Josephson parametric amplifier. and \[+i\Delta\omega\Bigg{(}\begin{array}{c}\hat{A}(-\Delta\omega)\\ \hat{A}^{\dagger}(\Delta\omega)\end{array}\Bigg{)}=\begin{pmatrix}-\kappa/2&-i\mu _{r}\\ +i\mu_{r}&-\kappa/2\end{pmatrix}\begin{pmatrix}\hat{A}(-\Delta\omega)\\ \hat{A}^{\dagger}(\Delta\omega)\end{pmatrix}+\sqrt{\kappa}\begin{pmatrix}0\\ \hat{B}_{\text{in}}^{\dagger}(0)\end{pmatrix}. \tag{100}\] Solving these equations, we obtain \[\hat{A}(t)=\frac{\frac{\kappa}{2}-i\Delta\omega}{\left(\frac{\kappa}{2}-i \Delta\omega\right)^{2}-\mu_{r}^{2}}\sqrt{\kappa}\hat{B}_{\text{in}}(0)e^{-i \Delta\omega t}+\frac{-i\mu_{r}}{\left(\frac{\kappa}{2}+i\Delta\omega\right)^ {2}-\mu_{r}^{2}}\sqrt{\kappa}\hat{B}_{\text{in}}^{\dagger}(0)e^{+i\Delta \omega t}. \tag{101}\] The output field is derived using eq. (101) as \[\hat{B}_{\text{out}}(t)= \Bigg{[}1-\frac{\left(\frac{\kappa}{2}-i\Delta\omega\right)\kappa }{\left(\frac{\kappa}{2}-i\Delta\omega\right)^{2}-\mu_{r}^{2}}\Bigg{]}\hat{B} _{\text{in}}(0)e^{-i\Delta\omega t}\] \[\qquad\qquad+\frac{-i\mu_{r}\kappa}{\left(\frac{\kappa}{2}+i \Delta\omega\right)^{2}-\mu_{r}^{2}}\hat{B}_{\text{in}}^{\dagger}(0)e^{+i \Delta\omega t}. \tag{102}\] The first term represents the signal component, and the second term represents the idler component. When \(\Delta\omega=0\), these two modes degenerate. In this case, the output gain shows the phase-sensitivity. In order to verify this, we define the following quadratures: \[\hat{X}_{\theta} \equiv\frac{\hat{B}e^{-i\theta}+\hat{B}^{\dagger}e^{i\theta}}{ \sqrt{2}}, \tag{103}\] \[\hat{Y}_{\theta} \equiv\frac{\hat{B}e^{-i\theta}-\hat{B}^{\dagger}e^{i\theta}}{ \sqrt{2}i}. \tag{104}\] From eq. (102) with \(\Delta\omega=0\), we find \[\hat{X}_{\theta,\,\text{out}} =\left[1-\frac{\frac{\kappa^{2}}{2}}{\frac{\kappa^{2}}{4}-\mu_{r }^{2}}-\frac{\mu_{r}\kappa\sin(2\theta)}{\frac{\kappa^{2}}{4}-\mu_{r}^{2}} \right]\hat{X}_{\theta,\,\text{in}}-\frac{\mu_{r}\kappa\cos(2\theta)}{\frac{ \kappa^{2}}{4}-\mu_{r}^{2}}\hat{Y}_{\theta,\,\text{in}}, \tag{105}\] \[\hat{Y}_{\theta,\,\text{out}} =\left[1-\frac{\frac{\kappa^{2}}{2}}{\frac{\kappa^{2}}{4}-\mu_{r }^{2}}+\frac{\mu_{r}\kappa\sin(2\theta)}{\frac{\kappa^{2}}{4}-\mu_{r}^{2}} \right]\hat{Y}_{\theta,\,\text{in}}-\frac{\mu_{r}\kappa\cos(2\theta)}{\frac{ \kappa^{2}}{4}-\mu_{r}^{2}}\hat{X}_{\theta,\,\text{in}}. \tag{106}\] When \(\theta=(1/4+n)\pi\) (\(n\in\mathbb{Z}\)) in particular, they take the following form: \[\hat{X}_{\theta,\text{out}}=\sqrt{G}\hat{X}_{\theta,\text{in}},\quad\hat{Y}_ {\theta,\text{out}}=\frac{1}{\sqrt{G}}\hat{Y}_{\theta,\text{in}}, \tag{107}\] where the parameter \(G\) is \[G=\bigg{(}\frac{\mu_{r}+\frac{\kappa}{2}}{\mu_{r}-\frac{\kappa}{2}}\bigg{)}^ {2}. \tag{108}\] Equation (107) represents the squeezing by a JPA and is what we used in eqs. (100) and (101).
2309.06253
Optimal Quota for a Multi-species Fishing Models
A Stochastic Control Problem can be solved by Dynamic Programming or Distributed Optimal Control with the Kolmogorov equation for the probability density of the Markov process of the problem. It can be solved also with Supervised Learning. We shall compare these two classes of methods for the control of fisheries. Fishing quotas are unpleasant but efficient to control the productivity of a fishing site. A popular model has a vector-valued stochastic differential equation for the biomass of the different species. Optimization of quota will be obtained by a gradient method applied to the least square difference with an ideal state weighted by the probability density of the biomasses. Alternatively a deep neural network which preserves the Markov property of the problem can be trained with a stochastic gradient algorithm. The model is extended to distributed fishing sites and biomass is stabilized by adjusting the quota to its time derivative.
Olivier Pironneau
2023-09-12T14:14:10Z
http://arxiv.org/abs/2309.06253v1
# Optimal Quota for a Multi-species Fishing Models ###### Abstract A Stochastic Control Problem can be solved by Dynamic Programming or Distributed Optimal Control with the Kolmogorov equation for the probability density of the Markov process of the problem. It can be solved also with Supervised Learning. We shall compare these two classes of methods for the control of fisheries. Fishing quotas are unpleasant but efficient to control the productivity of a fishing site. A popular model has a vector-valued stochastic differential equation for the biomass of the different species. Optimization of quota will be obtained by a gradient method applied to the least square difference with an ideal state weighted by the probability density of the biomasses. Alternatively a deep neural network which preserves the Markov property of the problem can be trained with a stochastic gradient algorithm. The model is extended to distributed fishing sites and biomass is stabilized by adjusting the quota to its time derivative. keywords: MSC classification 93E20, 3504, 9B20, 92D25. Stochastic optimal control, partial differential equations, neural networks, population dynamics, control of fisheries. + Footnote †: journal: Optimization, Control and Numerical Analysis a Journal ## Introduction The increasing need for food has led to over fishing everywhere. To avoid extinction one must measure or model the biomass and experiment with various ways to control it. The mathematics of population dynamics are old (see Verhulst [19]). For competing species (fish included) Volterra [20] introduced the logistic predator-prey model in 1931. Since then, the model has been extended and used by many (see for instance [1],[9] and [8]) and the literature is enormous. For fisheries Schaefer [18] introduced an effort function \(E(t)\) - conveniently representing the number of fishing boats at sea- and a catchability coefficient \(q\) for each class of boats. In [6] an extension relating the fishing effort to the market price \(p\) of fish is analyzed. Multi-species models are straightforward vector generalizations of single species models, however their mathematical analysis and computer solutions are much harder. The special case of a single species with different aging groups is usually analyzed by standard population dynamics arguments (see "aged structured models" in [9]). Nevertheless, the complexity of the modeling can be grasped from [12], p73. The Mathematical literature on fishing quota is scarce [17]. In [11],[21],[7] the models are either too simple or analyzed in general terms for profitability and preservation without numerical simulations. Our purpose, in this article, is to show what stochastic optimization can offer to fisheries. We leave no competence to discuss the accuracy of the models in practice. In [2] Supervised Learning was shown to be efficient to calibrate the parameters of the fishing model of [6]. In [13] a stochastic control problem was derived for the computation of optimal quotas, a solution by Supervised Learning was proposed and compared to standard stochastic control solutions using the Hamilton-Jacobi-Bellman equations (HJB). In this article we compare a Distributed Control Method (an alternative to HJB) to a new deep neural network which is an interesting modification (due to P. Bras [4]) of the one used in [13]. A final remark about "common sense control" is made. In the last section the model of [16] and [13] is extended to distributed fishing sites and solved numerically by "common sense control" for the Atlantic ocean facing Senegal. Some references to multi-sites models are available in [14] and for open sea models in[11]. ## 1 The Single Fishing Site Model In simple situations, depleting of a sight due to fishing is proportional to the fish biomass \(B\) and related to the fishing effort \(E\) (the number of boats at sea) by \[\frac{\mathrm{d}B}{\mathrm{d}t}=B(r-\kappa B)-qBE, \tag{1}\] Here \(r\) is the natural birth minus death rate, \(r/\kappa\) is the capacity of the site and \(q\) is the catchability. The rate of the fishing effort is proportional to the difference between profit \(pB\) - where \(p\) is the price of fish - and the cost \(c\) of operating a fishing boat: \[\frac{dE}{dt}=pqBE-cE. \tag{2}\] When the market is liquid the price adjusts daily to balance supply \(qBE\) and demand \(D(p)\), taken here inversely proportional to \(1+bp\) with \(b\) fitted from past data. Thence a value for \(p\) is found and the model can be rescaled to \[\frac{\mathrm{d}B}{\mathrm{d}t}=B(r-\kappa B-qE),\ \ \frac{\mathrm{d}E}{ \mathrm{d}t}=a-(qB+c)E,\ B(0)=B_{0},\ E(0)=E_{0}. \tag{3}\] The model is easily extended to multi-species including a fishing quota \(Q_{i}<q\) on each species \(i=1,...,d\) and noise: \[\mathrm{d}\mathbf{B}_{t}=\mathbf{B}\star\left[(\mathbf{r}- \underline{\boldsymbol{\kappa}}\mathbf{B}-\mathbf{Q}E)\,\mathrm{d}t+ \boldsymbol{\sigma}\mathbf{d}\mathbf{W}_{t}\right], \mathbf{B}(0)=\mathbf{B}^{0}+\boldsymbol{\sigma}\text{"}\mathbf{N}_{0}^{ 1},\] \[\mathrm{d}E_{t}=(a-(\mathbf{B}:\mathbf{Q}+c)E)\,\mathrm{d}t+E \boldsymbol{\sigma}^{\prime}\mathrm{d}\mathbf{W}_{t}^{\prime}, E(0)=E^{0}+\sigma N_{0,1}. \tag{4}\] where \(\underline{\boldsymbol{\kappa}}\) is the capacity matrix, \(A\star B\) is the vector of component \(A_{i}B_{i}\) and where \(A:B\) is the sum of \(A_{i}B_{i}\). \(\mathbf{W}\), \(\mathbf{W}^{\prime}\), \(\mathbf{N}_{0,1}^{1}\) are Gaussian noises and \(\boldsymbol{\sigma}\), \(\boldsymbol{\sigma}^{\prime}\), \(\boldsymbol{\sigma}^{\prime\prime}\), \(\sigma\) are the variance-correlation matrices and variance coefficient. Note that the sign of \(\boldsymbol{\kappa}_{ij}\) indicates whether species \(i\) eat or is eaten by species \(j\). Noises are mathematical representations of the uncertainties on the parameters and on the model. ## 2 Identification of Coefficients Consider for simplicity a single species in absence of noise and assuming that \(q\) is known; then \(z:=[r,\kappa,a,c]\) must be identified. The easiest is to choose two dates \(t_{1},t_{2}\) and measure \(Z^{d}:=[X(t_{1}),E(t_{1}),X(t_{2}),E(t_{2})]\). It amounts to counting the number of boats at sea and how much fish were caught, on two different days. Surprisingly, a root finding algorithm like broyden1 (from the Python library scipy) works very well [2] on synthetic data (i.e. choose a set \(z_{0}\) to compute \(Z(z_{0})\), then invert numerically the mapping \(z\mapsto Z\)). The same can be achieved by least squares on the gap between the current state \(Z\) and an ideal state \(Z^{d}\). With noise, \(\mathbb{E}\) being the expected value, one must solve. \[\min_{z}\mathbb{E}[|Z-Z^{d}|^{2}]\ :\ \ \text{subject to (\ref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eqeq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eqeq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eq:eqeq:eqeq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eqeq:eq:eq:eq:eqeq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eqeq:eqeq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eqeq:eqeq:eq:eqeq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eqeq:eq:eqeq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eqeq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq the knowledge of \(E\) and \(\mathbf{u}\leq\mathbf{u}_{M}\) means that a global quota of \(\mathbf{B}\star\mathbf{u}_{M}\) is imposed. To translate it at the fisherman level requires an estimate of \(E\) (the number of boats at sea) before declaring the quota. As illegal fishing is hard to estimate, randomness in the model is welcome! Mathematically we may solve \[\min_{\mathbf{u}\in\mathcal{U}}\Big{\{} \bar{J}:=\int_{0}^{T}\mathbb{E}\left[|\mathbf{B}(t)-\mathbf{B}^{d }|^{2}\mathrm{d}t-\boldsymbol{\alpha}\cdot\mathbf{u}+\boldsymbol{\beta}\cdot[ \mathbf{u}]_{t}^{0,T}\right]\mathrm{d}t\ : \tag{5}\] \[\mathrm{d}\mathbf{B}_{t}=\mathbf{B}\star[(\mathbf{r}-\mathbf{u}- \underline{\mathbf{\kappa}}\mathbf{B})\mathrm{d}t+\boldsymbol{\underline{ \sigma}}\mathrm{d}\mathbf{W}_{t}]\,,\ \ \mathbf{B}(0)=\mathbf{B}^{0}+\boldsymbol{\underline{ \sigma}}^{\prime}\mathbf{N}_{0}^{1}\Big{\}}.\] The expectation is with respect to the laws on \(\mathbf{W}_{t}\) and \(\mathbf{B}^{0}\). To preserve the Markovian feature of the problem we assume that \(\mathbf{u}\) is a deterministic function \(\mathbf{x}\) and \(t\). Also \(\mathcal{U}=\{\mathbf{u}\ :\ \mathbf{u}_{j}\in[u_{m},u_{M}],\ j=1..d\}\). The quadratic variation is, \[[\mathbf{u}]_{t}^{0,T}=\lim_{\|P\|}\sum_{k=1..}^{t_{k}<t}|\mathbf{u}_{t_{k}}- \mathbf{u}_{t_{k-1}}|^{2}.\] where \(P\) ranges over partitions of the interval \((0,t)=\cup_{k}(t_{k-1},t_{k})\subset(0,t),\ t<T\) and the limit is in probability when \(\max_{k}|t_{k}-t_{k-1}|\to 0\). Here Ito calculus [3] tells us that: \[\mathbb{E}[\mathbf{u}]_{t}^{0,T}=\int_{0}^{t}\mathbb{E}[|\boldsymbol{\underline {\sigma}}\mathbf{B}_{t}\cdot\nabla_{\mathbf{B}}\mathbf{u}|^{2}]\mathrm{d}t.\] The term \(\boldsymbol{\alpha}\cdot\mathbf{u}\) encourages large quotas and represents the political cost of constraining the fishermen with small quotas; the term with \(\boldsymbol{\beta}\) is added to prevent large oscillations of \(\mathbf{u}\) from one day to the next. In [13] it is shown that the problem is well posed. A solution exists but it may not be unique. Three numerical methods for solutions have been analyzed in [13]: Stochastic Dynamic Programming, Hamilton-Jacobi-Bellman dynamic programming (HJB), and using Deep Neural Networks (DNN). Here we present a modified DNN proposed in [4] and compare the results with the solution of the (equivalent) distributed control problem using Kolmogorov's forward equation for the probability density of \(\mathbf{B}\). ### The Distributed Control Problem Assume for clarity that \(\boldsymbol{\underline{\sigma}}=\sigma\mathbf{I},\sigma\) constant and \(\boldsymbol{\beta}_{i}=\beta\) for all \(i\). The Kolmogorov equation for \(\rho(\mathbf{B},t)\), the PDF of \(\{\mathbf{B}_{t}\}_{0}^{T}\) is, \[\partial_{t}\rho+\nabla\cdot(\rho(\mathbf{r}-\underline{\mathbf{\kappa}} \mathbf{B}-\mathbf{u})\star\mathbf{B})-\nabla\cdot\nabla\cdot[\rho\frac{ \sigma^{2}}{2}\mathbf{B}\otimes\mathbf{B}]=0,\ \rho(\mathbf{B},0)=\rho^{0}(\mathbf{B}), \tag{6}\] for all \(t\in(0,T)\) and all \(\mathbf{B}\in\mathbf{R}:=\mathbb{R}^{+d}\). The solution of (5) is also the solution of \[\min_{\mathbf{u}\in\mathcal{U}}J(\mathbf{u}):=\int_{\mathbf{R}\times(0,T)}\left[ |\mathbf{B}-\mathbf{B}^{d}|^{2}-\boldsymbol{\alpha}\cdot\mathbf{u}(\mathbf{B}, t)+\beta|\sigma\mathbf{B}\nabla\mathbf{u}(\mathbf{B},t)|^{2}\right]\rho(\mathbf{B},t )\mathrm{d}B\mathrm{d}t, \tag{7}\] subject to (6). The conditions for having equivalence between the two control problems are detailed in [5]. ### Computation of gradients Consider the variational form of the Kolmogorov equation: find \(\rho\) such that, for all \(\hat{\rho}\), \[\int_{\mathbf{R}}\left(\hat{\rho}\partial_{t}\rho-\rho(\mathbf{r}-\underline{ \boldsymbol{\kappa}}\mathbf{B}-\mathbf{u})\star\mathbf{B}\cdot\nabla\hat{\rho}+ \frac{\sigma^{2}}{2}\nabla\hat{\rho}\cdot\nabla\cdot(\mathbf{B}\otimes \mathbf{B}\rho)\right)=0,\ \ \rho(0)=\rho^{0}.\] Calculus of variations says that a variation \(\delta\mathbf{u}\) yields a \(\delta\rho\) with \(\delta\rho(0)=0\) and \[\int_{\mathbf{R}}\left(\hat{\rho}\partial_{t}\delta\rho-\delta\rho(\mathbf{r}- \underline{\boldsymbol{\kappa}}\mathbf{B}-\mathbf{u})\star\mathbf{B}\cdot \nabla\hat{\rho}+\frac{\sigma^{2}}{2}\nabla\hat{\rho}\cdot\nabla\cdot(\mathbf{ B}\otimes\mathbf{B}\delta\rho)\right)=-\int_{\mathbf{R}}\rho\mathbf{B}\star \delta\mathbf{u}\nabla\hat{\rho}. \tag{8}\] Define the adjoint \(\rho^{*}\) by \(\rho^{*}(T)=0\) and, for all \(\hat{\rho}\), \[\int_{\mathbf{R}}\left(\partial_{t}\rho^{*}\hat{\rho}\right. +\hat{\rho}(\mathbf{r}-\underline{\boldsymbol{\kappa}}\mathbf{B}- \mathbf{u})\star\mathbf{B}\cdot\nabla\rho^{*}-\frac{\sigma^{2}}{2}\nabla\rho^{ *}\cdot\nabla\cdot(\mathbf{B}\otimes\mathbf{B}\hat{\rho})\] \[\left.+\hat{\rho}(|\mathbf{B}-\mathbf{B}^{d}|^{2}-\boldsymbol{ \alpha}\cdot\mathbf{u}+\beta|\sigma\mathbf{B}\nabla\mathbf{u}|^{2})\right)=0. \tag{9}\] Adding (8) with \(\hat{\rho}=\rho^{*}\) to (9) with \(\hat{\rho}=\delta\rho\) gives \[\int_{\mathbf{R}}[\partial_{t}(\rho^{*}\delta\rho)+(|\mathbf{B}-\mathbf{B}^{d }|^{2}-\boldsymbol{\alpha}\cdot\mathbf{u}+\beta|\sigma\mathbf{B}\nabla \mathbf{u}|^{2})\delta\rho]=-\int_{\mathbf{R}}\rho\mathbf{B}\star\delta \mathbf{u}\nabla\rho^{*}.\] As \(\rho^{*}(T)=0\) and \(\delta\rho(0)=0\), an integration in time gives \[\int_{\mathbf{R}\times|0,T|}(|\mathbf{B}-\mathbf{B}^{d}|^{2}-\boldsymbol{ \alpha}\cdot\mathbf{u}+\beta|\sigma\mathbf{B}\nabla\mathbf{u}|^{2})\delta \rho=-\int_{\mathbf{R}\times|0,T|}\rho\mathbf{B}\star\delta\mathbf{u}\nabla \rho^{*}.\] Finally, by differentiating \(J\) in (7), \[\delta J = \int_{\mathbf{R}\times[0,T]}\Big{[}\left(|\mathbf{B}-\mathbf{B}^ {d}|^{2}-\boldsymbol{\alpha}\cdot\mathbf{u}+\beta|\sigma\mathbf{B}\nabla \mathbf{u}|^{2})\right)\delta\rho \tag{10}\] \[- \boldsymbol{\alpha}\cdot\rho\delta\mathbf{u}+2\rho\beta\sigma^{2} \mathbf{B}\nabla\mathbf{u}:\mathbf{B}\nabla\delta\mathbf{u}\Big{)}\Big{]}\] \[= -\int_{\mathbf{R}\times]0,T|}\rho\Big{[}\mathbf{B}\star\delta \mathbf{u}\nabla\rho^{*}+\boldsymbol{\alpha}\cdot\delta\mathbf{u}-2\beta \sigma^{2}\mathbf{B}\nabla\mathbf{u}:\mathbf{B}\nabla\delta\mathbf{u}\Big{]}\] The computation of the gradient follows, because \(\delta J=<\mathrm{grad}_{u}J,\delta\mathbf{u}>+o(|\delta\mathbf{u}|)\). ### Numerical Simulation Two species are considered (d=2) with \(\sigma=\sigma^{\prime}=0.1\), \(\mathbf{B}_{1}(0)=1.2\), \(\mathbf{B}_{2}(0)=0.8\), \[\mathbf{r}=\begin{bmatrix}1.5\\ 1.5\end{bmatrix},\underline{\boldsymbol{\kappa}}=\begin{bmatrix}1.2&-0.1\\ 0.1&1.2\end{bmatrix},\boldsymbol{\alpha}=\begin{bmatrix}0.1\\ 0.1\end{bmatrix},\boldsymbol{\beta}=\begin{bmatrix}0.02\\ 0.02\end{bmatrix},q=1.3,u_{m}=0.4,u_{M}=1.4.\] A numerical simulation has been done using freefem [10], the finite element method and the optimization module ipopt (see [https://github.com/coin-or/Ipopt](https://github.com/coin-or/Ipopt)). Before optimization \(J=-0.24\) and after optimization \(J=-0.32\). For simplicity it is assumed that \(\mathbf{u}\) depends on \(\mathbf{B}\) but not on \(t\); it was shown numerically in [13] that the time dependence is small. The main difficulty is due to the non integrability of the right hand side in the adjoint equation. At all levels \(\mathbf{R}\) must be replaced by a finite domain smaller than the infinite integration domain of the partial differential equations. Results are shown on the following 4 figures. Figure 2, 2 show the surfaces \(\mathbf{u}_{i}\), i=1,2, functions of \(\mathbf{B}_{1},\mathbf{B}_{2}\). With this optimal quota, two sample trajectories where chosen randomly. Results are shown on Figure 3. Similar trajectories without quota are given for comparison on the left. ## 4 Quotas Computed by a Markovian Neural Network Here too, let us simplify the problem by forgetting the time dependency of the quota and represent each component of \(\mathbf{u}(\mathbf{B})\) by a Neural Network with \(K=2\) hidden layers of 50 neurons each and ReLU activations. Denote \(\mathbf{X}=(\mathbf{B},\mathbf{u})^{T}\), so that the NN represents Figure 3: Sample trajectories computed with the control from the Kolmogorov equation with and without quota for 2 species. The corresponding quotas are also shown. Without quota the biomass decays with time dangerously. also \(\mathbf{B}\mapsto\mathbf{u}(\mathbf{B})\): \[\mathbf{X}^{0}\text{ given },\mathbf{X}^{k+1}:=\sum_{k=0}^{K}\max\{\underline{ \mathbf{A}}^{k}\mathbf{X}^{k}+\mathbf{b}^{k},0\},\ \ \mathbf{u}_{NN}(\mathbf{B}):=\begin{bmatrix}I&0\end{bmatrix}\mathbf{X}^{K}.\] Then the coefficients \(\underline{\mathbf{A}}^{k}\) and \(\mathbf{b}^{k}\) are computed by minimizing \(J\) (the 'loss') defined by (5) with \(\mathbf{u}_{NN}\) in place of \(\mathbf{u}\). This method was proposed and tested in [13] but Pierre Bras [4] gave a convergence proof when a modified version (called Langevin) of the stochastic optimization algorithm ADAM is used. For the numerical tests we used his open source implementation with Keras (see [https://github.com/Bras-P/langevin-for-stochastic-control](https://github.com/Bras-P/langevin-for-stochastic-control)). The numerical results are shown on Figure 4 on the same problem described above. The converged value of the loss function is greater than the Kolmogorov solution which is typical because Supervised Learning does not compute the absolute minimum but on the other hand the solution proposed is usually more robust. The biggest asset of Supervised Learning is that it can be used with any number of species while Dynamic Programming cannot be used beyond 3 species. ## 5 A Simple Strategy Common sense tells us that if the biomass is decreasing (resp. increasing) then the quota should be made smaller (resp. bigger). In practical terms this means \[\mathbf{u}(t+\delta t)=\mathbf{u}(t)+\omega(\mathbf{B}(t)-\mathbf{B}(t-\delta t )). \tag{11}\] Figure 5 shows the results for the same problem as above with \(\omega=100\). This simple solution may stabilize the biomasses at their initial levels but it cannot bring them to a desire level different from the initial value. Furthermore, it does not account for the political cost of the quota, \(\mathbf{\alpha}\cdot\mathbf{u}\). Figure 4: Solution of the control problem with 2 species using \(\mathbf{u}(\mathbf{B})\) (called static) compared with using \(\mathbf{u}(\mathbf{B},t)\) (dynamic). The loss functions are displayed on the left. After optimization of the loss (left) \(J=-0.0315\) in the static case and \(0.15\) in the dynamic case. In the middled (static) and on the right (dynamic) two sample trajectories (blue and orange) and their control (green and red) are displayed. The results with dynamic controls are poor. ## 6 A Fishing Model with Quotas in the Open Sea ### A Behavioral model for fishermen All variables are now function of spatial \(\mathbf{x}\) and time \(t\). Recognizing that \(\nabla\mathbf{B}\) is a local indicator for a better fishing site, the position of a fishing boat \(\mathbf{Z}(t)\) is driven by \[\dot{\mathbf{Z}}(t)=U_{M}\frac{\nabla\mathbf{B}}{|\nabla\mathbf{B}|}|\mathbf{z} _{(t),t},\ \ B(0)=B^{0}. \tag{12}\] where \(U_{M}\) is the cruise speed of the boat. To be profitable the amount of fish caught should be greater that the operating cost, itself proportional to the square of the velocity of the boat, i.e. \[\boldsymbol{\gamma}\cdot\mathbf{B}(\mathbf{Z}(t),t)>U_{M}^{2},\ \text{ otherwise the fisherman returns home}. \tag{13}\] ### The Logistic equation for the Biomass Assume that fish move with a velocity \(\mathbf{v}\) and a small randomness \(\nu\). The velocity \(\mathbf{v}\) could be the sea current plus their own velocity to follow the plankton gradient \(\nabla P\) where \(P\) is the plankton biomass. Fishing depletes the fish population as before but only where fishing occurs. So if \(M\) is the number of boats, then at point \(\mathbf{x}\) of the domain studied \(\Omega\subset\mathbb{R}^{2}\), and time \(t\), the fish biomass \(\mathbf{B}(\mathbf{x},t)\) is driven by a PDE in \(\Omega\times(0,T)\), \[\partial_{t}\mathbf{B}+\nabla\cdot(\mathbf{v}(\nabla P)\mathbf{B})-\nu\Delta \mathbf{B}=\mathbf{B}\star\left(P\mathbf{r}-\sum_{1}^{M}\mathbf{u}(\mathbf{Z} ^{i},t)-\underline{\boldsymbol{\kappa}}\mathbf{B}\right),\ \ \ \mathbf{B}(0)=\mathbf{B}^{0}, \tag{14}\] with \(\partial\mathbf{B}/\partial\mathbf{n}=0\) on the border \(\Gamma\) of \(\Omega\) where \(\mathbf{n}\) is its outer normal to \(\Omega\). Plankton contributes to the reproductive welfare of fish by a positive factor for each species \(P\mathbf{r}\). The total catch is \(\mathbf{B}\star\mathbf{u}\); as before \(\kappa\) is the capacity matrix of the site. In practice it is strongly dependent on \(\mathbf{x}\) but in absence of information we ran the model with \(\kappa\) constant. Figure 5: Stabilization of the biomass of 2 species by the simple control of (11). The constraints \(\mathbf{u}_{1},\mathbf{u}_{2}\in[0.4,1.4]\) do not break the method in this case. The problem is \[\min_{\mathbf{u}\in\mathcal{U}}J:=\int_{0}^{T}\left(\int_{\Omega}|\mathbf{B}(t)- \mathbf{B}^{d}(t)|^{2}-\sum_{1}^{M}(\boldsymbol{\alpha}\cdot\mathbf{u}(\mathbf{Z }^{i},t)-\boldsymbol{\beta}[\mathbf{u}](\mathbf{Z}^{i},t))\right)\mathrm{d}t \tag{15}\] subject to (14) **Remark 1**.: _It may be feasible to replace (14) by a system equivalent at the limit \(\delta t\to 0\):_ \[\mathbf{B}(\mathbf{x},t) =\mathbf{B}(\mathbf{x}-\mathbf{v}(\mathbf{x},t)\delta t,t-\delta t) \tag{16}\] \[+\delta t\mathbf{B}\star\left[P\mathbf{r}-\sum_{1}^{M}\mathbf{u} (\mathbf{Z}^{i},t)-\underline{\boldsymbol{\kappa}}\mathbf{B}\right]_{| \mathbf{x},t-\delta t}+2\sqrt{\nu\delta t}\mathbf{N}_{0}^{1},\text{ for all }\mathbf{x},\] _The long time limit could be studied with the stationary Kolmogorov equation for the invariant measure of the process._ ### A Logistic Equation for the Plankton Letting the fish drift with the currents is too simple. If fish follows a plankton density \(P\) then \(\mathbf{v}\nabla\mathbf{B}\) in (14) is replaced by \(\nabla\cdot(\mathbf{B}\nabla P)\). Assume plankton is regenerated at rate one and eaten by some fish species at rate \(\mathbf{b}\). The logistic equation for \(P\) is: \[\partial_{t}P+\mathbf{v}\cdot\nabla P-\mu\Delta P=P(1-P-\mathbf{b}\cdot \mathbf{B}),\ \ \frac{\partial P}{\partial n}|_{\Gamma}=0\ or\ P|_{\Gamma}=0,\ \ P(0)=P^{0} \tag{17}\] where \(P^{0}(x)\) is the plankton density at initial time. The model assumes that in absence of fish the long time limit (the fishing site plankton capacity) of \(P\) is one. Here \(\mathbf{v}\) is the sea current velocity. Other models, perhaps more realistic, can be found in [15]. **Remark 2**.: If \(\mathbf{b}\cdot\mathbf{B}<1\), then \(P\) is positive and bounded by \(1-\mathbf{b}\cdot\mathbf{B}\), if it is initially so. Otherwise \(P\) may become negative and the model is no longer meaningful. **Remark 3**.: When \(c^{\prime}:=1-\mathbf{b}\cdot\mathbf{B}\) is constant and \(\mathbf{v}=0\) and \(\mu=0\), the solution of \(\dot{P}=P(c^{\prime}-P)\) is \(P=c^{\prime}\mathrm{e}^{c^{\prime}t}/(1+\mathrm{e}^{c^{\prime}t})\), and it tends to \(c^{\prime}\) when \(t\to\infty\). When \(\mu>0,\ \mathbf{v}=0\) and \(\Omega\) is bounded, then limit \(\lim_{t\to\infty}P=c^{\prime}\). ### Numerical Simulation Without Quota We ran the model with one species only but with plankton, with \(\Omega\) a portion of the Atlantic Ocean facing Senegal (see Figure 5), with the following parameters, \[T=2,\delta t=0.02,c=0.7,a=0.2,b=1,\mu=0.1,r=1,\kappa=1,K=100,U_{M}=2,\gamma=1.\] A random noise of variance \(\sigma=0.05\) is added to the position of the boats at each time iteration. Initialization is \[Q_{t=0}=0.05,\ \ P^{0}=[1-\frac{1}{40}((x-4)^{2}+(y-6)^{2})]^{+},\ \ B^{0}=[1-\frac{1}{40}((x-4)^{2}+(y-6)^{2})]^{+}.\] To obtain a meaningful sea current we set \[\mathbf{v}=10\cos(2\pi t)\nabla\psi\text{ where }\Delta\psi=0,\ \ \psi|_{\Gamma_{1}}=\mathbf{x}_{1}-6,\ \ \psi|_{\Gamma_{2}}=0.\] where \(\Gamma_{1}\) and \(\Gamma_{2}\) are the upper and lower boundaries of the domain. The following plots in Figure 7 show 1/ the initial position of the 50 boats on the coast and the level lines of \(B\) (left) and \(2P\) (right), 2/ their position and the values of \(B\) and \(P\) at time at 0.4, then 3/, 4/ are the same but at time 0.8 and 1.2. The integrals of \(P\) and \(B\) in \(\Omega\) are displayed on top of the plots of \(B\) and also on Figure 9.. We see that the fishing boats move towards the maximum zone of \(B\) and then spread because the biomass reduces drastically. Shortly after \(t=1\) the catch is too small for profit (see (13)) so the boats return to the coast and stay there until \(t=T\). ### Numerical Simulation with Quota All parameters are as above but now \(Q\) is adjusted by \[Q(t+\delta t)=Q(t)+\delta t\int_{\Omega}(B_{t}-B_{t-\delta t})\mathrm{d}x. \tag{18}\] We see on Figure 8 that the behavior is very different with quota. The boats move to the maximum zone of \(B\) but stay there because the quota prevents to fishermen from depleting the biomass. The boats stay at the same spot till \(B\) plateaus and the boat positions spread due to the noise added to \(\mathbf{Z}\) at each time step. Figure 7: From left to right and top to bottom: Level lines of fish (left) and plankton (right) biomass at times \(t=0.,0.4,0.8,1.2\). The color map legends apply to \(B\) and \(P/2\). The total biomass and plankton are indicated above the B-plots. The positions of the 50 fishing boats are indicated by small red squares. In this case without quota the fishermen fish extensively until \(t=1\) and then run out of resource (fishing is no longer profitable) and go back to the coast. This is seen too on Figure 9 which shows the evolution with time of the mean of \(B\), the mean of \(P\) and the mean of \(Q\). Figure 8: From left to right and top to bottom: Level lines of fish (left) and plankton (right) biomass at times \(0,.,0.4,1.2,1.6\). The color map legends apply to \(B\) and \(P/2\). The total biomass and plankton are indicated above the B-plots. The positions of the 50 fishing boats are indicated by small red squares. In this case with quota the fishermen sail to the maximum of the biomass but as the catch is limited by the quota, \(\vec{B}\) stays above the level of profitability at all time. Later \(B\) plateaus over a large area in the center of the domain and so the fishermen to not correct the spatial scattering due to the noise. Figure 9: Evolution of the total biomass \(\int_{\Omega}B\) and scaled total plankton \(\int_{\Omega}P/2\) with and without quota. Notice (on the right) that the quota strategy (18) is very efficient at maintaining the biomass constant. The quotas are displayed in green, it is constant by hypothesis on the left figure. ## Conclusion With the single site model of [13], we have confronted two methods to adjust the quotas for single sites fisheries and shown that Supervised Learning does fairly well on a problem with 2 species. For more than 2 species only Supervise Learning is applicable. Then we have put some foundation stones for a distributed model for fishing in the Atlantic ocean facing Senegal and shown that a common sense strategy to keep the biomass constant works. We have seen that the effect of quotas on the fishing strategy of fishermen is striking. A more sophisticated strategy is yet to be found for the control of the biomasses in large areas like the Atlantic ocean. Whatever has been said for fisheries translates to several other population control problems but once again these are theoretical case studies which are far from applicable directly to real life situations. ## Acknowledgement We thank P. Auger and M. Lauriere for their helpful comments; All PDE computations have been done with the public domain FreeFEM++ [10].
2309.14657
Field Testing of a Stochastic Planner for ASV Navigation Using Satellite Images
We introduce a multi-sensor navigation system for autonomous surface vessels (ASV) intended for water-quality monitoring in freshwater lakes. Our mission planner uses satellite imagery as a prior map, formulating offline a mission-level policy for global navigation of the ASV and enabling autonomous online execution via local perception and local planning modules. A significant challenge is posed by the inconsistencies in traversability estimation between satellite images and real lakes, due to environmental effects such as wind, aquatic vegetation, shallow waters, and fluctuating water levels. Hence, we specifically modelled these traversability uncertainties as stochastic edges in a graph and optimized for a mission-level policy that minimizes the expected total travel distance. To execute the policy, we propose a modern local planner architecture that processes sensor inputs and plans paths to execute the high-level policy under uncertain traversability conditions. Our system was tested on three km-scale missions on a Northern Ontario lake, demonstrating that our GPS-, vision-, and sonar-enabled ASV system can effectively execute the mission-level policy and disambiguate the traversability of stochastic edges. Finally, we provide insights gained from practical field experience and offer several future directions to enhance the overall reliability of ASV navigation systems.
Philip Huang, Tony Wang, Florian Shkurti, Timothy D. Barfoot
2023-09-26T04:27:41Z
http://arxiv.org/abs/2309.14657v2
# Field Testing of a Stochastic Planner for ASV Navigation Using Satellite Images ###### Abstract We introduce a multi-sensor navigation system for autonomous surface vessels (ASV) intended for water-quality monitoring in freshwater lakes. Our mission planner uses satellite imagery as a prior map, formulating offline a mission-level policy for global navigation of the ASV and enabling autonomous online execution via local perception and local planning modules. A significant challenge is posed by the inconsistencies in traversability estimation between satellite images and real lakes, due to environmental effects such as wind, aquatic vegetation, shallow waters, and fluctuating water levels. Hence, we specifically modelled these traversability uncertainties as stochastic edges in a graph and optimized for a mission-level policy that minimizes the expected total travel distance. To execute the policy, we propose a modern local planner architecture that processes sensor inputs and plans paths to execute the high-level policy under uncertain traversability conditions. Our system was tested on three km-scale missions on a Northern Ontario lake, demonstrating that our GPS-, vision-, and sonar-enabled ASV system can effectively execute the mission-level policy and disambiguate the traversability of stochastic edges. Finally, we provide insights gained from practical field experience and offer several future directions to enhance the overall reliability of ASV navigation systems. ## 1 Introduction Autonomous Surface Vessels (ASVs) have seen increasing attention as a technology to monitor rivers, lakes, coasts, and oceans in recent years (Ang et al., 2022; Cao et al., 2020; Dash et al., 2021; Dunbabin and Marques, 2012; Ferri et al., 2015; Madeo et al., 2020; MahmoudZadeh et al., 2022; Odetti et al., 2020). A fundamental challenge to the wide adoption of ASVs is the ability to navigate safely and autonomously in uncertain environments, especially for long durations. For example, many existing ASV systems require the user to precompute a waypoint sequence. The robot then visits these target locations on a map and attempts to execute the path online (Tang et al., 2020; Vasilj et al., 2017). However, disturbances such as strong winds, waves, unseen obstacles, aquatic plants that may or may not be traversable, and even simply changing visual appearances in a water environment are challenging for ASV navigation (Fig. 1). Many potential failures in robot perception and control systems may also undermine the mission's overall success. Our long-term goal is to use an ASV to monitor lake environments and collect water samples for scientists. A requirement for achieving this, and the primary focus of this paper, is to ensure robust global and safe local navigation. To enhance the robustness of the overall system, we identify waterways that are prone to local blockage as stochastic edges and plan mission-level policies on our high-level map. Uncertainties that arise during policy execution are handled by the local planner. One planning framework that is suitable for modelling uncertain paths is the Canadian Traveller Problem (CTP) (Papadimitriou and Yannakakis, 1991), a variant of the shortest-path planning problem for an uncertain road network. The most significant feature in a CTP graph is the stochastic edge, which has a probability of being blocked. The state of any stochastic edge can be disambiguated by visiting the edge. Once the state has been visited and classified as traversable or not, it remains the same. In our prior work (Y. Huang et al., 2023), we proposed a navigation framework -- the Partial Covering Canadian Traveller Problem (PCCTP) -- to solve a mission-planning problem in an uncertain environment. The framework used a stochastic graph derived from coarse satellite images to plan an adaptive policy that visits all reachable target locations. Stochasticity in the graph represents possible events where a water passage between two points is blocked due to changing water levels, strong wind, and other unmapped obstacles. The optimal policy is computed offline with a best-first tree-search algorithm. We evaluated our solution method on 1052 Canadian lakes selected from the _CanVec Series_ Ontario dataset (Natural Resources Canada, 2019) and showed it can reduce the total distance to visit all targets and return. Figure 1: Real-world challenges that motivate the use of stochastic edges in our planning setup. This article extends our previous work as described by Y. Huang et al. (2023) in two ways. First, we made significant improvements to our local planner responsible for tracking the global path and handling any locally occurring uncertainties such as obstacles. Our ASV system estimates the waterline using a learned network and a stereo camera and detects underwater obstacles using a mechanically scanning sonar. We fuse both sensors into an occupancy grid map, facilitating a sampling-based local motion planner to compute a pathway to track the global path while avoiding local obstacles. As in our previous research, we use a timer to distinguish stochastic edges and select appropriate policy branches based on the traversability assessment of the stochastic edges. Secondly, we have validated the overall system on three distinct missions, two of which are new. Our field trials show that our ASV reliably and autonomously executes precomputed policies from the mission planner under varying operating conditions and amid unmapped obstacles, even when the local planner does not perfectly map the local environment or optimally steer the ASV. We have also tested the local planner through an ablation study to identify bottlenecks in localization, mapping, and sensor fusion in the field. Our lessons learned from our field tests are detailed, and we believe this work will serve as a beneficial reference for any future ASV systems developed for environmental monitoring. ## 2 Related Works Autonomous ASV navigation for environmental monitoring requires domain knowledge from multiple fields, such as perception, planning, and overall systems engineering. In this section, we present a brief survey of all these related fields and discuss the relationship to our methods and any remaining challenges. **Satellite Imagery Mapping** First, mission planning in robotics often requires a global, high-level map of the operating environment. Remote sensing is a popular technique to build maps and monitor changes in water bodies around the world because of its efficiency (C. Huang et al., 2018; X. Yang et al., 2017). The Figure 2: A high-level overview of our navigation framework for water sampling. Given a set of user-selected target locations (red icons), our algorithm identifies stochastic edges from coarse satellite images and plans a mission-level policy for ASV navigation. Aerial views of two stochastic edges from real-world experiments are shown here. _JRC_ Global Surface Water dataset (Pekel et al., 2016) maps changes in water coverage from 1984 to 2015 at a 30 m by 30 m resolution, produced using _Landsat_ satellite imagery. Since water has a lower reflectance in the infrared channel, an effective method is to calculate water indices, such as Normalized Difference Water Index (NDWI) (McFeeters, 1996) or MNDWI (Xu, 2006), from two or more optical bands (e.g., green and near-infrared). However, extracting water data using a threshold in water indices can be nontrivial due to variations introduced by clouds, seasonal changes, and sensor-related issues. To address this, Li and Sheng (2012) and Feyisa et al. (2014) have developed techniques to select water-extraction thresholds adaptively. Our approach aggregates water indices from historical satellite images to estimate probabilities of water coverage (see Sec. 3.3). Overall, we argue that it is beneficial to build stochastic models of surface water bodies due to their dynamic nature and imperfect knowledge derived from satellite images. **Global Mission Planning** The other significant pillar of building an ASV navigation system is mission planning. First formulated in the 1930s, the Travelling Salesman Problem (TSP) (Laporte, 1992) studies how to find the shortest-path in a graph that visits every node once and returns to the starting node. Modern TSP solvers such as the _Google_ OR-tools (Perron and Furnon, 2023) can produce high-quality approximate solutions for graphs with about 20 nodes in a fraction of a second. Other variants have also been studied in the optimization community, such as the Travelling Repairman Problem (Afrati et al., 1986) that minimizes the total amount of time each node waits before the repairman arrives, and the Vehicle Routing Problem (Toth and Vigo, 2002) for multiple vehicles. In many cases, the problem graphs are built from real-world road networks, and the edges are assumed to be always traversable. In CTP (Papadimitriou and Yannakakis, 1991), however, edges can be blocked with some probability. The goal is to compute a policy that has the shortest expected path to travel from a start node to a single goal node. CTP can also be formulated as a Markov Decision Process (Bellman, 1957) and solved optimally with dynamic programming (Polychronopoulos et al., n.d.) or heuristic search (Aksakalli et al., 2016). The robotics community has also studied ways in which the CTP framework can be best used in path planning (Ferguson et al., 2004; Guo and Barfoot, 2019). Our problem setting, the Partial Covering Canadian Traveller Problem (PCCTP), lies at the intersection of TSP and CTP, where the goal is to visit a partial set of nodes on a graph with stochastic edges. A similar formulation, known as the Covering Canadian Traveller Problem (CCTP) (Liao and Huang, 2014), presents a heuristic, online algorithm named Cyclic Routing (CR) to visit every node in a complete \(n\)-node graph with at most \(n-2\) stochastic edges. A key distinction between CCTP and our setting is that CCTP assumes all nodes are reachable, whereas in PCCTP, the robot may give up on unreachable nodes located behind an untraversable edge. **ASV Systems** In recent years, more ASV systems and algorithms for making autonomous decisions to monitor environments have been built. Schiaretti et al. (2017) classify the autonomy level for ASVs into 10 levels based on control systems, decision making, and exception handling. Many works consider the mechanical, electrical, and control subsystems of their ASV designs (Ang et al., 2022; Ferri et al., 2015; Madeo et al., 2020). Dash et al. (2021) validated the use and accuracy of deploying ASVs for water-quality modelling by comparing the data collected from ASVs against independent sensors. Two examples of vertically integrated autonomous water-quality monitoring systems using ASVs are presented by H.-C. Chang et al. (2021) and Cao et al. (2020). In contrast, our main contribution is a robust mission-planning framework that is complementary to existing designs of ASV systems. Finally, informative path planning is another orthogonal area where the robot relies on a probabilistic model to identify targets that maximize information gain; Bai et al. (2021) reviews this topic. **Local Motion Planning** Path planning for navigation and obstacle avoidance is a comprehensive field that has been extensively studied (Sanchez-Ibanez et al., 2021). The primary purpose of the local planner in this project is to successfully identify and follow a safe path that tracks the global path while averting locally detected obstacles in real-time. Sampling-based motion planners such as RRT* (Karaman and Frazzoli, 2011) and BIT* (Gammell et al., 2015) are favourable, owing to their probabilistically complete nature and proven asymptotic optimality given the right heuristics. Our local motion planner is based on Sehn et al. (2023), a variant of the sampling-based planner designed to follow a reference path. Using a new edge-cost metric and planning in the curvilinear space, their proposed planner can incrementally update its path to avoid new or moving obstacles without replanning from the beginning while minimizing deviation to the global reference path. Search-based algorithms, such as D* lite (Koenig & Likhachev, 2002) and Field D* (Ferguson & Stentz, 2007), commonly used in mobile robots and autonomous vehicles, operate on a discretized 2D grid and employ a heuristic to progressively locate a path from the robot's present location to the intended destination. Subsequently, the optimal solution from the path planning is submitted to a low-level controller tasked with calculating the necessary velocities or thrusts in mobile robotics systems. Parallel to the planning and control framework, other models such as direct tracking with a constrained model-predictive controller (Ji et al., 2016) and training policies for path tracking through reinforcement learning (Shan et al., 2020) have emerged as new areas of research in recent years. **Perception** Lastly, our navigation framework requires local perception modules to clarify uncertainties in our map and avoid obstacles. Vision-based obstacle detection and waterline segmentation have also received renewed attention in the marine robotics community. Recent contributions have largely focused on detecting or segmenting obstacles from RGB images using neural networks (Lee et al., 2018; Qiao et al., 2022; Steccanella et al., 2020; Tersek et al., 2023; J. Yang et al., 2019). A substantial amount of research has been dedicated to identifying waterlines (Steccanella et al., 2020; Steccanella et al., 2019; Yin et al., 2022; Zhou et al., 2022) since knowing the whereabouts of navigable waterways can often be sufficient for navigation. Several annotated datasets collected in different water environments, such as inland waterways (Cheng et al., 2021) and coastal waters (Bovcon et al., 2019, 2021) have been published by researchers. Foundational models for image segmentation, such as 'Segment Anything' (Kirillov et al., 2023), have also gathered increasing attention due to their incredible zero-shot generalization ability and are being used in tracking (Maalouf et al., 2023) or remote sensing tasks (Chen et al., 2023). Sonar is another popular sensor that measures distance and detects objects on or under water surfaces using sound waves. Heidarsson and Sukhatme (2011a) pioneered the use of a mechanical scanning sonar for ASV obstacle detection and avoidance and demonstrated that obstacles generated from sonar could serve as labels for aerial images (Heidarsson & Sukhatme, 2011b). Karoui et al. (2015) focused on detecting and tracking sea-surface objects and wakes from a forward-looking sonar image. Occupancy-grid mapping, a classic probabilistic technique for mapping the local environment, was used to fuse measurements from sonars and stereo cameras on a mobile ground robot (Elfes, 1989). For our perception pipeline, we combine the latest advances in computer vision, large datasets from the field, and traditional filtering techniques to make the system robust in real-world operating conditions. Despite advances, accurate sensor fusion of above-water stereo cameras and underwater sonar for precise mapping on an ASV remains a formidable research challenge. ## 3 Global Mission Planner In this section, we will describe the mathematical formulation of the planning problem and present a detailed breakdown of our algorithm. ### The Problem Formulation We are interested in planning on a graph representation of a lake where parts of the water are stochastic (i.e., uncertain traversability). Constructing such a graph using all pixels of satellite images is impractical since images are very high-dimensional. Thus, we extend previous works from CTP (Guo & Barfoot, 2019; Liao & Huang, 2014; Papadimitriou & Yannakakis, 1991) and distill satellite images into a high-level graph \(G\) where some stochastic edges \(e\) may be untraversable with probability \(p\). The state of a stochastic edge can be disambiguated only when the robot traverses the edge in question. The robot begins at the starting node \(s\) and is tasked to visit all reachable targets \(J\) specified by the user (e.g., scientists) before returning to the starting node. If some target nodes are unreachable because some stochastic edges block them from the starting node, the robot may give up on these sampling targets. We call this problem the Partial Covering Canadian Traveller Problem (PCCTP). Fig. 3 is a simplified graph representation of a lake with two stochastic edges. The state of the robot is defined as a collection of the following: a list of target nodes that it has visited, the current node it is at, and its knowledge about the stochastic edges. A policy sets the next node to visit, given the current state of the robot. The objective is to find the optimal policy \(\pi^{*}\) that minimizes the expected cost to cover all reachable targets. In the example problem (Fig. 3), the robot can either disambiguate the left or right stochastic edge to reach the sampling location. Formally, we define the following terms: * \(G=(V,E)\) is an undirected graph. * \(c:E\rightarrow\mathbb{R}_{\geq 0}\) is the cost function for an edge, which is the length of the shortest waterway between two points. * \(p:E\rightarrow[0,1]\) is the blocking probability function. An edge with 0 blocking probability is deterministic; otherwise, it is stochastic. * \(k\) is the number of stochastic edges. * \(s\in V\) is the start and return node. * \(J\subseteq V\) is the subset of target nodes to visit. There are \(|J|\leq|V|\) goal nodes. * \(I=\{\text{A},\text{T},\text{U}\}^{k}\) is an information vector that represents the robot's knowledge of the status of all \(k\) stochastic edges. A, T, and U stand for ambiguous, traversable, and untraversable, respectively. * \(S\subseteq J\) is the subset of target nodes that the robot has visited. * \(a\) is the current node the robot is at. * \(x=(a,S,I)\) is the state of the robot. \(a\) is the current node, \(S\) is the set of visited targets, and \(I\) is the current information vector. * \(\pi^{*}\) is the optimal policy that minimizes the cost \(\mathbb{E}_{w\sim p(w)}\left[\phi\left(\pi\right)\right]\), where \(\phi\) is cost functional of the policy \(\pi\) and \(w\) is a possible world of stochastic graph, where each stochastic edge is assigned a traversability state. Figure 3: A toy example graph shown on the water mask generated from _Sentinel-2_ satellite images, with the corresponding graph on an aerial view image shown on the right. The planned paths between nodes are simplified for ease of understanding. The number beside each edge of the high-level graph is the path length in km, and the number in brackets is the blocking probability, which is computed using the probability of water coverage in each pixel (represented by its shade of orange) on the path. Note that traversable and ambiguous edges are the state before any action. ### Exactly Solving PCCTP with AO* We extend the AO* search algorithm (Aksakalli et al., 2016) used in CTP to find exact solutions to our problem. AO* is a heuristic, best-first search algorithm that iteratively builds an AO tree to explore the state space until the optimal solution is found. In this section, we will first explain how to use an AO tree to represent a PCCTP instance, then break down how to use AO* to construct the AO tree containing the optimal policy. AO Tree Representation of PCCTPThe construction of the AO tree is a mapping of all possible actions the robot can take and all possible disambiguation outcomes at every stochastic edge. Following Aksakalli et al. (2016), an AO tree is a rooted tree \(T=(N,A)\) with two types of nodes and arcs. A node \(n\in N\) is either an OR node or an AND node; hence the node set \(N\) can be partitioned into the set of OR nodes \(N_{O}\) and the set of AND nodes \(N_{A}\). Each arc in \(A\) represents either an action or a disambiguation outcome and is not the same as \(G\)'s edges (\(A\neq E\)). For all \(n\in N\), a function \(c:A\rightarrow\mathbb{R}_{\geq 0}\) assigns the cost to each arc. Also, for all \(n\in N_{A}\), a function \(p:A\rightarrow[0,1]\) assigns a probability to each arc. A function \(f:N\rightarrow\mathbb{R}_{\geq 0}\) is the cost-to-go function if it satisfies the following conditions: * if \(n\in N_{A}\), \(f(n)=\sum_{n^{\prime}\in N(n)}[p(n,n^{\prime})\times(f(n^{\prime})+c(n,n^{ \prime}))]\), * if \(n\in N_{O}\), \(f(n)=\min_{n^{\prime}\in N(n)}[f(n^{\prime})+c(n,n^{\prime})]\), * if \(n\in N\) is a leaf node, \(f(n)=0\). Now, we can map each node and edge such that the AO tree represents a PCCTP instance. Specifically, each node \(n\) is assigned a label \((n.a,n.S,n.I)\) that represents the state of the robot. \(n.a\) is the current node, \(n.S\) is the set of visited targets, and \(n.I\) is the information vector containing the current knowledge of the stochastic edges. The root node \(r\) is an OR node with the label \((s,\emptyset,\text{AA}...\text{A})\), representing the starting state of the robot. An outgoing arc from an OR node \(n\) to its successor \(n^{\prime}\) represents an action, which can be either visiting the remaining targets and returning to the start or going to the endpoint of an ambiguous edge via some target nodes along the way. An AND node corresponds to the disambiguation event of a stochastic edge, so it has two successors describing both possible outcomes. Each succeeding node of an OR node is either an AND node or a leaf node. A leaf node means the robot has visited all reachable target nodes and has returned to the start node. Each arc \((n,n^{\prime})\) is assigned a cost \(c\), which is the length of travelling from node \(n.a\) to node \(n^{\prime}.a\) while visiting the subset of newly visited targets \(n^{\prime}.S\setminus n.S\) along the way. For all outgoing arcs of an AND node, the function \(p\) assigns the traversability probability for the stochastic edge. The cost of disambiguating that edge is its length. Once the complete AO tree is constructed, the optimal policy is the collection of nodes and arcs included in the calculation of the cost-to-go from the root of the tree, and the optimal expected cost is \(f(r)\). For example, the optimal action at an OR node \(n\) is the arc \((n,n^{\prime})\) that minimizes the cost-to-go from \(n\), while the next action at an AND node depends on the disambiguation outcome. However, constructing the full AO tree from scratch is not practical since the space complexity is exponential with respect to the number of stochastic edges. Instead, we use the heuristic-based AO* algorithm, explained below. PCCTP-AO* AlgorithmOur PCCTP-AO* algorithm (Algorithm 1 and 2) is largely based on the AO* algorithm (C. L. Chang and Slagle, 1971; Martelli and Montanari, 1978). AO* utilizes an admissible heuristic \(h:N\rightarrow\mathbb{R}_{\geq 0}\) that underestimates the cost-to-go \(f\) to build the AO tree incrementally from the root node until the optimal policy is found. The algorithm expands the most promising node in the current AO tree based on a heuristic and backpropagates its parent's cost recursively to the root. This expansion-backpropagation process is repeated until the AO tree includes the optimal policy. One key difference between AO* and PCCTP-AO* is that the reachability of a target node may depend on the traversability of a set of critical stochastic edges connecting the target to the root. If a target \(j\in J\) is disconnected from the current node \(a\) when all the stochastic edges from a particular set are blocked, then this set of edges is critical. For example, the two stochastic edges in the top-right graph of Fig. 3 are critical because target node 1 would be unreachable if both edges were blocked. Thus, a simple heuristic that assumes all ambiguous edges are traversable may overestimate the cost-to-go if skipping unreachable targets reduces the overall cost. Alternatively, we can construct the following relaxed problem to calculate the heuristic. If a stochastic edge is not critical to any target, we still assume it is traversable. Otherwise, we remove the potentially unreachable target for the robot and instead disambiguate one of the critical edges of the removed target. The heuristic is the cost of the best plan that covers all definitively reachable targets and disambiguates one of the critical stochastic edges. For example, consider computing the heuristic at starting node 0 in Fig. 5. The goal is to visit both nodes 1 and 2 if they are reachable. Node 1 is always reachable; hence we assume it is traversable in the relaxed problem. Node 2 may be unreachable, so we remove the stochastic edge (4, 2) and ask the boat to visit Node 4 instead in the relaxed problem. This heuristic is always admissible because the path to disambiguate a critical edge is always a subset of the eventual policy. We can compute this by constructing an equivalent generalized travelling salesman problem (Noon & Bean, 1993) and solve it with any optimal TSP solver. Fig. 4 shows the result of applying PCCTP-AO* to the example problem in Fig. 3. The returned policy (coloured in green nodes) tries to disambiguate the closer stochastic edge \((2,3)\) to reach target node 1. Note that the AO* algorithm stops expanding as soon as the lower bound of the cost of the right branch exceeds that of the left branch. This guarantees the left branch has a lower cost and, thus, is optimal. ### Estimating Stochastic Graphs From Satellite Imagery We will now explain our procedure to estimate the high-level stochastic graph from satellite images. **Water Masking** Our first step is to build a water mask of a water area across a specific period (e.g., Figure 4: The final AO tree after running PCCTP-AO* on the example in Fig. 3. The label inside each node is the current state of the robot. OR nodes are rectangles, and AND nodes are ellipses. Nodes that are part of the final policy are green, extra expanded nodes are yellow, and leaf nodes terminated early are red. Some red nodes that are terminated early are left out in this figure for simplicity. 30 days). We use the _Sentinel-2_ Level 2A dataset (Drusch et al., 2012), which has provided multispectral images at 10 m by 10 m resolution since 2017. Each geographical location is revisited every five days by a satellite. We then select all satellite images in the target spatiotemporal window and filter out the cloudy images using the provided cloud masks. For each image, we calculate the Normalized Difference Water Index (NDWI) (McFeeters, 1996) for every pixel using green and near-infrared bands. However, the distribution of NDWI values varies significantly across different images over time. Thus, we separate water from land in each image and aggregate the indices over time. We then fit a bimodal Gaussian Mixture Model on the histogram of NDWIs to separate water pixels from non-water ones for each image. We average all water masks over time to calculate the probabilistic water mask at the target spatiotemporal window. Each pixel on the final mask represents the probability of water coverage on this 10 m by 10 m area. If a pixel may or may not have water, we call it a stochastic pixel. Finally, we identify the boundary of all deterministic water pixels. Fig. 6 shows an overview of these steps. **Stochastic Edge Detection: Pinch Points** We can now identify those stochastic water paths (i.e., narrow straits, pinch points (Ferguson et al., 2004)) that are useful for navigation. A pinch point is a sequence of stochastic water pixels connecting two parts of topologically far (or distinct) but metrically close water areas. Essentially, this edge is a shortcut connecting two points on the water boundary that are otherwise far away or disconnected. To find all such edges, we iterate over all boundary pixels, test each shortest stochastic water path to nearby boundary pixels, and include those stochastic paths that are shortcuts. The blocking probability of a stochastic edge is one minus the minimum water probability along the path. Since this process will produce many similar stochastic edges around the same narrow passage, we run DBSCAN (Ester et al., 1996) and only choose the shortest stochastic edge within each cluster. **Stochastic Edge Detection: Windy Edges** The second type of stochastic edges are those with strong wind. In practice, when an ASV travels on a path far away from the shore, there is a higher chance of running into a strong headwind or wave, making the path difficult to traverse. We define an edge as a windy edge if it is 200 m away from the water boundary at some point and assign a small probability for the event where the wind blocks the edge. **Path Generation** The next step is to construct the geo-tagged path and calculate all edge costs in the high-level graph. The nodes in the high-level graph are composed of all sampling targets, endpoints of stochastic edges, and the starting node. We run A* (Hart et al., 1968) on the deterministic water pixels to calculate the shortest-path between every pair of nodes except for the stochastic edges found in the previous step. Since the path generated by A* connects neighbouring pixels, we smooth them by downsampling. Then, we can discard any unnecessary stochastic edges if they do not reduce the distance between a pair of nodes. For Figure 5: Example of how we relax the original problem graph to calculate the heuristic \(h(n)\). At a high level, we construct a relaxed problem by removing all stochastic edges and unreachable nodes from the original graph. Then, the heuristic of the original problem is the cost of the relaxed problem and is always admissible. Figure 6: Overview of water-masking steps that calculate water probabilities from satellite images. Pixels with lower water probabilities are shaded more orange. every stochastic edge, we loop over all pairs of nodes and check if setting the edge traversable would reduce the distance between the pair of nodes. Finally, we check if each deterministic edge is a windy edge and obtain the high-level graph used in PCCTP. In summary, we estimate water probabilities from historical satellite images with adaptive NDWI indexing and build a stochastic graph connecting all sampling locations and pinch points. The resulting compact graph representing a PCCTP instance can be solved optimally with AO* heuristic search. ## 4 Simulations In this chapter, we will verify the efficacy of our PCCTP planning framework in a large-scale simulation of mission-planning on real lakes. ### Testing Dataset We evaluate our mission-planning framework on Canadian lakes selected from the _CanVec Series_ Ontario dataset (Natural Resources Canada, 2019). Published by _Natural Resources Canada_, this dataset contains geospatial data of over 1.1 million water bodies in Ontario. Considering a practical mission length, lakes are filtered such that their bounding boxes are 1-10 km by 1-10 km. Then, water masks of the resulting 5190 lakes are generated using _Sentinel-2_ imagery across 30 days in June 2018-2022 (Drusch et al., 2012). We then detect any pinch points on the water masks and randomly sample five different sets of target nodes on each lake, each with a different number of targets. The starting locations are sampled near the shore to mimic real deployment conditions. Furthermore, we generate high-level graphs and windy edges from the water mask. Graphs with no stochastic edges are removed as well as any instances with more than nine stochastic edges due to long run times. Ultimately, we evaluate our algorithm on 2217 graph instances, which come from 1052 unique lakes. Figure 7: Results of PCCTP and baselines in simulation. When there are no windy edges, all stochastic edges are pinch points. We only show the runtime of PCCTP because all the baselines are online methods. ### Baseline Planning Algorithms The simplest baseline is an online greedy algorithm that always goes to the nearest unvisited target node assuming all ambiguous edges are traversable. For a graph with \(k\) stochastic edges, we simulate all \(2^{k}\) possible worlds, each with a different traversability permutation, and evaluate our greedy actor on each one. The greedy actor recomputes a plan at every step and queries the simulator if it encounters a stochastic edge to disambiguate it. Also, it checks the reachability of every target node upon discovering an untraversable edge and gives up on any unreachable targets. A more sophisticated baseline is the optimistic TSP algorithm. Instead of always going to the nearest target node, it computes the optimal tour to visit all remaining targets assuming all ambiguous edges are traversable. Similar to the greedy actor, TSP recomputes a tour at every step and may change its plan after encountering an untraversable edge. The expected cost is computed via a weighted sum on all \(2^{k}\) possible worlds. In contrast to PCCTP, both baselines require onboard computation to update their optimistic plans, whereas PCCTP precomputes a single optimal policy that is executed online. Lastly, we modify the CR algorithm, originally a method for CCTP (Liao and Huang, 2014), to solve PCCTP. CR precomputes a cyclic sequence to visit all target nodes using the Christofides algorithm (Christofides, 1976) and tries to visit all target nodes in multiple cycles while disambiguating stochastic edges. If a target node turns out to be unreachable, we allow CR to skip this node in its traversal sequence. ### Results Fig. 6(a) compares our algorithm against all baselines. To measure the performance across various graphs of different sizes, we use the average expected regret over all graphs. The expected regret of a policy \(\pi\) for one graph \(G\) is defined as \[\mathbb{E}_{w}[\text{Regret}(\pi)]=\sum_{w}[p(w)(\phi(\pi,w)-\phi(\pi^{p},w))],\] where \(\pi^{p}\) is a privileged planner with knowledge of the states of all stochastic edges, \(\phi\) is the cost functional, and \(w\) is a possible world of the graph. PCCTP precomputes the optimal policy in about 50 seconds on average in our evaluation, and there is no additional cost online. Compared to the strongest baseline (TSP), our algorithm saves the robot about 1%(50m) of travel distance on average and 15%(1.8km) in the extreme case. Although the improvement is marginal on average, our planner can still be beneficial in edge cases (e.g., high blocking probability, long stochastic edges). The performance of PCCTP may be further enhanced if the estimated blocking probabilities of the stochastic edges are refined based on historical data. We also find that the performance gap between our algorithms and baselines becomes more significant with more windy edges. In fact, if the only type of stochastic edges in our graph is pinch points (i.e., the number of windy edges is 0), the performance gap is almost negligible between PCCTP and the optimistic TSP baseline. The main reason is that most pinch points only reduce the total trip distance by hundreds of meters on a possible world. Pinch points are most likely to be found either on the edges of a lake or as the only water link connecting two water bodies. In the first case, these pinch points are unlikely to be a big shortcut. As for the latter case, if the pinch point is the only path connecting the starting location to a target node, disambiguating this edge has to be part of the policy. On the other hand, windy edges passing through the centre of a lake are often longer, and the gap between the optimal and suboptimal policy is much more significant. The worst-case complexity of our optimal search algorithm is \(O(2^{k})\) with respect to the number of stochastic edges \(k\). The median runtime of our algorithm, implemented in Python, is less than one second, and 99% of the instances run under 3 minutes. However, in rare cases with eight or ten nodes, PCCTP can take up to ten hours using our unoptimized implementation. Note that the runtime can be considerably improved when our implementation is rewritten in a more efficient language, such as C++. More importantly, we argue that this one-time cost can occur offline before deploying the robot into a water-sampling mission. Although the worst-case runtime of the AO* algorithm can increase exponentially as the graph increases in size, the number of target locations in each graph cannot grow infinitely for real-world water-sampling missions. Hence, the runtime of PCCTP is not a concern for practical applications. ## 5 Autonomous Navigation System This section will explain our local navigation framework in detail and how the robot can execute the mission-level policy and safely follow its planned trajectory. ### Stochastic Edge Disambigutaion One crucial aspect required for fully autonomous policy execution is the capacity to disambiguate stochastic edges. Our approach is to build a robust autonomy framework (Fig. 8) that relies less on lower-level components such as perception and local planning to execute a policy successfully. In more general terms, the mission planner precomputes the navigation policies from satellite images given user-designated sampling locations. During a mission, the robot will try to follow the global path published by the policy. Sensor inputs from a stereo camera and sonars are processed and filtered via a local occupancy-grid mapper. The local planner then tries to find a path in the local frame that tracks the global plan and avoids any obstacles detected close to the future path of the robot. When the robot is disambiguating a stochastic edge, the policy executor will independently decide the edge's traversability based on the GPS location of the robot and a timer. A stochastic edge is deemed traversable if the robot reaches the endpoint of the prescribed path of this edge within the established time limit. If it fails to do so, the edge is deemed untraversable. There is no explicit traversability check on an ambiguous stochastic edge, such as a classifier or a local map. The timer allows us to address complications we cannot directly sense, such as heavy prevailing winds or issues with the local planner. Following this, the executor branches into different policy cases depending on the outcome of the disambiguation. ### Terrain Assessment with Stereo Camera An experienced human paddler or sailer can easily estimate traversability in a lake by visually distinguishing water from untraversable terrains, obstacles, or any dynamic objects. We use semantic information from RGB video streams and neural stereo disparity maps to estimate traversable waters in front of the robot and identify obstacles. We learn a water segmentation network and bundle it with a temporal filter to estimate the waterline in image space and remove outliers. The estimated waterline is then projected to 3D using the disparity map and used to update the occupancy grid. We provide more details in the following sections. #### 5.2.1 Water Segmentation Network A robust neural network relies on a large and diverse dataset. The characteristics of water's appearance exhibit considerable variation contingent on factors such as wind, reflections, and ambient brightness. Yet, the stereo camera falters in difficult lighting conditions due to the lack of dynamic range, culminating in inadequately exposed images and the emergence of artifacts such as shadows, lens flare, and noises. To navigate this challenge, applying data augmentation techniques such as colour-jittering and CutMix (Yun et al., 2019) during training greatly enhances out-of-distribution performance, yielding superior generalization in challenging weather conditions as in Fig. 9. Essentially, regions of one training image are cut and overlaid onto another, as demonstrated in Fig. 10 to encourage the model to learn more diverse and challenging features while also expanding our limited dataset. Another problem arises as manual annotation of thousands of images is impractical due to its labour and time intensity. Thus, since semantic segmentation is a well-explored research area, we used a pretrained SAM (Segment Anything Model) to automate the process of creating ground-truth labels (Kirillov et al., 2023). SAM will try to segment everything beyond just water, outputting numerous masks of different irrelevant items. While it is not yet capable of classifying labelled regions, because water normally occupies the lower half of the frame and is commonly characterized by substantial area and continuity, we can apply a heuristic that heavily favours these features, scoring regions to distinguish water mask \(m_{\text{water}}\) from other masks with very high accuracy: \[m_{\text{water}}=\max_{i}\left[\frac{A(m_{i})}{d(m_{i})+1}\right],\] where \(m_{i}\) denotes the mask of class \(i\) from SAM, \(A\) computes the total number of pixels a mask occupies, and \(d\) represents the vertical distance, in pixels, of the masked area's centroid from the image's bottom. Then, false positives within the identified mask will be filtered out. With manual checking, we found that this simple approach successfully labelled the entirety of our dataset without failure. An example of this process is shown in Fig. 11. Finally, we have a binary mask ready to be fed into training. Figure 8: The autonomy modules of our navigation system. Global mission planners are coloured in green, sensors inputs are labelled in blue, localization and local mapping nodes are shaded in orange while planning and control nodes are in purple. Figure 9: Examples of challenging conditions for semantic segmentation and disparity mapping. Our model architecture and pretrained weights are adopted from the eWaSR maritime obstacle detection network based on the ResNet-18 backbone (Tersek et al., 2023). From previous field tests in Nine Mile Lake and a stormwater management pond at the University of Toronto, we gathered around 4,000 images to serve as our training dataset and 200 for testing. In addition, 10 more labels are generated randomly using CutMix during training for every original labelled image. In the end, we are left with a lightweight Figure 11: The steps in the automatic process of generating ground-truth labels. Each distinct colour overlay represents a different object as segmented by SAM. The red region is the final ground-truth water mask after heuristic filtration. Figure 10: Example of CutMix augmented training data. A second image with a random exposure multiplier is randomly resized and placed on top of the original image. The red region highlights the pixels that are labelled as water. yet powerful neural network that outputs binary masks that accurately and consistently segment water, achieving a mIoU of 0.992 with respect to ground truth on the held-out test set of 200 images (Fig. 12). #### 5.2.2 Waterline Estimation and Tracking There are several issues associated with the direct use of raw segmentation masks produced by a neural network. Firstly, the 2D, per-pixel water labels are not inherently suited for determining traversability before the robot. Secondly, depth estimations derived from the stereo camera can be severely distorted due to unfavourable conditions such as sun glare or tranquil water reflections. Lastly, both the neural segmentation masks and depth maps can exhibit noise and inconsistency over successive timestamps. These issues necessitate that we avoid combining segmentation masks directly with depth maps to ascertain the existence of a 3D water surface. Instead, we filter the segmentation masks both spatially and temporally to approximate a waterline in 2D image space, then project this line into 3D space. This projected line then forms the basis for traversability estimates in 3D based on stereo data. Figure 12: Example images from our test set. Diverse and challenging scenarios were hand-picked to better assess model capabilities. Figure 13: The stereo-based waterline estimation pipeline. Red dots indicate the detected waterline in the image plane and in 3D. We approximate the 2D waterline as a vector comprising \(n\) elements, where \(n\) represents the image's width. Each element serves to indicate the waterline's position for that column. The fundamental premise here is that each column contains a clear division between the water and everything else - the sky, trees, people, shoreline buildings, and other dynamic obstacles. Thus, we can presume that only the pixels below the waterline are navigable, while those above are impassable. This model works well because water surfaces are typically horizontal when viewed from the first-person perspective of the ASV. Therefore, for the purpose of evaluating the robot's forward navigation, we can safely disregard any water pixels located behind in 3D, or higher than in the image space, the defined waterline. The position of the waterline on every column is identified by scanning upwards from the column's bottom until a non-water region is detected using a small moving window. If \(s\) is the window size, the separation point is the first pixel from the bottom such that the next \(s\) pixels above are all non-water. Usually, the window size is five. Our filtering process consists of two stages: spatial filtering based on RANSAC (Fischler & Bolles, 1981) and employing a Kalman filter subsequently for temporal tracking of the waterline. We design the spatial filtering step to smooth the waterline and remove spatial outliers; to this end, we employ nearest-neighbour interpolation to fit the random samples in each iteration. RANSAC uses the squared loss function to compare the interpolated waterline and the raw waterline. Then, we apply a linear Kalman filter with outlier rejection to track each individual element (column) of the waterline temporally. The Kalman filter uses RANSAC-filtered waterline as observations and maintains an estimated waterline as the state. Both the state transition matrix and the observation matrix are identities. We use a chi-squared test to discard outliers, which compares the normalized innovation squared to a predetermined threshold. Using both filters, we can eliminate noises in the segmentation mask and mitigate any temporal oscillation or abrupt changes in the predicted water segmentation masks. ### Obstacle Detection with Sonar Sonar is commonly used as a sensor in maritime applications for both ships and submarines. A specific type, the Blue Robotics Ping360 mechanical scanning sonar, serves as our primary sensing module underwater. It is mounted underwater and operates by emitting an acoustic beam within a forward-facing fan-shaped cone. This beam has a consistent width (1deg) and height (20deg). The sonar then records the echoes reflected by objects, with the reflection strength relating directly to the target's density. By measuring the return time and factoring in the speed of sound in water, the range of these echoes can be determined. The sonar's transducer can also be rotated to control the horizontal angle of the acoustic beam. Configured to scan a 120deg fan-shaped cone ahead of the boat, the sonar can complete these scans up to a range of 20m in approximately 3.5 seconds. Additionally, we also have a Ping1D Sonar Echosounder from Blue Robotics that measures water depth. The echosounder is mounted underwater and is bottom-facing. Each sonar scan Figure 14: A sonar scan and obstacle detection result. The scan is taken from the same scene and timestamp as in Fig. 13. yields a one-dimensional vector that corresponds to the reflection's intensity along the preset range. If an obstacle impedes the path of the acoustic beam, it prevents the beam from passing beyond the obstruction, leading to an acoustic shadow. This phenomenon facilitates obstacle detection via sonar scanning. Fig. 13(a) illustrates a typical sonar scan cycle that detects obstacles. A single sonar scan's raw and processed data with the resulting detected obstacle are shown in Fig. 13(b). The process begins with the removal of noisy reflections within a close range (\(<\)2.5m) before smoothing the scan using a moving-average filter. Following this, all local maxima above a specific peak threshold (50) are detected. An obstacle is identified at the first local maxima, where the average intensity post-peak falls below the shadow threshold (5). Both these thresholds are determined empirically. A post-processing filter removes detections that do not persist across a minimum of \(n\) scans (with \(n=2\) in our configuration). This is accomplished by calculating the cosine similarity between the current intensity vector and its predecessor. If an obstacle is consistently detected \(n\) times, and the cosine similarity across these successive intensity vectors exceeds 0.9, along with spatial proximity, this detected obstacle point is included. In other words, any detections occurring in isolation, either spatially or temporally, are excluded. ### Sensor Fusion with Local Occupancy Grid Upon receiving detections from sonar and stereo cameras, they are fused into a coherent local representation to facilitate local path planning and robot control. We utilize the classic occupancy map (Elfes, 1989) for our local mapping representation. The traversability of each cell is determined by naively summing the separately maintained log-odds ratios for sonar and camera. Our occupancy grid is 40m x 40m, with a cell resolution of 0.5m x 0.5m, its centre moving in sync with the robot's odometry updates. Waterline points, as detected by the stereo camera, are ray-traced in 3D back to the robot, thus lowering the occupied probability of cells within the ray-tracing range. Cells containing or adjacent to waterline points have their occupied Figure 15: Example of sensor fusion with occupancy map before an extruding rock. Yellow line is the waterline estimated by stereo camera, red dots indicate underwater obstacles detected by the scanning sonar, and white dots mean that the sonar did not detect an obstacle at that angle. probabilities increased. However, points exceeding a set maximum range do not affect occupied probabilities beyond the maximum range due to the decreasing reliability of depth measurements with increasing range. The protocol for updating the log-odds ratios for sonar is similar. Each sonar scan is ray-traced to clear the occupancy grid and marks any cells containing or close to the obstacles. The log-odds ratios of existing cells are decayed with incoming measurement updates, enhancing the map's adaptability to noisy localizations, false positives, and dynamic obstacles. Finally, we apply a median filter to the occupancy grid to smooth out and remove outliers. A limitation of this system is that sonars and the stereo camera observe different sections of the environment. The sonar may detect underwater obstacles invisible to the camera and vice versa for surface-level objects. Fig. 15 provides an example where a shallow rock in the front-right of the ASV is detected by the sonar but missed by the stereo camera. Without ample ground-truth data on the marine environment, reconciling discrepancies between these sensors proves challenging. Traversability estimation, especially in shallow water, is also complicated due to the potential presence of underwater flora (e.g. Fig. 1c) or terrain. As a solution, we opt for the simplest fusion method: directly summing the log-odds ratio in each cell. Additionally, we adjust the occupancy grid dilation based on the echosounder's water depth measurements, increasing the dilation radius when the ASV is in shallower water. The workflow of this strategy is shown in Fig. 15. While this strategy may only present a coarse traversability estimate, it still reliably detects the shoreline despite possible undetected smaller obstacles such as lily pads or weeds. The dilation adjustment employed in shallow water allows the ASV to navigate safely, avoiding prevalent aquatic plants near the shore. ### Local Path Tracking and Control The local planner and control we use are grounded in a modified version of Lateral BIT*, as proposed by Sehn et al. (2023). This optimal sampling-based planner, set within the VT&R (Furgale & Barfoot, 2010) framework, follows an arbitrary global path while veering minimally around obstacles. Lateral BIT* builds upon BIT* (Gammell et al., 2015) by implementing a weighted Euclidean edge metric in the curvilinear planning domain, with collision checks performed against the occupancy grid in the original Euclidean space. Samples are pre-seeded along the whole global path in the curvilinear coordinates before random sampling in a fixed-size sampling window around the robot. The planner operates backward from a singular goal node to the current robot location without selecting any intermediate waypoints. Lateral BIT* is also an anytime planner and can be adapted for dynamic replanning. Once an initial solution is found, an MPC tracking controller can track the solution path. The MPC optimizes the velocity commands in a short horizon to minimize the deviation from the planner solution while enforcing robot kinematic models and acceleration Figure 16: Example of the planner replanning around an obstacle and avoiding it. Blue line is the global plan (see Sec. 3.3 for details). Green is the current local plan planned using the local occupancy grid and tries to stay close to the global plan as much as possible (see Sec. 5.5). Red is the robot’s actual trajectory estimated by the GPS. The actual trajectory of the robot is jagged due to both noisy GPS signals and overaggressive control. constraints. Adopted from Sehn et al. (2023), the MPC solves the following least-squares problem: \[\underset{\mathbf{T},\mathbf{u}}{\text{argmin}}J(\mathbf{T},\mathbf{u})=\sum_{k=1 }^{K}\ln(\mathbf{T}_{\text{ref},k}\mathbf{T}_{k}^{-1})^{\vee^{T}}\mathbf{Q}_{ k}\ln(\mathbf{T}_{\text{ref},k}\mathbf{T}_{k}^{-1})^{\vee}+\mathbf{u}_{k}^{T} \mathbf{R}_{k}\mathbf{u}_{k}\] s.t. \[\mathbf{T}_{k+1}=\exp\left((\mathbf{P}^{T}\mathbf{u}_{k})^{\wedge}h\right) \mathbf{T}_{k},k=1,2,...K\] where \(\mathbf{T}\in SE(3)\) are poses and \(\mathbf{u}=[v\;\omega]^{T}\) are velocities. The objective function minimizes the pose error between the reference trajectory \(\mathbf{T}_{\text{ref},k}\) and the predicted trajectory \(\mathbf{T}_{k}\) while keeping the control effort \(\mathbf{u}_{k}\) minimum. The two constraints are the generalized kinematic constraint and actuation limits. We tune the cost matrices \(\mathbf{Q}\) and \(\mathbf{R}\) to balance the cost between different degrees of freedom. We refer readers to Sec. V of Sehn et al. (2023) for more details. If a newly detected obstacle obstructs the current best solution path, the planner will truncate its planning tree from the obstacle to the robot, triggering a replan or rewire from the truncated tree to the robot's location. Fig. 16 shows an example from the field test where our robot detected obstacles and replanned its trajectory accordingly. Because the resolution of the satellite map is low (10m/cell), our global path could be blocked by large rocks and terrains, especially the pinch points. Hence, we adjust the maximum width and length of the sampling window and tune the parameters balancing lateral deviation and path length. If there are no viable paths locally within the sampling window and the planner cannot find a solution after 1 second, the controller will stop the ASV and stabilize it at its current location. ## 6 Real World Experiments ### Robot Our ASV platform, as depicted in Fig. 17, consists of a modified _Clearpath Heron_ ASV equipped with a GPS, IMU, Zed2i stereo camera, Ping360 scanning sonar, and a Ping1d Sonar Echosounder Altimeter. The stereo camera is positioned in a forward-facing configuration and has a maximum depth range of 35 m. The Ping360 sonar is configured to perform a 20 m by 125\({}^{\circ}\) cone scan in front of the robot every 3.5 seconds, achieving a resolution of 1.8\({}^{\circ}\). All computational tasks are handled by an Nvidia Jetson AGX Xavier and the onboard Intel Atom (E3950 @ 1.60GHz) PC on the Heron. A lithium-ion battery with an energy capacity of 88 WH powers the Jetson, stereo camera, and Ping360 sonar for approximately one hour, while a 417.6-Wh NiMH battery pack powers the motors and other electronic components for around two hours. A schematic of the electrical system is presented in Fig. 17c. The additional payloads carried by the ASV have a combined mass of roughly 9 kg. Although water samplers have not been integrated into our system, they can be easily fitted in the future. The maximum speed of our ASV is approximately 1.2 m/s. Additionally, we have a remote controller available for manual mode operation, which can be utilized for safety purposes if needed. ### System Implementation Details Our system's computational load is divided into offline and online processes (Figure 8). The online tasks are distributed between two onboard computers: the Atom PC and the Jetson. An Ethernet switch connects these computers, the sonar, and Heron's WiFi Radio. The GPS and IMU are connected to the Atom PC via USB, while the echo-sounder sonar and stereo cameras are connected to the Jetson via USB. The switch allows remote SSH access and data transfer between the Atom and the Jetson. We use the ROS framework (Quigley et al., 2009) for implementing our autonomy modules in C++ and Python. To synchronize time between the two machines, we employ Chrony for network time protocol setup. The Atom PC acts as the ROS master, responsible for vehicle interface, localization, updating the occupancy grids, running the local planner, and MPC controller. The Jetson handles resource-intensive tasks such as depth map processing, semantic segmentation, sonar obstacle detection, and data logging. Additionally, we provide web visualization using a ROS node with a Node.js server, projecting the robot's global pose, trajectory, and plan onto a cached _OpenStreetMap_ and water mask. We also provide a Rviz visualizer to display the occupancy grid and outputs of the local planner and MPC. During the mission, the web server publishes the robot's locations and states in real-time on a web page served on the local network, using pre-downloaded satellite maps. The web visualization and Rviz can be accessed in the field from a laptop connected to Heron's WiFi. Prior to the mission, we precompute the high-level graph and optimal policy, which are loaded onto the onboard PC. We periodically save the status of policy execution online, enabling easy policy reloading in case of a battery change during testing. Figure 17: Our _Clearpath Heron_ ASV for water-quality monitoring during a field test. The ASV has onboard sensors (GPS, IMU, underwater scanning sonar, stereo camera) and an Nvidia Jetson to process sensor measurements. Power and communication lines for our ASV are shown in (c). ### Testing Site Our planning algorithm was evaluated at Nine Mile Lake in McDougall, Ontario, Canada. Detailed test sites and the three executed missions can be found in Fig. 18. The Lower Lake Mission in Fig. 18a repeats the field test from our prior work (Y. Huang et al., 2023), involving a 3.7 km mission with five sampling points, three of which are only reachable after navigating a stochastic edge. The stochastic edge at the bottom-left compels the ASV to maneuver through a thin opening amid substantial rocks not discernible in the _Sentinel-2_ satellite images. Besides repeating the old experiment from our prior work, we added two additional missions in the lake's upper areas. Also, to assess our local mapping and planning stack's capabilities, an ablation mission was executed to see if the robot could safely navigate the stochastic edge at the bottom-left of the Lower Lake Mission. The policy in Upper Lake Mission (Short) was directly generated from the Fig. 3 water mask. In fact, the high-level graph in Fig. 3 and policy in Fig. 4 is a simplified toy version of our testing policy in the Upper Lake Mission (Short). We observed that our NovAtel GPS receiver's reliability was impaired by large trees on the left stochastic edge in Fig. 18b. On the right stochastic edges of the same subfigure, shallow regions, lily pads, and weeds were numerous. Lastly, we extended this short mission to include three additional sampling sites and another stochastic edge at the lake's farthest point. The expected length of this Upper Lake Mission (Long) in Fig. 18c is approximately 3.3km. ### Results of Mission Planner The aim of our field experiments is to test if our autonomy stack can successfully execute a global mission policy correctly and fully autonomously, without any manual interventions. Results are summarized in Table 1 and we provide the overview and analysis of our results below. **Lower Lake Mission** We undertook the lower lake mission twice - first, using both sonar and camera, and second, using only the camera. The ASV successfully reached 4/5 targeted locations during both trials, with the exception of the bottom-left location. This was due to the ASV's inability to autonomously navigate through large rocks within the designated time frame. When contrasted with prior experiments noted in Y. Huang et al. (2023), our trials showed marked improvement, with only a single manual intervention required due to algorithmic failure during the first run, and none during the second. The intervention was necessitated by the ASV's collision with a tree trunk (the one in Fig. 1b) it failed to identify, resulting in manual maneuvering to remove the obstruction. In both trials, the policy executor deemed the bottom-left stochastic edge untraversable because the local planner did not find a path through large rocks within the time limit. The ASV was then safely directed back to the last sampling location and starting location. Moreover, through these trials, we noted a significant improvement in the stability of our navigational autonomy compared to the same field test last year. The inclusion of a new semantic segmentation network for the stereo camera allowed the ASV to navigate confidently even in conditions of high sunlight glare or calm water. Sonar detection capabilities facilitated the identification and avoidance of underwater rocks by the local planner. Through the incorporation of a Model Predictive Control (MPC) tracking controller, the reliance on GPS velocity estimates was removed. Lastly, the decision to use an 88Wh battery on the \begin{table} \begin{tabular}{l l l l} \hline \hline Mission & Sensors Used & Node Visited & Interventions \\ \hline Lower Lake Mission & Sonar + Camera & 4/5 & Once \\ Lower Lake Mission & Camera Only & 4/5 & None \\ \hline Upper Lake Mission (Short) & Sonar + Camera & 1/1 & None \\ Upper Lake Mission (Short) & Sonar + Camera & 1/1 & None \\ Upper Lake Mission (Short) (Left Edge Blocked) & Sonar + Camera & 1/1 & None \\ Upper Lake Mission (Short) (Left Edge Blocked) & Sonar + Camera & 0/1 & None \\ \hline Upper Lake Mission (Long) & Sonar + Camera & 5/5 & Once \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the results of our tests for different policies, including any interventions due to algorithmic failure (excluding battery changes). ASV markedly improved the Jetson's battery life, thereby negating the need for battery changes during each mission. In Table 2, we show that the Jetson and onboard PC are very power-hungry during one of the testing trials. A microcontroller inside our ASV measures the power of the onboard PC, and we use the jetson-stats tool to log the power of the Jetson. Although the measurement is anecdotal and the exact power of the Jetson, the Jetson and onboard PC are very power-hungry during one of the testing trials. Figure 18: Representative examples of global plans and trajectories traversed during field experiments. consumption can depend on other factors, such as the state of the battery and operating temperatures, the 88WH battery powering the Jetson can certainly last through a two-hour-long experiment. **Upper Lake Mission (Short)** We performed four tests of this policy on the upper lake to determine if our robot could execute different policy branches and navigate both sides of the central island, which were visibly passable based on aerial observations. The criteria for success entailed either a safe traversal through the stochastic edges on either side within the assigned time limit or a safe return to the start point without collisions. We initially executed the policy twice as it was. The policy guided the ASV to navigate and return along the left stochastic edge, which had a lower expected cost than the right edge. For the following two trials, we artificially executed a different branch of the policy by setting an early timeout, thereby blocking the left edge and compelling the ASV to navigate the right edge. The ASV executed the mission-level policy fully autonomously throughout all four trials, except for a battery change. Navigating the left side proved straightforward without any intervention, despite occasional GPS signal disruptions. On the right side, the ASV reached the target area once but timed out on the second trial. Despite the need for a battery change during one trial, no collisions occurred during any test. Importantly, we considered a trial a success despite the ASV not reaching the designated target if the overall policy was autonomously executed. **Upper Lake Mission (Long)** We expanded the previous policy to a more extensive mission, covering a larger area of Nine Mile Lake's upper parts with the same starting point as the shorter mission. First, the boat navigated the stochastic edges on the island's left side to reach the sample point, and it returned using the same path. Despite this, significantly deteriorated GPS signals were observed at the edge's end, preventing the mission-level policy executor from detecting the completion of the edge traversal due to GPS solution noise. Consequently, a manual restart of the policy executor was necessary. Thereafter, the ASV proceeded upwards to the next sample point before making a left turn to go through a shortcut pinch point, visiting two more sample points. Following a brief stop for battery replacement, the ASV completed the remaining mission. In evaluation, our local perception pipeline performed commendably in this area despite never previously collecting data here. Particularly, the synergy of sonar obstacle detection and stereo camera's semantic waterline estimation showed high reliability in close-range shoreline and obstacle detection with very minimal false positives. ### Isolated Testing of Local Planner A main contribution of our current work is the new perception and local planner modules that can safely disambiguate stochastic edges and navigate safely and autonomously in obstacle and terrain-rich waterways without high-resolution prior maps. To verify this, we tested the local planner on a stochastic edge ten times with the exact same parameter, five times each in either direction. Success was demonstrated by either reaching the stochastic edge's other endpoint within a set time frame or returning to the starting point upon timeout of the policy executor. Without intervention, the ASV accomplished this 70% of the time. However, in three instances, it collided with or became trapped by obstacles, such as rocks and a tree trunk. The global path extracted from the _Sentinel-2_ Image was interrupted by a large rock, with only two narrow openings between the rocks, manually traversable, as demonstrated in Fig. 20 (b). One of the narrow openings is visible from the aerial view in Fig. 20 (d). Our ASV can detect these rocks; however, the over-aggressive dilation parameter obstructs the local planner from charting a path through the central passageway (see Fig. 19). Although marginally wider, the other opening exceeded the maximum corridor width of our local planner's curvilinear space, being over 30m away from the nearest point on the global \begin{table} \begin{tabular}{c|c|c c} \hline Device & Heron CPU & Jetson CPU & Jetson GPU \\ \hline Usage(\%) & 75.2 & 61.6 & 89.3 \\ Power(W) & 9.2 & 19.3 (Combined) \\ \hline \end{tabular} \end{table} Table 2: Usage and power consumption of our compute devices during a Lower Lake Mission. path. Relying exclusively on GPS/IMU for location and a local occupancy grid focused around the ASV poses considerable challenges in this terrain, due to imprecision in localizing obstacles relative to the robot and issues controlling tight turns and precise path tracking, escalating the collision risk in confined spaces. In order to mitigate noise and path plan conservatively, occupancy values were decayed over time, and substantial dilation was applied around occupied cells. As such, the ASV would not construct and finetune a consistent local map but would instead overlook previously encountered obstacles. Consequently, the local planner oscillates between two temporarily obstacle-free paths in the occupancy grid, while the ASV stops and unsuccessfully searches for a traversable path locally until the timer limit is reached, as shown in Fig. 20 (a) and Fig. 20 (c). Figure 19: Comparison between the robot’s occupancy grid maps and aerial image. Yellow dots are waterline estimated in 3D. Red dots are obstacles detected by sonar. Boat symbols are added to (b) and (d) for context. The global plan (blue line) is blocked by rocks, so the ASV needs to detour through the narrow opening. However, the passage is blocked on the occupancy grid due to our inaccurate detection, localization, and excessive dilation. Another key reason for the low quality of the occupancy grid is the difficulty of fusing sonar and stereo camera measurements, especially at longer ranges. Since sensor fusion occurs solely within the occupancy map, both sensors need to detect an obstacle simultaneously at the same location in the map for accurate fusion. This can prove challenging due to a variety of reasons. For instance, depth measurements produced by the stereo camera tend to be noisier over a larger range. Our camera is not capable of detecting underwater obstacles detected by sonar. Additionally, our system lacks effective uncertainty measures for updating sonar and stereo observations within the occupancy map, especially when the two sources provide conflicting data. For example, the ASV simply did not detect the tree trunk. Thus, our sensor fusion mechanism proves effective Figure 20: Comparison of the global plan, manual traversal, and autonomous navigation through the stochastic edge. The global plan, calculated from coarse satellite images, is blocked by a rock. In (b), the ASV was able to pass the narrow opening under manual teleoperation. However, the ASV was unable to identify the opening in the local occupancy grid in autonomous mode (see. Fig. 19), so it searched for an opening in place until the time limit and returned to the start. only over shorter ranges where the sonar and camera are more likely to align. If it is possible to extend the range of our perception modules, the ASV could formulate more optimized navigation paths, preventing collisions with obstacles such as rocks. ## 7 Lessons Learned In this section, we outline insights garnered from our field tests, emphasizing successful design aspects related to field-tested ASV navigation systems and suggesting potential improvements for future iterations. **Timer** Primarily, we found that using a timer to disambiguate stochastic edges was simple, robust, and practical. Integration of a timer within our ROS-based system was easy and could accommodate unexpected hindrances such as strong winds, making stochastic edges difficult to traverse. This allowed for uninterrupted policy execution even when the local planner failed to identify viable paths through a traversable stochastic edge. Essentially, the inclusion of a timer fostered independence between the execution of our mission-level policy and the selections of local planners, enabling the ASV to conduct water sampling missions irrespective of local planner errors. **Localization** A critical limitation of our system lies in the absence of precise GPS localization. Our system necessitates a seamless integration of local mapping with broader satellite maps to facilitate accurate navigation in complex scenarios, such as those illustrated in Fig. 20. A GPS alternative, such as SLAM, would introduce redundancy, bolstering navigation robustness when GPS signals become compromised due to obstructions, interferences, or adverse weather conditions. Furthermore, minimizing localization noise could enhance speed and steering control, enabling the ASV to operate more swiftly and smoothly. **Occupancy Grid** As demonstrated in the previous section, our occupancy grid map also struggles with sensor fusion - particularly over long ranges where sonar and stereo camera measurements can contradict. These inconsistencies necessitated the introduction of a time-decay factor and significant dilation around obstacles. As a result, we observed a 'drunken sailor' phenomenon, wherein the ASV constantly navigates within a confined space without any real progress. We think that semantic SLAM integration with the stereo camera could ameliorate local occupancy map issues. If SLAM can provide a locally consistent and metrically accurate map of higher quality, the decay factor in the occupancy grid becomes unnecessary and the planner will not oscillate. While SLAM is impractical in open water due to the absence of stationary features near the robot, it becomes viable in densely obstacle-populated scenes such as pinch points or shorelines. Localizing the robot against semantic-based local features could lead to more accurate localization and, furthermore, improve obstacle-relative pose estimation and traversability assessment. As we can store and grow the map as the robot explores unknown areas, the planner can also work with a static occupancy grid and avoid any oscillation. Furthermore, we also recommend better exploration strategies to build local maps and search for traversable paths rather than fixing the planning domain size around the precomputed global path from inaccurate satellite images. As the map can be expanded when the robot explores unknown areas, the planner can work with a fixed occupancy grid to avoid oscillation. Additionally, more effective local map building and traversable path searching strategies might provide better solutions than confining the planning domain size around inaccurate satellite images' precomputed global path. **Evasive Maneuvers** Our system currently lacks evasive maneuvers. Despite collisions with obstacles, the robot could feasibly retreat and navigate back to unobstructed waters. However, our local planner often fails to detect forward obstacles, continuing to chart a forward path after collisions. Both the stereo camera and sonar have minimum range limitations, resulting in undetected proximate obstacles. We could introduce the timer mechanism to prompt evasive maneuvers. For instance, if the ASV remains stationary despite forward movement instructions from the planner and controller, it should back up and reset its local planner to circumnavigate the same area. While the ASV may struggle to self-extricate from a beach or shallow rock without human assistance, evasive maneuvers could facilitate the avoidance of obstacles such as tree trunks or aquatic plants. **Sonar** The incorporation of sonar in our system entails both advantages and drawbacks. Positively, it enabled the detection and circumvention of underwater obstacles, beyond the stereo camera's capabilities. Conversely, the sonar's slow scanning rate (3 seconds per scan) restricts it from being the solitary onboard perception sensor. Additionally, our heuristic-based obstacle detection method fails to recognize minor obstacles, such as lilypads or weeds. While the sonar effectively gauges obstacle distances from the ASV, it cannot determine the depth of underwater obstacles since it scans horizontally. This depth ambiguity complicates traversability estimation, which relies on exact water and underwater obstacle depth knowledge. Moreover, merging sonar with the stereo camera proves challenging due to their observing different world sections. **System Integration** While our autonomy algorithms demonstrated commendable field performance, potential improvements from a system engineering standpoint remain. An immediate goal is enhancing our software's efficiency to decrease computational load and power consumption on both the Atom PC and the Jetson. For instance, running semantic SLAM alongside the existing stack would require additional power and considerable software optimization to avoid straining our computers further. Aside from optimizing power use, improvements to efficiency, reliability, and usability could be advantageous, particularly for nontechnical users. Our Rviz and web interface user displays contain critical monitoring and debugging information but demand extensive navigation system familiarity. Our data logging pipeline consumes substantial storage space (about 1GB/min), imposing both storage and time cost burdens for copying and analysis. Booting up the GPS in the field was another challenge due to prolonged wait times for adequate satellite acquisition for autonomous navigation. In terms of future hardware, vegetation-proof boat hulls and propellers should be considered given the increased drag and potential damage to the propeller blades from aquatic plant interference. Furthermore, electronic connectors capable of withstanding transportation-induced vibrations and cables that shield connections from interference would enhance overall system robustness. **Field Logistics** Our field logistics proved successful largely due to employing a motorboat, facilitating rapid transportation of the robot, personnel, and supplies to remote testing locations on the lake. During trials, staying in close proximity to the robot or flying a drone for tracking was straightforward using a motorboat. In case of forgotten crucial equipment, swift return trips to the base camp for recovery were possible. Our field tests, spanning three days, were completed as planned, despite limited time and battery life. ## 8 Conclusions In conclusion, we have introduced a multi-sensor navigation system for Autonomous Surface Vehicles (ASVs) that uses satellite images to plan mission-level navigation policies offline. Our mission planner models the uncertainty in satellite images as stochastic edges, and formulates a Partial Covering Canadian Traveller Problem (PCCTP) on a high-level graph. We propose PCCTP-AO*, an optimal, informed-search-based method capable of finding a policy with the minimum expected cost. Our method has been evaluated in simulated graphs generated from real Canadian lakes, and results show that our optimal policy can save from 1%(50m) to 15%(1.8km) of travel distance. To deploy the policy for field operations, we construct a GPS-, vision-, and sonar-enabled navigation system to execute preplanned policies. Our local mapping modules combine a neurally estimated waterline from the stereo camera with underwater obstacles detected by a mechanically scanning sonar. The local motion planner then determines a path to avoid obstacles while still adhering to the global path from the precomputed policy. Through extensive field tests, we have demonstrated that our ASV navigation system can effectively execute the mission-level policy in the presence of unmapped obstacles with minimal intervention. Despite its simplicity, we found that the timer-based architecture can safely disambiguate stochastic edges and reliably complete km-scale missions. However, according to our ablation tests, traversability assessment and localization remain bottlenecks for local mapping and motion planning performance. We hope the lessons from this development process and the insights gained will foster advancements in future ASV systems. #### Acknowledgments We would like to acknowledge the Natural Sciences and Engineering Research Council of Canada (NSERC) for supporting this research.
2309.05982
Anisotropy-assisted magnon condensation in ferromagnetic thin films
We theoretically demonstrate that adding an easy-axis magnetic anisotropy facilitates magnon condensation in thin yttrium iron garnet (YIG) films. Dipolar interactions in a quasi-equilibrium state stabilize room-temperature magnon condensation in YIG. Even though the out-of-plane easy-axis anisotropy generally competes with the dipolar interactions, we show that adding such magnetic anisotropy may even assist the generation of the magnon condensate electrically via the spin transfer torque mechanism. We use analytical calculations and micromagnetic simulations to illustrate this effect. Our results may explain the recent experiment on Bi-doped YIG and open a pathway toward applying current-driven magnon condensation in quantum spintronics.
Therese Frostad, Philipp Pirro, Alexander A. Serga, Burkard Hillebrands, Arne Brataas, Alireza Qaiumzadeh
2023-09-12T06:22:36Z
http://arxiv.org/abs/2309.05982v3
# Anisotropy-assisted magnon condensation in ferromagnetic thin films ###### Abstract We theoretically demonstrate that adding an easy-axis magnetic anisotropy facilitates magnon condensation in thin yttrium iron garnet (YIG) films. Dipolar interactions in a quasi-equilibrium state stabilize room-temperature magnon condensation in YIG. Even though the out-of-plane easy-axis anisotropy generally competes with the dipolar interactions, we show that adding such magnetic anisotropy may assist the generation of the magnon condensation electrically, via the spin transfer torque mechanism. We use analytical calculations and micromagnetic simulations to illustrate this effect. Our results may explain the recent experiment on Bi-doped YIG and open a new pathway toward application of current-driven magnon condensation in quantum spintronics. _Introduction--_. Magnon condensation with nonzero momentum at room temperature [1] is a fascinating phenomenon first observed in 2006. The condensed magnons were observed at the two degenerate magnon band minima of yttrium iron garnet (YIG), and easy-plane ferrimagnetic insulator with very low magnetic dissipation [2; 3], as the spontaneous formation of a quasi-equilibrium and coherent magnetization dynamics in the momentum space [4]. To generate condensate magnons, magnon must be pumped into the system by an incoherent stimulus such as parametric pumping [5; 6; 7; 8; 9; 10; 11; 12; 13; 14] and/or spin-transfer torque [15; 16; 17; 18; 19]. The system may thermalize above a critical magnon density to form a quasi-equilibrium magnon condensation state at the bottom of magnon bands. The study of magnon condensation is not only interesting from an academic point of view, but it is also of great importance in various areas of quantum technology and applied spintronics [20; 21; 11; 22]. At high magnon densities, the relevant regime for the magnon condensation state and nonlinear magnon-magnon interactions becomes important. A (meta)stable and steady quasi-equilibrium magnon condensation requires an effective repulsive interaction between magnon quasiparticles. It was shown that in a system mainly influenced by exchange interaction, magnons are attractive, but dipolar interactions in YIG may change the sign of nonlinear magnon interactions and thus are crucial for the creation of a (meta)stable condensate magnon state [23; 24; 25; 26; 27; 28; 29; 8; 9; 10; 23; 29; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. Recently, it was shown that the thermalization time of magnon condensation is reduced in confined nanoscopic systems [30]. It was also demonstrated that the lateral confinement in YIG enhances the dipolar interaction along the propagation direction and causes a deeper band depth, i.e., the difference between ferromagnetic resonance (FMR) and magnon band minima. Increasing the magnon condensation lifetime was attributed to this enhancement of the band depth [30]. In another recent achievement in magnon condensation experiments, Divinsky et al. [31] found evidence of condensation of magnons by spin-transfer torque mechanism. They introduced a small perpendicular magnetocrystalline anisotropy (PMA) through bismuth doping in the thin film of YIG, while the magnetic ground state still resides within the plane. This discovery opens a new route toward electronic control of magnon condensation. However, the interplay between the dipolar interactions, which was previously shown to be essential for the stability and thermalization of magnon condensation, and the counteracting out-of-plane easy-axis magnetic anisotropy, is so far uncharted. This article studies the nonlinear magnon interactions by analyzing the mechanism behind the anisotropy-assisted formation of the magnon condensate. We present simulations within the Landau-Lifshitz-Gilbert framework [32; 33; 34] that support analytical calculations. _Model--_. We consider a thin ferromagnetic film in the \(y-z\) plane to model YIG. The magnetic moments are directed along the \(z\) direction by an external magnetic field of strength \(H_{0}\). The magnetic potential energy of the film contains contributions from the isotropic exchange interaction \(\mathcal{H}_{\rm ex}\), Zeeman interaction \(\mathcal{H}_{\rm Z}\), dipolar interaction \(\mathcal{H}_{\rm dip}\), and additionally a PMA energy \(\mathcal{H}_{\rm an}\) in the \(x\) direction, normal to the film plane. YIG has a weak in-plane easy-axis that can be neglected compared to the other energy scales in the system. The total spin Hamiltonian of the system reads, \[\mathcal{H}=\mathcal{H}_{\rm ex}+\mathcal{H}_{\rm Z}+\mathcal{H}_{\rm dip}+ \mathcal{H}_{\rm an}. \tag{1}\] The PMA energy is given by, \[\mathcal{H}_{\rm an}=-K_{\rm an}\sum_{j}(\mathbf{S}_{j}\cdot\hat{x})^{2}, \tag{2}\] where \(K_{\rm an}>0\) is the easy-axis energy, \(\hbar\mathbf{S}_{j}\) is the vector of spin operator at site \(j\), with \(\hbar\) is the reduced Planck constant. Details of the Hamiltonian can be found in the Supplemental Material (SM) [35]. The Holstein-Primakoff spin-boson transformation [36] allows us to express the spin Hamiltonian in terms of the magnon creation and annihilation operators. The amplitude of the effective spin per unit cell in YIG at room temperature is large \(\hbar S\approx 14.3\hbar\), [27, 37, 38], and thus we can expand the spin Hamiltonian in the inverse powers of the spin \(S\). Up to the lowest order in nonlinear terms, the magnon Hamiltonian \(\mathcal{H}\) of a YIG thin film can be expressed as the sum of two components: \(\mathcal{H}_{2}\) and \(\mathcal{H}_{4}\). The former represents a noninteracting magnon gas comprising quadratic magnon operators. The latter, on the other hand, constitutes nonlinear magnon interactions characterized by quartic magnon operators; see the SM for details [35]. Note that three-magnon interactions are forbidden in our geometry by the conservation laws [39] _Magnon dispersion of YIG with a finite PMA--_. The magnon dispersion in YIG is well known and has been studied extensively in both experimental and theoretical works [40, 2, 41]. Magnons travelling in the direction of the external magnetic field have the lowest energy. These so-called backward volume magnetostatic (BVM) magnons have a dispersion with a double degenerate minimum at finite wavevectors \(q_{z}=\pm Q\). When pumping magnons into the thin film, the magnons may thermalize and eventually form a condensate state in these two degenerate minima with opposite wavevectors. The noninteracting magnon Hamiltonian and the dispersion of BVM magnons, along the \(z\) direction, in the presence of a finite PMA reads, \[\mathcal{H}_{2} =\sum q_{z}\hbar\omega_{q_{z}}\hat{c}_{q_{z}}^{\dagger}\hat{c}_{q _{z}}, \tag{3a}\] \[\hbar\omega_{q_{z}} =\sqrt{A_{q_{z}}^{2}-B_{q_{z}}^{2}}, \tag{3b}\] where \(\hat{c}_{q_{z}}^{\dagger}(\hat{c}_{q_{z}})\) are the magnon creation (annihilation) operators, and \[A_{q_{z}} =D_{\text{ex}}q_{z}^{2}+\gamma(H_{0}+2\pi M_{S}f_{q})-K_{\text{an }}S, \tag{4a}\] \[B_{q_{z}} =2\pi M_{S}f_{q}-K_{\text{an}}S. \tag{4b}\] Here, \(D_{\text{ex}}\) is the exchange stiffness, \(M_{S}=\gamma\hbar S/a^{3}\) is the saturation magnetization, with \(\gamma=1.2\times 10^{-5}\,\text{eV}\,\text{O}\text{e}^{-1}\) is the gyromagnetic ratio, and \(a=12.376\text{\AA}\) is the lattice constant of YIG. The form factor \(f_{q}=(1-e^{-|q_{z}|L_{x}})/(|q_{z}|L_{x})\) stems from dipolar interactions in a thin magnetic film with thickness \(L_{x}\)[42, 43]. Fig. 1 shows the effect of PMA on the magnon dispersion of YIG. PMA decreases both the ferromagnetic resonance (FMR), and the magnon band gap at the \(\Gamma\) point \(\omega_{q_{z}=0}\), in addition to a greater decrease in the magnon band gap at the band minima \(\omega_{q_{z}=\pm Q}\). Therefore the band depth \(\Delta\omega=\omega_{q_{z}=0}-\omega_{q_{z}=\pm Q}\) is increased. The position of the band minima at \(q_{z}=\pm Q\) is also shifted to larger momenta. In addition, the curvature of the minima increases as a function of the anisotropy strength. Above a critical PMA, \(K_{\text{an}}^{c_{2}}\), the magnetic ground state is destabilized and the in-plane magnetic state becomes out-of-plane. We are interested in the regime in which the magnetic ground state remains in-plane, and thus the effective saturation magnetization is positive \(M_{\text{eff}}=M_{S}-2K_{\text{an}}/(\mu_{0}M_{S})>0\) The effect of PMA on magnon dispersion resembles the effect of confinement in the magnon spectra of YIG. In Ref. [30], it was shown that transverse confinement in a YIG thin film leads to an increase of the FMR frequency, the band depth, as well as shifting the band minima to higher momenta while the magnon band gap at the band minima is also increased. It was shown that this change of the spectrum in confined systems increases the magnon condensate lifetime. Therefore, we expect, in a similar way, PMA increases the magnon condensate lifetime and assists the generation of magnon condensation. _Nonlinear magnon interactions in the presence of PMA--_. Magnons are considered quasiparticles that exhibit weak interactions in the low-density regime, but their intensity of nonlinear interactions increases as their density increases. Repulsive interactions are essential for thermalizing injected nonequilibrium magnons and creating a metastable magnon condensation at a steady and quasi-equilibrium state. Since the discovery of magnon condensation, there has been a long debate over the origin of magnon thermalization [26, 27, 28, 8, 44, 9]. The nonlinear interaction of condensate magnons at the two degenerate minima, \(q_{z}=\pm Q\), consists of intra- and inter-band contributions, \(\mathcal{H}_{4}=\mathcal{H}_{4}^{\text{intra}}+\mathcal{H}_{4}^{\text{inter}}\), where \[\mathcal{H}_{4}^{\text{intra}} =A(\hat{c}_{Q}^{\dagger}\hat{c}_{Q}^{\dagger}\hat{c}_{Q}\hat{c}_{ Q}+\hat{c}_{-Q}^{\dagger}\hat{c}_{-Q}^{\dagger}\hat{c}_{-Q}), \tag{5a}\] \[\mathcal{H}_{4}^{\text{inter}} =2B(\hat{c}_{Q}^{\dagger}\hat{c}_{-Q}^{\dagger}\hat{c}_{Q}\hat{c} _{-Q})+C(\hat{c}_{Q}^{\dagger}\hat{c}_{-Q}\hat{c}_{Q}\hat{c}_{-Q}\] \[\hat{c}_{-Q}^{\dagger}\hat{c}_{-Q}\hat{c}_{-Q}\hat{c}_{Q}+\text{h. c.})+D(\hat{c}_{Q}^{\dagger}\hat{c}_{Q}^{\dagger}\hat{c}_{-Q}^{\dagger}\hat{c}_{-Q}^{ \dagger}+\text{h.c.}). \tag{5b}\] Figure 1: The dispersion of noninteracting BVM magnons in a YIG thin film for various PMA strengths. The inset shows the depth of the band minima as a function of the PMA strength. We set \(L_{x}=50\,\text{nm}\) and \(H_{0}=1\,\text{kOe}\) The interaction amplitudes are given by, \[A= -\frac{\gamma\pi M_{S}}{SN}\big{[}(\alpha_{1}+\alpha_{3})f_{Q}-2 \alpha_{2}(1-f_{2Q})\big{]}\] \[-\frac{D_{\mathrm{ex}}Q^{2}}{2SN}(\alpha_{1}-4\alpha_{2})+\frac{K_{ \mathrm{an}}}{2N}(\alpha_{1}+\alpha_{3}), \tag{6a}\] \[B= \frac{\gamma 2\pi M_{S}}{SN}\big{[}(\alpha_{1}-\alpha_{2})(1-f_{2Q})-( \alpha_{1}-\alpha_{3})f_{Q})\big{]}\] \[+\frac{D_{\mathrm{ex}}Q^{2}}{2SN}(\alpha_{1}-2\alpha_{2})+\frac{K_ {\mathrm{an}}}{N}(\alpha_{1}+\alpha_{3}),\] (6b) \[C= \frac{\gamma\pi M_{S}}{2SN}\big{[}(3\alpha_{1}+3\alpha_{2}+4 \alpha_{3})f_{Q}-\frac{8}{3}\alpha_{3}(1-f_{2Q})\big{]}\] \[+\frac{D_{\mathrm{ex}}Q^{2}}{3SN}\alpha_{3}+\frac{K_{\mathrm{an} }}{4N}(3\alpha_{1}+3\alpha_{2}+4\alpha_{3}),\] (6c) \[D= \frac{\gamma\pi M_{S}}{2SN}\big{[}(3\alpha_{1}+3\alpha_{2}+4 \alpha_{3})f_{Q}-2\alpha_{2}(1-f_{2Q})\big{]}\] \[+\frac{D_{\mathrm{ex}}Q^{2}}{2SN}\alpha_{2}+\frac{K_{\mathrm{an} }}{2N}(3\alpha_{2}+\alpha_{3}). \tag{6d}\] Here, \(N\) is the total number of spin sites. The dimensionless parameters \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\) are related to the Bogoliubov transformation coefficients, listed in the SM [35]. An off-diagonal long-rage order characterizes the condensation state. The condensate state is a macroscopic occupation of the ground state and can be represented by a classical complex field. Therefore, to analyze the stability of the magnon condensate state, we perform the Madelung's transform \(\hat{c}_{\pm Q}\rightarrow\sqrt{N_{\pm 0}}e^{i\phi_{\pm Q}}\), in which the macroscopic condensate magnon state is described with a coherent phase \(\phi_{\pm Q}\) and a population number \(N_{\pm Q}\)[27; 28]. The total number of condensed magnons is \(N_{c}=N_{+Q}+N_{-Q}\), while the distribution difference is \(\delta=N_{+Q}-N_{-Q}\). We also define the total phase as \(\Phi=\phi_{+Q}+\phi_{-Q}\). Finally, the macroscopic four-magnon interaction energy of condensed magnons is expressed as, \[\mathcal{V}_{4}(\delta,\Phi)= \frac{N_{c}^{2}}{2}\big{[}A+B+2C\cos\Phi\sqrt{1-\frac{\delta^{2}} {N_{c}^{2}}}\] \[+D\cos 2\Phi-\big{(}B-A+D\cos 2\Phi\big{)}\frac{\delta^{2}}{N_{c}^{ 2}}\big{]}. \tag{7}\] Without PMA, this expression is reduced to the one derived in [45]. Now, we can look at the total interaction energy and interaction amplitudes in more detail. Figure 2 shows the effective interaction potential as a function of the PMA. In a critical PMA strength, \(K_{\mathrm{an}}^{c_{1}}\), the sign of the interaction changes from repulsive to attractive. This critical anisotropy is well below the critical magnetic anisotropy strength \(K_{\mathrm{an}}^{c_{2}}\) that destabilizes the inplane magnetic ground state. The necessary condition to reach a steady-state quasi-equilibrium magnon condensation is the presence of repulsive interactions between magnons; thus, in the following, we consider a PMA strength below the critical anisotropy \(K_{\mathrm{an}}<K_{\mathrm{an}}^{c_{1}}\). I this regime, the intraband interaction is attractive, and thus interband contributions are important. The interacting potential energy, Eq. (7), has five extrema at, \[\delta_{1} =0,\Phi=0; \tag{8a}\] \[\delta_{2} =0,\Phi=\pi;\] (8b) \[\delta_{3} =0,\Phi=\cos^{-1}(-\frac{C}{D});\] (8c) \[\delta_{4} =N_{c}\big{[}1-(\frac{C}{B-A+D})^{2}\big{]}^{\frac{1}{2}},\Phi=0;\] (8d) \[\delta_{5} =\delta_{4},\Phi=\pi. \tag{8e}\] Whether these extrema represent minima of the interacting potential energy relies on the system thickness \(L_{x}\) and the strength of the applied magnetic field \(H_{0}\). _Phase diagram for magnon condensate--_. Now, we explore the stability of the magnon condensate as a function of the thickness of the film \(L_{x}\) and the strength of the external magnetic field \(H_{0}\), using the typical YIG parameters, see Table 1. \begin{table} \begin{tabular}{l l l} Parameter & Symbol & Value \\ Saturation magnetization & \(4\pi M_{S}\) & \(1.75\,\mathrm{kOe}\) \\ Effective spin & \(S\) & \(14.3\) \\ Exchange stiffness & \(D_{\mathrm{ex}}\) & \(0.64\times 10^{-20}\,\mathrm{eV}\,\mathrm{m}^{2}\) \\ Gilbert damping parameter & \(\alpha\) & \(1\times 10^{-3}\) \\ \end{tabular} \end{table} Table 1: The material parameters used in the micromagnetic simulations. Figure 2: The total nonlinear magnon interaction energy, Eq. (7), as a function of the PMA strength. \(N\) and \(N_{c}\) are the total number of spins and condensate magnons, respectively. \(K_{\mathrm{an}}^{c_{1}}\) represents the critical value of the PMA at which the sign of nonlinear interactions is changed. On the other hand, \(K_{\mathrm{an}}^{c_{2}}\) corresponds to the critical value of PMA at which the in-plane magnetic ground state becomes unstable. We set \(L_{x}=50\,\mathrm{nm}\) and \(H_{0}=1\,\mathrm{kOe}\). \(K_{\mathrm{an}}^{s_{\mathrm{in}}}=0.5\,\mathrm{\mu eV}\) denotes the PMA used in our micromagnetic simulations. First, we present the phase diagram for magnon condensation in YIG, in the absence of PMA, in Fig. (a)a. The thinnest films are expected to have a symmetric distribution of magnon condensation between the two minima. This phase diagram is in agreement with previous studies [27, 45]. Next, we add a PMA, with strength \(K_{\text{an}}=0.5\,\mu\text{eV}\), and plot the phase diagram of the magnon condensate in Fig. (b)b for different thicknesses. For the selected material parameters, PMA tends to push the magnon condensate towards a more asymmetric population distribution between the two magnon band minima. Since both minima are degenerate thus there is an oscillation of magnon condensate between these two minima. The asymmetry of condensate magnon populations agrees with our previous analysis of interaction amplitudes. Within out parameters for thickness and PMA strength, the intraband interaction \(A\) is attractive, while the interband interactions are still repulsive. This phase diagram shows that in the presence of a PMA, condensate magnon can still be a metastable state. In addition, as we discussed earlier, a PMA increases the band depth and reduces the curvature of noninteracting magnon dispersion, see Fig. 1, which leads to an enhancement of the condensate magnon lifetime. Thus, we expect that introducing a small PMA into a thin film of YIG facilitates the magnon condensation process. _Micromagnetic simulation of magnon condensate--_. To validate our theoretical predictions and demonstrate the facilitation of condensate formation by including a PMA, we conducted a series of micromagnetic simulations using the LLG framework [46]. We simulate a ferromagnetic system where the magnons are excited by a spin-transfer torque. We perform calculations at zero temperature; thus, the system has no thermal magnons. Nonequilibrium magnons in the magnetic thin film are Figure 3: The phase diagram pf the condensate magnon in the absence (a) and presence (b) of PMA. We plot the magnon interaction energy \(\mathcal{V}_{4}/N_{c}^{2}\), Eq. (7), as a function of the film thickness \(L_{x}\) and external magnetic field strength \(H_{0}\). The dashed black lines indicate the boundaries between the different condensate phases, Eq. (8). We set \(K_{an}=0.5\,\mu\text{eV}\) in (b). Figure 4: Magnon distribution from micromagnetic simulations of a \(50\,\text{nm}\) thick YIG film at an external magnetic field strength \(H_{0}=1\,\text{kOe}\). In the absence of the PMA, \(K_{\text{an}}=0\) : (a) and (b) show magnon distributions of initial nonequilibrium excited magnons and final quasi-equilibrium magnon condensate steady state, respectively. In the presence of the PMA, \(K_{\text{an}}=0.5\,\text{eV}:\) (c) and (d) show magnon distributions of initial nonequilibrium excited magnons and final quasi-equilibrium magnon condensate steady state, respectively. The dotted line indicates the analytical dispersion relation of noninteracting magnons, Eq. (b)b. Because of nonlinear magnon interactions, there is a spectrum shift in the simulated magnon dispersion compared to the noninteracting result. Although the duration of magnon pumping by spin-transfer torque is the same in the absence or presence of the PMA, the critical torque amplitude is lower in the presence of PMA. excited by a spin-transfer torque mechanism through injection of a spin current on the entire sample surface [31]. The sign of the spin torque and its amplitude should be chosen so that the injected magnon population reaches the condensation critical density, we refer to the SM for simulation details [35]. With the spin-transfer torque mechanism, we expect nonequilibrium magnons with different wavevectors and frequencies to be excited. A fraction of these magnons will eventually be thermalized via repulsive nonlinear magnon-magnon interactions and form a steady and quasi-equilibrium state of condensate magnons at the bottom of magnon band dispersion, see Fig. 4. The numerical simulations confirm the supportive role of PMA in the condensation process. First, there is a reduction in the threshold of spin transfer torque necessary to inject the critical magnon density into the system enabling the system to attain said critical magnon density even at lower torque amplitudes. Second, final condensate magnons in the presence of the PMA are more localized around the band minima than in the case where PMA is absent. Simulations also indicate that PMA shifts the population of condensate magnons from a symmetric distribution between two band minima to an asymmetric distribution, Fig. 4. This is in agreement with the analytical phase diagram in Fig. 3b. _Summary and concluding remarks--_. The thermalization of nonequilibrium-magnons and the stability of the condensate require a repulsive sign for effective magnon-magnon interactions. This typically requires the presence of strong dipolar interactions. The presence of PMA is expected to counteract dipolar interactions. We show that even at intermediate strengths of the PMA field, the magnon interactions are still repulsive, and the magnon condensate can be created as a metastable state. We note that the anisotropy increases the band depth and curvature of the magnon dispersion. These adjustments to the spectra shape are expected to benefit the condensate formation. From the calculations of effective magnon-magnon interactions at the band minima, we present a classification diagram predicting whether the relative number of condensate magnons in the two degenerate minima might be symmetric. The presence of PMA strength, in a certain range, will tend to push the condensate toward a more uneven population distribution between two degenerate band minima. Micromagnetic simulations, within the LLG framework, confirms our analytical results and analyses. ## Acknowledgements The authors thank Anne Louise Kristoffersen for helpful discussions. We acknowledge financial support from the Research Council of Norway through its Centers of Excellence funding scheme, project number 262633, "QuSpin". A. Q. was supported by the Norwegian Financial Mechanism Project No. 2019/34/H/ST/00515, "2Dtronics".
2309.04444
A Generalized Stopping Criterion for Real-Time MPC with Guaranteed Stability
Most of the real-time implementations of the stabilizing optimal control actions suffer from the necessity to provide high computational effort. This paper presents a cutting-edge approach for real-time evaluation of linear-quadratic model predictive control (MPC) that employs a novel generalized stopping criterion, achieving asymptotic stability in the presence of input constraints. The proposed method evaluates a fixed number of iterations independent of the initial condition, eliminating the necessity for computationally expensive methods. We demonstrate the effectiveness of the introduced technique by its implementation of two widely-used first-order optimization methods: the projected gradient descent method (PGDM) and the alternating directions method of multipliers (ADMM). The numerical simulation confirmed a significantly reduced number of iterations, resulting in suboptimality rates of less than 2\,\%, while the effort reductions exceeded 80\,\%. These results nominate the proposed criterion for an efficient real-time implementation method of MPC controllers.
Kristína Fedorová, Yuning Jiang, Juraj Oravec, Colin N. Jones, Michal Kvasnica
2023-09-08T17:09:26Z
http://arxiv.org/abs/2309.04444v1
# A Generalized Stopping Criterion for Real-Time MPC ###### Abstract Most of the real-time implementations of the stabilizing optimal control actions suffer from the necessity to provide high computational effort. This paper presents a cutting-edge approach for real-time evaluation of linear-quadratic model predictive control (MPC) that employs a novel generalized stopping criterion, achieving asymptotic stability in the presence of input constraints. The proposed method evaluates a fixed number of iterations independent of the initial condition, eliminating the necessity for computationally expensive methods. We demonstrate the effectiveness of the introduced technique by its implementation of two widely-used first-order optimization methods; the projected gradient descent method (pGDM) and the alternating directions method of multipliers (ADMM). The numerical simulation confirmed a significantly reduced number of iterations, resulting in sub-optimality rates of less than 2 %, while the effort reductions exceeded 80 %. These results nominate the proposed criterion for an efficient real-time implementation method of MPC controllers. ## I Introduction Model Predictive Control (MPC) is an advanced and widely used control strategy that can effectively address many complex control problems in various fields, including process control [1], automotive control [2], and robotics [3]. The MPC framework operates under the paradigm of moving horizon control strategies and is executed at every control step to account for the current state measurement [4]. It builds up a mathematical model of the system to predict its behavior over a future time horizon. Then, it generates an optimal control action by solving an optimization problem subject to constraints. Therefore, MPC can handle complex system dynamics and consider input and output constraints. In practice, deploying MPC into real-time scenarios requires an efficient and reliable optimization approach to deal with the closed-loop optimal control problem [1]. To this end, two conventional approaches are used to implement the MPC control policy, parametric (explicit) MPC [5] and implicit (non-explicit) MPC [4]. The real-time implementation of explicit MPC [5] benefits from efficient division-free computation, yielding an optimal control law in the form of a piecewise affine (PWA) function for _a priori_ performance certification. Despite successful implementations, see [6], and the references therein, the memory limitations and hardly-tractable construction of explicit MPC lead to the use of implicit MPC as an alternative. Implicit MPC employs iterative methods like active set [7], interior-point [8], and first-order methods [9, 10] for online quadratic programming. Active-set methods are quick for small-medium problems but lack scalability and robustness; interior-point methods suit large problems and robustness but struggle with closed-loop warm-starting. First-order methods [11] can be easily scaled up, while its slow convergence limits its application in practice. Fortunately, their iterations are computationally cheap, and their runtimes are contingent upon the chosen stopping criterion. As a result, one can stop the iterations in advance to design a suboptimal real-time MPC scheme. Numerous publications in this field have already been presented, where the authors have focused on establishing stopping criteria for specific optimization algorithms, ensuring asymptotic stability of suboptimal MPC. The authors of [12] introduced a stopping criterion for an interior-point algorithm called the method of centers (MC). The work [13] gained a stopping criterion formula for the dual decomposition (DD) algorithm. In [14], authors have derived a criterion for the fixed-point iteration (FPI) algorithm. The authors of [15] have presented a formula determining the iteration count for the dual accelerated gradient projection algorithm (DAGPM). The paper [16] provides a formula for projected gradient (PGM) and accelerated projected gradient method (APGM). Each of the above-mentioned papers defines a different type of stopping criterion while considering only input constraints in MPC (see Table I). The stopping criterion that guarantees asymptotical stability in the presence of state constraints can be found in [17] for the projected gradient method (PGM) and in [18] for the fast alternating minimization algorithm (FAMA) through invariant set derivation, tightening techniques and additional assumptions. To the author's knowledge, a universal stopping criterion has not yet been published to apply to various optimization algorithms. In this paper, we formulate a generalized stopping criterion that preserves the asymptotic stability of a closed-loop system while achieving a comparable control performance to the optimal solution with a significant decrease in the number of iterations. The main contribution of this work is twofold, (i) we propose a novel generalized stopping criterion represent ing a maximum finite number of iterations for any first-order optimization method applied to solve the MPC problem. We prove the guarantee of the asymptotic stability of the closed-loop system using a generalized stopping criterion under mild assumptions at the cost of suboptimal control performance; (ii) we adopt the generalized stopping criterion to two widely used first-order optimization algorithms solving the MPC problems (PGDM, ADMM). The numerical case study of the well-known double-integrator benchmark is used to analyze the performance loss associated with a reduced number of iterations, accelerating the real-time evaluation. _Notation:_ We denote the set of real numbers by \(\mathbb{R}\), the set of \(n\)-dimensional real-valued vectors by \(\mathbb{R}^{n}\), and the set of \(n\times m\)-dimensional real-valued matrices by \(\mathbb{R}^{n\times m}\). Moreover, we denote the subspace of symmetric matrices in \(\mathbb{R}^{n\times n}\) by \(\mathbb{S}^{n}\) and the cone of positive (semi-)definite matrices by \(\mathbb{S}^{n}_{++}(\mathbb{S}^{n}_{+})\). For real-valued vector \(x\) and the symmetric positive definite matrix \(Q\), \(\|x\|_{Q}^{2}:=x^{\top}Qx\). ## II Preliminaries Consider a linear time-invariant (LTI) system in a discrete-time domain having a form \[x(t+T_{\mathrm{s}})=Ax(t)+Bu(t), \tag{1}\] where \(x\in\mathbb{R}^{n_{\mathrm{s}}}\) is a system state vector, \(u\in\mathbb{R}^{n_{\mathrm{s}}}\) is a vector of control actions, \(A\in\mathbb{R}^{n_{\mathrm{s}}\times n_{\mathrm{s}}}\) is system matrix, \(B\in\mathbb{R}^{n_{\mathrm{s}}\times n_{\mathrm{s}}}\) is input matrix, and \(T_{\mathrm{s}}\) is the sampling time. The corresponding linear-quadratic MPC problem is given by \[V(x_{0}) =\min_{x,u}\,\|x_{N}\|_{P}^{2}+\sum_{k=0}^{N-1}\left(\|x_{k}\|_{Q }^{2}+\|u_{k}\|_{R}^{2}\right) \tag{2}\] \[\text{s.t.}\,\,\begin{cases}x_{k+1}=Ax_{k}+Bu_{k},\ u_{k}\in \mathbb{U}_{k},\\ \quad\forall k\in\{0,\dots,N-1\}\\ x_{0}=x(t),\end{cases}\] where the decision variables \(x=[x_{0}^{\top},\dots,x_{N}^{\top}]^{\top}\in\mathbb{R}^{Nn_{\mathrm{s}}}\) and \(u=[u_{0}^{\top},\dots,u_{N-1}^{\top}]^{\top}\in\mathbb{R}^{Nn_{\mathrm{s}}}\) are the sequences of the predicted system states and control actions, respectively, with \(x(t)\in\mathbb{R}^{n_{\mathrm{s}}}\) as a state measurement. In (2), \(V\) denotes the minimized value function, \(\mathbb{U}_{k}\subset\mathbb{R}^{n_{\mathrm{u}}}\) is the set of the constrained control inputs, the matrices \(Q,P\in\mathbb{R}^{n_{\mathrm{s}}\times n_{\mathrm{s}}}\), \(R\in\mathbb{R}^{n_{\mathrm{s}}\times n_{\mathrm{s}}}\) are given tuning parameters, and \(N\) is a finite prediction horizon. **Assumption 1**: _We assume that for MPC design problem in (2) hold:_ * _the matrix pair_ \((A,B)\) _is controllable,_ * _the penalty factors_ \(Q\)_,_ \(P\)_, and_ \(R\) _are symmetric positive definite matrices,_ * _terminal penalty_ \(P\) _is computed as a solution of the matrix Riccati equation, i.e.,_ \(P=A^{\top}PA-(A^{\top}PB)(R+B^{\top}PB)^{-1}+(B^{\top}PA)+Q\)_,_ * _the sets_ \(\mathbb{U}_{k}\) _of control inputs are convex and closed, containing origin in their strict interiors._ Assumption 1 enforces the strong convexity of the MPC design problem in (2), leading to a unique optimal solution. Moreover, under Assumption 1, the feasible solution of the MPC design problem in (2) leads to the asymptotic stability of the closed-loop LTI system in (1). As the state constraints are not considered in (2), the recursive feasibility is satisfied by design. The MPC design problem in (2) has the form of the optimization problem of the quadratic programming (QP) in the general form \[V(x_{0}) =\min_{u}\,J(u;x_{0})=\frac{1}{2}\begin{bmatrix}u\\ x_{0}\end{bmatrix}^{\top}\begin{bmatrix}H&S\\ S^{\top}&D\end{bmatrix}\begin{bmatrix}u\\ x_{0}\end{bmatrix} \tag{3}\] \[\text{s.t.}\,\,\,u\in\mathbb{U}:=\mathbb{U}_{0}\times\dots\times \mathbb{U}_{N-1},\] where \(\mathbb{U}\subset\mathbb{R}^{Nn_{\mathrm{u}}}\) is the set of the constrained sequence of the control inputs, \(H\in\mathbb{S}^{Nn_{\mathrm{u}}}_{++}\) is a symmetric positive definite matrix defined as \[H=\underbrace{\begin{bmatrix}R&&\\ &\ddots&\\ &&R\end{bmatrix}}_{\widehat{R}}+\begin{bmatrix}\Phi_{1}\\ \vdots\\ \Phi_{N}\end{bmatrix}^{\top}\underbrace{\begin{bmatrix}Q&&\\ &\ddots&\\ &&P\end{bmatrix}}_{\widehat{Q}}\underbrace{\begin{bmatrix}\Phi_{1}\\ \vdots\\ \Phi_{N}\end{bmatrix}}_{\Phi} \tag{4}\] such that \(\Phi_{k}=[A^{k-1}B,\cdots,AB,B,0,...,0]\in\mathbb{R}^{n_{\mathrm{s}}\times Nn_{ \mathrm{u}}}\). Then, matrices \(S\in\mathbb{R}^{n_{\mathrm{s}}\times Nn_{\mathrm{u}}}\) and \(D\in\mathbb{R}^{n_{\mathrm{s}}\times n_{\mathrm{s}}}\) are constructed as \(S=\Psi^{\top}\widetilde{Q}\Phi\) and \(D=\Psi^{\top}\widetilde{Q}\Psi\) for \(\Psi^{\top}=[A,\dots,A^{N}]^{\top}\). The main idea of MPC design in receding horizon policy is to solve Problem (3) within each sampling time to determine an optimal sequence of control actions \(u^{\star}\) for a given initial condition \(x_{0}\), and then, apply the first input \(u^{\star}_{0}\) to the controlled plant. **Assumption 2**: _For the MPC problem in (2), the set of feasible initial conditions is (sub)set of the corresponding region of attraction. Technically, the terminal penalty in (2) is determined to satisfy Theorem 1 in [19]._ Solving problem (3) to the optimal solution under Assumptions 1, 2 leads to the Lyapunov descent [4] \[V(x_{1})\leq V(x_{0})-\|x_{0}\|_{Q}^{2}, \tag{5}\] where \(x_{1}=Ax_{0}+Bu^{\star}_{0}\), ensuring the asymptotic stability for closed-loop systems under the receding horizon MPC control policy. Note the solution of QP in (3) leads to the \(u^{\star}_{0}=f(x^{\star}_{0})\), where \(f:\mathbb{U}^{n_{\mathrm{u}}}\to\mathbb{R}^{n_{\mathrm{s}}}\) has the form of the piecewise affine (PWA) function, see [20]. Dealing with (3), using an online solver to achieve the optimal solution \(u^{\star}\) can be computationally intractable within the given sampling time. On the other hand, the real-time \begin{table} \begin{tabular}{|c c c c|} \hline Reference & \begin{tabular}{c} Constraints \\ *public solution \\ \end{tabular} & Fixed no. iterations & Algorithm \\ \hline \hline [12] & input & ✓ & MC \\ [13] & input & – & DD \\ [14] & input & ✓ & FPI \\ [15] & input* & ✓ & DAGPM \\ [16] & input, ✓ & PGM, APGM \\ [17] & input, state & ✓ & PGM \\ [18] & input, state & ✓ & FAMA \\ \hline \end{tabular} \end{table} TABLE I: Selective summary of stopping criterion for suboptimal MPC. implementation of iterative optimization procedures yields suboptimal solutions, as we need to stop the algorithm after reaching a certain number of iterations. Thereafter, (5) cannot be applied to ensure asymptotic stability in general. In the following section, we will propose a generalized stopping criterion with a fixed number of iterations for an arbitrary linearly convergent optimization algorithm to solve (3) such that an \(\epsilon\)-suboptimal solution results in an asymptotic stability guarantee in (5). ## III Generalized Stopping Criterion We consider solving the MPC problem (3) by using the real-time implementation of the first-order optimization algorithms online. This yields a sequence of suboptimal solutions by reaching a predefined finite number of iterations \(m\) during every sampling period. Such a suboptimal solution is defined as \(u_{0}^{m}\) at the current time instant and is applied into the closed-loop LTI system \[x_{0}^{+}=Ax_{0}+Bu_{0}^{m}. \tag{6}\] To guarantee the asymptotic stability of the real-time MPC, we need to augment (5) by term \(V(x_{0}^{+})\) into the form \[V(x_{0}^{+})\leq V(x_{0})-\left(\|x_{0}\|_{Q}^{2}-V(x_{0}^{+})+V(x_{1})\right). \tag{7}\] Enforcing (7) to hold, we need the following property of \(V(\cdot)\) representing Lipschitz continuity property \[|V(\widetilde{x}_{1})-V(\widetilde{x}_{2})|\leq\eta_{1}\|\widetilde{x}_{1}- \widetilde{x}_{2}\|_{2}+\frac{\eta_{2}}{2}\|\widetilde{x}_{1}-\widetilde{x}_ {2}\|_{2}^{2} \tag{8}\] providing for the properly selected real-valued constants \(\eta_{1}\geq 0\) and \(\eta_{2}>0\). Note, (8) is satisfied by design, if Assumption 1 holds. **Proposition 1** ([21]): _In the origin, such a local neighborhood \(\mathcal{B}_{r}\) with radius \(r\) exists, that if \(\widetilde{x}_{1},\widetilde{x}_{2}\in\mathcal{B}_{r}\) holds, then no constraints are active at the solution of (3)._ Proposition 1 implies that for any \(x\in\mathcal{B}_{r}\), the value function is reduced to \(V(x)=x^{\top}Px\) and \(\eta_{1}=0\) in (8) if \(\widetilde{x}_{1},\widetilde{x}_{2}\in\mathcal{B}_{r}\), i.e., holds \[|V(\widetilde{x}_{1})-V(\widetilde{x}_{2})|\leq\frac{\eta_{2}}{2}\|\widetilde {x}_{1}-\widetilde{x}_{2}\|_{2}^{2}. \tag{9}\] Next, we adjust (8) to express \(|V(x_{0}^{+})-V(x_{1})|\) formula complemented with applying the state update resulting in \[|V(x_{0}^{+})-V(x_{1})|\leq \eta_{1}\|B(u_{0}^{m}-u_{0}^{\star})\|_{2} \tag{10}\] \[+\eta_{2}\|B(u_{0}^{m}-u_{0}^{\star})\|_{2}^{2}.\] Substituting \(u_{0}^{m}=\Gamma u^{m}\) and \(u_{0}^{\star}=\Gamma u^{\star}\) for \(\Gamma=[I\;0\;...\;0]\in\mathbb{R}^{n_{u}\times Nn_{u}}\), we rewrite \[B(u_{0}^{m}-u_{0}^{\star})=B\Gamma(u^{m}-u^{\star}) \tag{11}\] in (10) such that holds \[|V(x_{0}^{+})-V(x_{1})|\leq \eta_{1}\|B\Gamma(u^{m}-u^{\star})\|_{2} \tag{12}\] \[+\eta_{2}\|B\Gamma(u^{m}-u^{\star})\|_{2}^{2}.\] If we take the first-order optimization algorithm with a linear convergence rate, then the following applies \[\|u^{m+1}-u^{\star}\|_{2}\leq\kappa\|u^{m}-u^{\star}\|_{2}, \tag{13}\] where \(\kappa<1\) represents convergence factor, \(u^{m}\) stands for QP solution at \(m\)-th iteration of algorithm, and \(u^{\star}\) is the optimal solution of QP in (3). Consequently, for a given \(m\) iterations, we have \[\|u^{m}-u^{\star}\|_{2}\leq\kappa^{m}\|u^{0}-u^{\star}\|_{2}. \tag{14}\] Specifically, under the assumption of \(Q\in\mathbb{S}_{+}^{n_{u}}\), \(R\in\mathbb{S}_{++}^{n_{u}}\), and \(\mathbb{U}\) is a compact convex set, we have two widely-used first-order method algorithms: (i) the projected gradient descent method (PGDM) and (ii) the alternating direction method of multipliers (ADMM), achieving linear convergence. **Example 1** (Projected Gradient Descent Method): _The projected gradient descent method is an extension of the original gradient descent method by including the constraints through projection into the constraint set \(\mathbb{U}\). Applying the PGDM algorithm to the MPC design problem in the form of QP (3) results in an iteration_ \[u^{m+1}:=\;\operatorname{Proj}_{\mathbb{U}}(u^{m}-\alpha\nabla J(u^{m},x_{0})), \tag{15}\] _where \(\alpha>0\) is the step size and \(u^{m}\) is the initial guess at \(m\)-th iteration. We consider \(J(u,x_{0})\) is \(\mu\)-strongly convex and \(L\)-smooth, such that \(\mu\) and \(L\) are computed by evaluating the eigenvalue of \(H\) in (3). If we determine the step size \(\alpha=\frac{1}{L}\), then we have linear convergence of PGDM given by_ \[\kappa=1-\frac{\mu}{L} \tag{16}\] _such that \(\kappa<1\), see [22]._ **Example 2** (Alternating Direction Method of Multipliers): _The alternating direction method of multipliers is an algorithm described in more detail in [23]. Generally, the algorithm distributes the original optimization problem into smaller-scaled problems that are solved quickly. Applying ADMM to QP problem in (3) leads to following iterations_ \[u^{m+1}= \min_{u}\;J(u;x_{0})+u^{\top}\lambda^{m}+\frac{\rho}{2}\|u-v^{m} \|_{2}^{2}, \tag{17a}\] \[v^{m+1}= \min_{v\in\mathbb{U}}\;-v^{\top}\lambda^{m}+\frac{\rho}{2}\|u^{m+ 1}-v\|_{2}^{2},\] (17b) \[\lambda^{m+1}= \lambda^{m}+\rho(u^{m+1}-v^{m+1}), \tag{17c}\] _for some initial guesses of global coordination variable \(v^{m}\in\mathbb{R}^{Nn_{u}}\), dual variable \(\lambda^{m}\in\mathbb{R}^{Nn_{u}}\), and tuning parameter \(\rho>0\). The evaluation of the linear convergence rate of ADMM for QPs refers to Section IV of [24], resulting in_ \[\kappa=\frac{1}{2}\|2M-I\|, \tag{18}\] _where the matrix \(M\) depends on the formulation of the MPC problem in (3) and has the form_ \[M=\widetilde{G}-\widetilde{G}(I+\widetilde{G})^{-1}\widetilde{G} \tag{19}\] _for \(\widetilde{G}=\rho\,GH^{-1}G^{\top}\). The matrix \(G\in\mathbb{R}^{2Nn_{u}\times Nn_{u}}\) is determined by the matrix representation \(Gu\leq w\) of input constraints from MPC problem in (3), where vector \(w\in\mathbb{R}^{2Nn_{u}}\)._ Then, combining (12) with (14) for Examples 1, 2 yields inequality \[|V(x_{0}^{+})-V(x_{1})|\leq \bar{\eta}_{1}\|u^{m}-u^{\star}\|_{2}+\bar{\eta}_{2}\|u^{m}-u^{ \star}\|_{2}^{2} \tag{20}\] \[\leq \bar{\eta}_{1}\kappa^{m}\|u^{0}-u^{\star}\|_{2}+\bar{\eta}_{2} \kappa^{2m}\|u^{0}-u^{\star}\|_{2}^{2}\] for the real-valued constants \(\bar{\eta}_{1}=\eta_{1}\sqrt{\nu_{\max}(\Gamma^{\top}B^{\top}B\Gamma)}\) and \(\bar{\eta}_{2}=\frac{\eta_{2}}{2}\nu_{\max}(\Gamma^{\top}B^{\top}B\Gamma)\). Here, \(\nu_{\max}(\cdot)\) defines the maximal eigenvalue for a given matrix. **Assumption 3**: _We have the real-valued constant \(\gamma>0\) that for any feasible solution \(x_{0}\) of (3) satisfies_ \[\|u^{\star}\|_{2}\leq\gamma\|x_{0}\|_{Q}. \tag{21}\] Since \(u^{\star}\) is a piecewise affine function of \(x_{0}\), the constant \(\gamma\) in (21) can be determined offline, i.e., before running the real-time MPC control (see e.g. [17]). Furthermore, the initialization also satisfies \(\|u^{0}\|_{2}\leq\gamma\|x_{0}\|_{Q}\) enforced by re-scale in the algorithm, if necessary. Then, (20) reads \[\begin{split}|V(x_{0}^{+})-V(x_{1})|&\leq 2\bar{\eta}_{1} \gamma\kappa^{m}\|x_{0}\|_{Q}\\ &\quad+2\bar{\eta}_{2}\gamma^{2}\kappa^{2m}\|x_{0}\|_{Q}^{2}.\end{split} \tag{22}\] **Assumption 4**: _We assume that if we have non-empty convex set \(\mathbb{Q}:=\{x_{0}\in\mathbb{R}^{n_{\star}}\mid\|x_{0}\|_{Q}\leq 1\}\), then \(\mathbb{Q}\subseteq\mathcal{B}_{r}\) holds._ Note that Assumption 4 can be enforced by adjusting the scaling matrices \(Q\) and \(R\) of MPC problem (2) as shown in Figure 1. Consequently, a finite integer-valued parameter \(\overline{m}\) exists such that the inequality (22) is satisfied. Finally, this finite number of iterations \(\overline{m}\) guarantees the asymptotic stability of the real-time implementation of the MPC designed by (2). **Theorem 1** (Generalized stopping criterion): _Let Assumptions 1-4 hold. If constant \(\overline{m}\in\mathbb{N}\) satisfies_ \[\overline{m}>\log\left(2\bar{\eta}_{1}\gamma+2\bar{\eta}_{2}\gamma^{2}\right)/ \log\left(1/\kappa\right), \tag{23}\] _then the control action \(u_{0}:=\Gamma u^{\overline{m}}\) implemented to the LTI system (1) ensures the asymptotic stability of the MPC in (3)._ The proof is established in two steps. First, if \(\|x_{0}\|_{Q}>1\) holds, the upper bound on (22) is given by \[|V(x_{0}^{+})-V(x_{1})|\leq\underbrace{\left(2\bar{\eta}_{1}\gamma+2\bar{\eta} _{2}\gamma^{2}\right)\kappa^{\overline{m}}}_{\beta}\|x_{0}\|_{Q}^{2}, \tag{24}\] for \(\|x_{0}\|_{Q}^{2}\geq\|x_{0}\|_{Q}\) and for \(\kappa<1\) leading to \(\kappa^{\overline{m}}>\kappa^{2\overline{m}}\). Next, by combining (7) with (24), we have \[V(x_{0}^{+})\leq V(x_{0})-\left((1-\beta)\|x_{0}\|_{Q}^{2}\right), \tag{25}\] where \((1-\beta)>0\) is sufficient to achieve asymptotic stability. Therefore, the minimum number of iterations \(\overline{m}\) is determined from following inequality \[\beta=\left(2\bar{\eta}_{1}\gamma+2\bar{\eta}_{2}\gamma^{2}\right)\kappa^{ \overline{m}}<1, \tag{26}\] that straightforwardly implies (23) to hold. Secondly, if \(\|x_{0}\|_{Q}\leq 1\), using Proposition 1 and Assumption 4, we have (22) in form \[|V(x_{0}^{+})-V(x_{1})|\leq 2\bar{\eta}_{2}\gamma^{2}\kappa^{2\overline{m}}\|x_ {0}\|_{Q}^{2}\leq\beta\|x_{0}\|_{Q}^{2}, \tag{27}\] resulting into \[\overline{m}>\frac{\log(2\bar{\eta}_{2}\gamma^{2})}{2\log(1/\kappa)}. \tag{28}\] Consequently, the following inequality holds \[\overline{m}>\frac{\log\left(2\bar{\eta}_{1}\gamma+2\bar{\eta}_{2}\gamma^{2} \right)}{\log\left(1/\kappa\right)}>\frac{\log(2\bar{\eta}_{2}\gamma^{2})}{2 \log(1/\kappa)}, \tag{29}\] that concludes the proof. Even though the algorithm to solve MPC problem (3) is stopped after \(\overline{m}\) iterations determined by the generalized criterion in (23), enforcing the asymptotic stability by this approach leads to the conservative control performance. The performance loss originates in (24) representing the upper bound on the original requirement formulated in (22), i.e., represents just the necessary condition. Therefore, there may exist fewer iterations guaranteeing asymptotic stability of the closed-loop LTI system in (1) under receding horizon MPC control policy. ## IV Numerical Case Study To analyze the properties of the proposed stopping criterion, we adopted the well-known benchmark system of the double integrator system, which has the system matrices \[A=\begin{bmatrix}1&1\\ 0&0.5\end{bmatrix},\ \ B=\begin{bmatrix}0.5\\ 1\end{bmatrix},\] and the model predictive controller in (3) is designed with weight matrices \[Q=\begin{bmatrix}1&0\\ 0&1\end{bmatrix},\ \ P=\begin{bmatrix}2.367&1.118\\ 1.118&2.588\end{bmatrix},\ \ R=1,\] where \(P\) is computed as a solution to the matrix Riccati equation. Furthermore, we define the input constraints in (3) as \(\mathbb{U}=\{u\in\mathbb{R}\mid-1\leq u\leq 1\}\), and finite prediction horizon as \(N=10\). The simulations were executed in MATLAB. For solving the double integrator problem with ADMM-based MPC and nominal MPC, we incorporated the QUADPROG solver. Note that the PGDM method can be implemented without the need for any external optimization tools. Furthermore, we have designed the ADMM-based MPC algorithm in a way where local step (17a) was completely distributed in the time domain to \(N\) computing units. Both algorithms disposed of two stopping criteria: either satisfying the convergence condition or meeting a maximum number of iterations. The results for PGDM-based MPC were obtained using the following setup of the parameters \(L=3\,200\) and Fig. 1: Example of set \(\mathbb{Q}\) (blue) and LQR-based control invariant set (green) for two setups of matrix pairs \((Q,R)\) of MPC design problem (2): a) \(\mathbb{Q}\) is not the subset of the LQR-based control invariant set, b) LQR-based set contains set \(\mathbb{Q}\). \(2\) leading to \(\kappa_{\text{PGDM}}=0.9992\) based on (16). Then, the maximum number of iterations was calculated according to Theorem 1 by (23) as \(\overline{m}_{\text{PGDM}}=172\) using parameters \(\gamma=1\), \(\eta_{1}=0.4\), \(\eta_{2}=0.1\,\). The convergence condition was defined as \(\|\nabla J\|\leq\varepsilon\) with \(\varepsilon=10^{-3}\). The parameters for the ADMM-based MPC algorithm were set based on (18) as \(\kappa_{\text{ADMM}}=0.9980\) and formula from Section IV of [24] as \(\rho=3.1231\). The maximum number of iterations was determined according to Theorem 1 as \(\overline{m}_{\text{ADMM}}=14\) considering \(\gamma=1\), \(\eta_{1}=0.2\), \(\eta_{2}=0.3\). Furthermore, convergence condition was specified as \(\|\lambda^{l+1}-\lambda^{l}\|<\varepsilon\) for \(\varepsilon=10^{-3}\). The results of the numerical simulations of the closed-loop control for the proposed real-time approach are illustrated in Figure 2, where we can see the comparison of control profiles for nominal MPC, \(\overline{m}\)-bounded ADMM-based MPC and \(\overline{m}\)-bounded PGDM-based MPC. To validate their accuracy, the ADMM-based and PGDM-based MPC algorithms were also evaluated for a maximum number of iterations \(m=15\,000\) (we will address it as "unbounded"). However, these control performance results are not depicted in Figure 2 due to overlapping with the control profiles generated by the nominal MPC that serves as the reference profile without any performance loss. As shown in Figure 2, the ADMM-based and PGDM-based MPC approaches lead to suboptimal solutions for \(\overline{m}\). Nevertheless, each approach preserves the asymptotic stability and drives the system states into its origin. The last graph of Figure 2 shows the number of real-time iterations (RTI) per simulation step \(j\) of unbounded ADMM-based and unbounded PGDM-based MPC and with \(\overline{m}\)-bounded (suboptimal) approaches as defined above. As visualized, the number of iterations for unbounded PGDM-based MPC is extremely higher than for \(\overline{m}\)-bounded PGDM-based MPC. Numerically \(47\,794\) vs. \(3\,620\) iterations per visualized \(40\)-steps simulation resulting in \(92\,\)% decrease in the number of iterations. In comparison, the unbounded ADMM-based MPC executes \(466\) iterations altogether while \(\overline{m}\)-bounded ADMM-based MPC performs \(198\) iterations (\(67\,\)% decrease). To conclude, even a drastically lower number of iterations yields the desired result with stability guarantees at the cost of suboptimal performance. The stability validation of our proposed approach involves checking the inequality (22) at each simulation step. Figure 3 depicts real-time \(|V_{j}(x_{j}^{+})-V_{j}(x_{j})|\) alongside an upper bound derived as \(2\bar{\eta}_{1}\gamma\kappa^{m}\|x_{0}\|_{Q}+2\bar{\eta}_{2}\gamma^{2}\kappa ^{2m}\|x_{0}\|_{Q}^{2}\). The plot demonstrates that the real-time values of \(|V_{j}(x_{j}^{+})-V_{j}(x_{j})|\) remain under the upper bound in each step \(j\). A comparative summary is provided in Table II, which analyzes suboptimality rates and iteration counts for unbounded and \(\overline{m}\)-bounded methods. The investigation covers control performance and evaluated iterations using metrics including the average number of iterations per simulation step \(m_{\text{avg}}\), the maximum number of iterations per step \(m_{\text{max}}\), the average suboptimality rate per step \(\delta_{\text{avg}}\), and the maximum suboptimality rate per step \(\delta_{\text{max}}\). The analysis involves 201 initial conditions \(x_{0}\) representing distinct segments of the PWA control law. Suboptimality rates \(\delta_{j}(x_{0}(i))\) for step \(j\) and initial condition \(i\) are calculated as \(\delta_{j}(x_{0}(i))=\left(|V_{j}(x_{j}^{+})-V_{j}(x_{j})|\right)/\left(|V_{j+ 1}(x_{1})|\right)\). For the unbounded methods, both ADMM-based and PGDM-based MPCs exhibit zero average and maximum suboptimality rates due to \(\delta_{\text{avg}}<10^{-6}\) and \(\delta_{\text{max}}<10^{-6}\). Comparable suboptimality rates are observed for \(\overline{m}\)-bounded ADMM and PGDM methods. While ADMM has \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline Method & \(m_{\text{avg}}\) [–] & \(m_{\text{max}}\) [–] & \(\delta_{\text{avg}}\) [\%] & \(\delta_{\text{max}}\) [\%] \\ \hline \hline ADMM & 39 & 2 331 & 0 & 0 \\ ADMM(\(\overline{m}\)) & 6 & 14 & 0.7 & 0.9 \\ \hline PGDM & 1 873 & 9 786 & 0 & 0 \\ PGDM(\(\overline{m}\)) & 99 & 172 & 0.5 & 1.2 \\ \hline \end{tabular} \end{table} TABLE II: Comparison of suboptimality rate and a number of iterations for unbounded and \(\overline{m}\)-bounded methods. Fig. 2: Control performance of the nominal MPC (black), PGDM-based MPC (blue), and ADMM-based MPC (red) after \(\overline{m}\) iterations. The bottom graph depicts the number of real-time iterations (RTI) for ADMM and PGDM for unbounded stopping criterion (light blue, light red) and for predefined \(\overline{m}\) (dark blue, dark red) iterations. slightly higher suboptimality, it demonstrates lower variance than PGDM. Analysis of \(m_{\text{avg}}\) and \(m_{\text{max}}\) highlights that unbounded methods require substantially more iterations for optimal solutions, whereas \(\overline{m}\)-bounded techniques execute significantly fewer iterations (around 94% decrease for PGDM and 84% decrease for ADMM) while ensuring asymptotic stability. ## V Conclusions This paper has presented a novel method for the real-time evaluation of linear-quadratic MPC with input constraints, offering a generalized stopping criterion representing a fixed number of iterations for first-order optimization algorithms. The proposed method establishes the asymptotic stability of the real-time solution. This approach significantly reduces the maximum number of iterations required for real-time evaluation of MPC while preserving the acceptable suboptimality rates. The proposed method was analyzed using a benchmark double-integrator problem, demonstrating an average reduction in the number of iterations by up to 80 % leading to less than 2 % suboptimality rate for the worst-case control scenarios. The analyzed results indicate that using a fixed number of iterations as a generalized stopping criterion has the potential to significantly decrease the run times of real-time MPC for the systems with fast dynamics and/or in the framework of the remote battery-supplied control platforms. Our future work will be focused on the elimination of two major drawbacks of our approach: (i) parameters gained by exhaustive offline numerical computations, (ii) state constraints were not considered.
2309.13138
Bootstrap Percolation, Connectivity, and Graph Distance
Bootstrap Percolation is a process defined on a graph which begins with an initial set of infected vertices. In each subsequent round, an uninfected vertex becomes infected if it is adjacent to at least $r$ previously infected vertices. If an initially infected set of vertices, $A_0$, begins a process in which every vertex of the graph eventually becomes infected, then we say that $A_0$ percolates. In this paper we investigate bootstrap percolation as it relates to graph distance and connectivity. We find a sufficient condition for the existence of cardinality 2 percolating sets in diameter 2 graphs when $r = 2$. We also investigate connections between connectivity and bootstrap percolation and lower and upper bounds on the number of rounds to percolation in terms of invariants related to graph distance.
Hudson LaFayette, Rayan Ibrahim, Kevin McCall
2023-09-22T18:48:46Z
http://arxiv.org/abs/2309.13138v1
# Bootstrap percolation, connectivity, and graph distance ###### Abstract. Bootstrap Percolation is a process defined on a graph which begins with an initial set of infected vertices. In each subsequent round, an uninfected vertex becomes infected if it is adjacent to at least \(r\) previously infected vertices. If an initially infected set of vertices, \(A_{0}\), begins a process in which every vertex of the graph eventually becomes infected, then we say that \(A_{0}\) percolates. In this paper we investigate bootstrap percolation as it relates to graph distance and connectivity. We find a sufficient condition for the existence of cardinality \(2\) percolating sets in diameter \(2\) graphs when \(r=2\). We also investigate connections between connectivity and bootstrap percolation and lower and upper bounds on the number of rounds to percolation in terms of invariants related to graph distance. Key words and phrases:Bootstrap percolation, extremal graph theory, diameter, connectivity 2020 Mathematics Subject Classification: 05C12, 05C35, 05C40 ## 1. Introduction Bootstrap percolation is a process defined on a graph, \(G\). The process begins with an initial set of infected vertices \(A_{0}\subseteq V(G)\). In each subsequent round, an uninfected vertex, \(v\), becomes infected if \(v\) is adjacent to at least \(r\) previously infected vertices. Once infected, vertices remain infected. We use \(A_{t}\) to denote the set of all infected vertices as of round \(t\). Symbolically, \[A_{t}=A_{t-1}\cup\{v\in V(G):|N(v)\cap A_{t-1}|\geq r\}\] The parameter \(r\) is called the percolation threshold. If \(G\) is a finite graph, then after a finite number of rounds, either all vertices of \(G\) become infected or the infection stops at some proper subset of \(V(G)\). The set of infected vertices after the percolation process finishes is called the closure of \(A_{0}\), denoted \(\langle A_{0}\rangle\). If \(\langle A_{0}\rangle=V(G)\), then we say that \(A_{0}\) is contagious or \(A_{0}\) percolates. Bootstrap percolation was introduced by Chalupa et al. [10]. One model that has received much attention is when the vertices of \(A_{0}\) are selected randomly; each vertex is selected independently and every vertex of \(G\) has probability \(p\) of being initially selected. After the initial step, the infection proceeds deterministically. This model has been studied extensively, for example in [1, 6, 3, 4, 5, 16]. Another area of study is extremal problems. The minimum size of a percolating set in a graph \(G\) with percolation threshold \(r\) is denoted \(m(G,r)\). Observe that if \(|V(G)|\) is at least \(r\), then \(m(G,r)\geq r\). Freund et al. [13] showed that for a graph \(G\) of order \(n\), if \(\delta(G)\geq\frac{r-1}{r}n\) then \(m(G,r)=r\). Let \(\sigma_{2}(G)\) be the minimum sum of degrees over all pairs of non-adjacent vertices of \(G\). Freund et al. [13] proved that if \(G\) satisfies Ore's condition, i.e., \(\sigma_{2}(G)\geq n\), then \(m(G,2)=2\). Furthermore, they proved that both of these bounds are sharp. Gunderson [14] extended the first result by showing that if the order of \(G\) is sufficiently large, then the bound on the minimum degree can be weakened. Wesolek [23] extended Gunderson's result by proving a lower bound on the minimum degree sufficient to guarantee a percolating set of size \(\ell\geq r\). Dairyko et al. [12] extended Freund et al. [13]'s theorem on Ore's condition by characterizing the graphs for which \(\sigma_{2}(G)\geq n\) and \(\sigma_{2}(G)\geq n-1\) is required to guarantee \(m(G,2)=2\). For all other graphs, \(\sigma_{2}(G)\geq n-2\) is sufficient. Degree conditions on bootstrap percolation have also been studied in Reichman [22]. Bushaw et al. [9] investigated other conditions for which \(m(G,2)=2\). Another problem is investigating \(m(G,r)\) for particular classes of graphs. One class which has received significant attention is the \(d\)-dimensional lattice on \(n^{d}\) vertices, denoted \([n]^{d}\). This has been studied in [2, 4, 15, 19, 21]. In this paper, we continue the investigations of Bushaw et al. [9] by studying bootstrap percolation from the perspective other non-degree conditions. In particular, we focus on diameter. We begin with a proof of a conjecture from [9]. Suppose \(G\) is a connected graph of order at least \(3\) with at most \(2\) blocks. If \(G\) is diameter \(2\) and contains no induced cycle of length \(5\), then \(m(G,2)=2\). In section 3, we explore the consequences of percolating sets of size \(r\) on the connectivity of a graph. In section 4, we examine the minimum number of rounds to percolation given the size of the percolating set in relation to the diameter and radius of a graph. In section 5, we investigate the maximum number of rounds to percolation in terms of graph distance. The problem of the number of rounds to percolation has also been investigated in [7, 20]. We close with some open problems. ## 2. A sufficient condition for \(2\)-bootstrap percolation Before introducing the conjecture, we provide some background and definitions. If a graph \(G\) contains at least one pair vertices which percolate when \(r=2\), then we say that \(G\) is \(2\)-bootstrap good or \(2\)-BG. A _block_ of \(G\) is a maximally \(2\)-connected subgraph of \(G\). If \(B\) is a block of \(G\), then we use \(G[B]\) to denote the subgraph of \(G\) induced by \(B\). It is shown in Bushaw et al. [9] that a graph with more than two blocks cannot be \(2\)-BG. Since disconnected graphs of order more than two also cannot be \(2\)-BG, we only concern ourselves with connected graphs. Furthermore, since graphs of order less than two are trivially 2-BG, we only examine graphs of order three or more. Hence, we define the set \(\mathcal{G}\) as the collection of all connected graphs of order 3 or more with at most two blocks. A graph \(G\) has a _dominating vertex_ if \(G\) contains a vertex \(v\) adjacent to all other vertices of \(G\). A graph \(G\) is _locally connected_ if the open neighborhood of every vertex forms a connected graph. We present the following lemma: **Lemma 2.1**.: _If a graph \(G\) is 2-connected and has a dominating vertex, then \(G\) is locally connected._ Proof.: Let \(v\) be a dominating vertex of \(G\) and let \(u\) be some other vertex of \(G\). Since \(v\) is in the open neighborhood of \(u\), any two vertices in \(N(u)\) are joined by \(v\). Hence, \(N(u)\) is connected. As \(G\) is 2-connected, \(v\) cannot be a cut vertex. Hence, \((V(G)\setminus\{v\})=N(v)\) is also connected. The following are Theorem 2.16 and Conjecture 4.1 respectively in [9]. **Lemma 2.2**.: _If a graph \(G\in\mathcal{G}\) is locally connected, then it is 2-BG. Furthermore, if \(G\) has no leaf, then any pair of adjacent vertices will percolate in \(G\)._ **Conjecture 2.3**.: _If a graph in \(\mathcal{G}\) is perfect and its diameter is no more than 2 then the graph is 2-bootstrap good._ We present the following theorem, weakening the assumption that \(G\) is perfect. **Theorem 2.4**.: _If a graph \(G\in\mathcal{G}\) has diameter 2 and contains no induced cycle of length 5, then \(G\) is 2-bootstrap good._ We divide the proof into two cases. **Case 1**: \(G\) has 2 blocks. Proof.: In this case, we do not need the assumption that \(G\) contains no induced cycles of length 5 or more. Let \(v\) be a cut vertex of \(G\), and let \(B_{1}\) and \(B_{2}\) be the blocks of \(G\). Since \(G\) has diameter 2, \(v\) is dominating in \(G\). Hence, \(G[B_{1}]\) and \(G[B_{2}]\) are locally connected by Lemma 2.1. Pick \(w\in B_{1}\) and \(x\in B_{2}\) with \(\{w,x\}\) as the initial infected set, which then infect \(v\). Then, \(\{w,v\}\) percolates in \(B_{1}\) and \(\{v,x\}\) percolates in \(B_{2}\) by Lemma 2.2. So \(G\) is 2-BG, where any pair of vertices, with one vertex of the pair in \(B_{1}-v\) and the other in \(B_{2}-v\), percolates in \(G\). **Case 2**: \(G\) is 2-connected. Proof.: Assume that \(G\) is 2-connected, has diameter 2, and contains no induced cycles of length 5 or more. Suppose toward a contradiction that \(G\) is not 2-BG. Let \(H\) be a maximal 2-connected, 2-BG subgraph of \(G\). In other words, any subgraph of \(G\) containing (other than \(H\) itself) fails to be 2-connected or fails to be 2-BG. Observe that any vertex in \(V(G)-V(H)\) has at most one neighbor in \(H\). Since \(G\) is connected and \(H\) is a proper subgraph of \(G\), there is a vertex \(v\in V(G)-V(H)\) with exactly one neighbor, \(w\), in \(H\). **Claim 1**: \(w\) is adjacent to every vertex in \(H\). **Proof of Claim 1**: Suppose towards a contradiction that \(w\) is not adjacent to some vertex \(z\in V(H)\). Since \(G\) has diameter 2, there is some vertex \(y\in V(G)\) such that \(y\) is adjacent to both \(w\) and \(z\). Observe, since \(y\) is adjacent to \(w\) and \(z\), i.e. \(y\) has 2 neighbors in \(H\), it must be that \(y\in V(H)\). Since \(v\) is only adjacent to a single vertex in \(H\) and \(G\) is diameter 2, there must be some vertex, \(v^{\prime}\) outside of \(H\) such that \(v^{\prime}\) is adjacent to both \(v\) and \(z\) (see Figure 1). Recall that a vertex outside \(H\) can only be adjacent to a single vertex within \(H\). Hence, \(v^{\prime}\) cannot be adjacent to any vertex in \(H\) other than \(z\). The five vertices \(v,v^{\prime},w,y,z\) form an induced 5-cycle, contradicting our assumption that \(G\) has no induced 5-cycles. This proves our claim that \(w\) is adjacent to every vertex in \(H\). If \(w\) was the only vertex in \(H\) adjacent to vertices outside of \(H\), then \(w\) would be a cut vertex, contradicting the assumption that \(G\) is 2-connected. So there must be a vertex \(w^{\prime}\) in \(H\) with a neighbor \(v^{\prime}\) outside of \(H\). Note that \(v\neq v^{\prime}\), as \(v\) and \(v^{\prime}\) each have a unique neighbor in \(H\). We now have two cases. **Case 2a**: \(v\) is adjacent to \(v^{\prime}\). If so, then infect \(\{v,w^{\prime}\}\). These in turn infect \(v^{\prime}\) along with \(w\). By Lemma 2.2, \(\{w,w^{\prime}\}\) infects all of \(H\). But this means that \(H\cup\{v,v^{\prime}\}\) is a 2-connected, 2-BG subgraph of \(G\) containing \(H\), in contradiction to our earlier assumptions. **Case 2b**: \(v\) is not adjacent to \(v^{\prime}\). Since \(G\) has diameter 2, there must be a vertex \(v^{\prime\prime}\) which joins \(v\) and \(v^{\prime}\). This vertex cannot be in \(H\) because each of \(v,v^{\prime}\) is only adjacent to a single vertex in \(H\). Figure 1. Claim 1. We now have two possibilities: \(v^{\prime\prime}\) is adjacent exactly one of \(w\) or \(w^{\prime}\); or \(v^{\prime\prime}\) is adjacent to neither \(w\) nor \(w^{\prime\prime}\). If \(v^{\prime\prime}\) is adjacent to neither, then \(v,v^{\prime\prime},v^{\prime},w^{\prime},w\) form an induced \(5\)-cycle. If \(v^{\prime\prime}\) is adjacent to \(w\), then we can infect \(v^{\prime\prime},w^{\prime}\), which in turn infect \(v^{\prime},w\) and \(H^{\prime}=H\cup\{v^{\prime},v^{\prime\prime}\}\) forms a \(2\)-BG, \(2\)-connected subgraph containing \(H\). If \(v^{\prime\prime}\) is adjacent to \(w^{\prime}\), then the situation is similar except that \(H^{\prime}=H\cup\{v,v^{\prime\prime}\}\). This is shown in Figure 2. Cases 2a and 2b both lead to contradictions, so we conclude that there can be no such \(H\) and \(G\) must be \(2\)-BG. ## 3. Connectivity and Bootstrap Percolation Let \(r\in\mathbb{Z}^{+}\). Similar to the definition of \(2\)-BG, if a graph \(G\) contains at least one set of \(r\) vertices which percolate, then \(G\) is \(r\)-bootstrap good or \(r\)-BG. A graph \(G\) is _\(k\)-connected_ if it has at least \(k+1\) vertices and does not contain a cut set of size \(k-1\). Recall that a block is a maximal induced \(2\)-connected subgraph of \(G\). A graph is \(1\)-BG if and only if it is connected. In [9], the following result is Lemma 2.1: **Lemma 3.1**.: _If a graph is \(2\)-BG, then it has at most two blocks._ In this section, we seek to expand on the result in Lemma 3.1 by investigating the effect of percolating sets of size \(r\), where \(r\geq 3\), on the connectivity of graphs. This topic was investigated independently by Bushaw et al. [8], who showed that \(3\)-BG graphs have at most three leaf blocks (a block that is a leaf in a block-cut graph). A natural first question is, "what is the maximum number of blocks of an \(r\)-BG graph?" Before answering this question, we present the following lemma: **Lemma 3.2**.: _Let \(G\) be an \(r\)-BG graph with at least \(r+1\) vertices and \(A_{0}\) be a cardinality \(r\) percolating set of \(G\). If \(X\) is a cut set of \(G\) with \(|X|<r\) and \(K\) is the set of components of \(G-X\) which are not contained in \(A_{0}\), then \(|V(C)\cap A_{0}|\geq r-|X|\) for each \(C\in K\). Moreover, \(|K|\leq\lfloor r/(r-|X|)\rfloor\) and if \(|K|\geq 2\), then \(r/2\leq|X|\leq r-1\)._ Proof.: The first part of the second statement implies the second part of the second statement since \(|X|<r/2\) implies \(r-|X|>r/2\) and thus \(|K|\leq r/(r-|X|)<2\). Figure 2. Case 2a on the left and Case 2b on the right. Suppose \(C\) is a component of \(G-X\) and \(C\in K\). Since \(C\in K\), there is some vertex, \(v\), in \(C\) which is not initially infected. Without loss of generality, we may let \(v\) be the earliest infected vertex of \(C\) which is not initially infected (it is possible that there are multiple choices for \(v\)). Since \(v\) is the earliest infected vertex, \(v\) cannot be infected by other vertices of \(C\) and in fact can only be infected by vertices of \(A_{0}\) or \(X\), i.e., \(|N(v)\cap(X\cup A_{0})|\geq r\). Let \(i:=r-|X|\). Since \(N(v)\subseteq V(C)\cup X\) and \(|X|=r-i\) we must have \(|A_{0}\cap V(C)|\geq i\). No two components of \(G-X\) have any vertices in common, so \(|K|\cdot i\leq|A_{0}|=r\), which implies that \(K\leq r/i\). Throughout this section, we will use the notation from the above lemma: \(G\) is a graph, \(A_{0}\) is a percolating set of \(G\), and \(X\) is a cut set of \(G\). For simplicity, we will use the term component to refer to a subgraph of \(G\) induced by a component of \(G-X\). Observe that a cut set \(X\), when \(|X|<r\), separates any percolating set of size \(r\). If all vertices of \(A_{0}\) are in the same component of \(G-X\), then no other component of \(G-X\) can become infected. Likewise, no component can have zero vertices of \(A_{0}\), otherwise no vertices of the component would be able to become infected. Since each component must have at least one vertex of \(A_{0}\), we can have at most \(r\) components of \(G-X\). By Lemma 3.2, this can only occur when \(|X|=r-1\). In fact, this bound is sharp. Here is one family of graphs which attains the bound: let \(G\) be a graph with \(r\) disjoint nonempty complete subgraphs \(H_{1},H_{2},...,H_{r}\) and let \(X\) be a set of \(r-1\) vertices each adjacent to every vertex in every \(H_{i}\). Then, select one vertex from each \(H_{i}\) to be initially infected. These \(r\) vertices infect \(X\). Then each \(H_{i}\) is infected by \(X\) together with its single infected vertex. See Figure 3 for an example when \(r=3\). We require one more lemma before determining the maximum number of blocks in an \(r\)-BG graph. Figure 3. A 3-BG graph with 3 components and a cut set of size 2. The gray vertices are a percolating set. **Lemma 3.3**.: _Let \(G\) be an \(r\)-BG graph with at least \(r+1\) vertices and \(A_{0}\) be a cardinality \(r\) percolating set of \(G\). If \(X\) is a cut set of \(G\) with \(|X|<r\), then at least one vertex of \(X\) is adjacent to every vertex of \(A_{0}\)._ Proof.: Since \(|G|>r\) we have \(V(G)\not\subseteq A_{0}\). We further claim that \(X\not\subseteq A_{0}\). Since \(X\) is a cut set, \(G-X\) contains at least two components. In the proof of Lemma 3.2 it is shown that each component \(C\) of \(G-X\) that is not completely contained in \(A_{0}\) contains at least \(r-|X|\) vertices of \(A_{0}\). So \(X\subseteq A_{0}\) would imply that \(X\) together with such a \(C\) would contain all of \(A_{0}\) (a \(C\) must exist since \(|G|>r\)). But since each other component must have at least one vertex of \(A_{0}\), this is a contradiction. So \(X\not\subseteq A_{0}\). We consider two cases. Case 1: Every component of \(G-X\) is contained in \(A_{0}\). In this case, the only vertices which remain to be infected are the vertices of \(X\) which are not contained in \(A_{0}\). Since \(A_{0}\) is a percolating set, these vertices become infected at some point. Hence, at least one such vertex is infected in the second round, i.e., is adjacent to all vertices of \(A_{0}\). Case 2: Some component of \(G-X\) is not contained in \(A_{0}\). In this case uninfected vertices occur in \(X\) as well as in any component \(C\), where \(C\not\subseteq A_{0}\). No component contains every vertex of \(A_{0}\) and \(X\) is not a subset of \(A_{0}\), so after the initial round, the number of infected vertices in \(C\cup X\) is less than \(r\), for any such \(C\). Vertices of \(C-A_{0}\) can only become infected from vertices in \(C\) or \(X\), so before any such vertex can become infected, at least one vertex of \(X\) must be infected. Hence, at least one vertex of \(X\) is infected in the second round, i.e., is adjacent to all vertices of \(A_{0}\). From these two lemmas, we have the following result that generalizes Lemma 3.1: **Theorem 3.4**.: _Let \(r\geq 2\). If \(G\) is an \(r\)-BG graph with at least \(r+1\) vertices, then \(G\) contains at most \(r\) blocks. Moreover, \(r\) blocks is only achieved by \(G=K_{1,r}\) when \(r\geq 3\)._ Proof.: Note that blocks are separated by a cut vertex. Let \(X=\{v\}\) be a cut vertex of \(G\). By Lemma 3.3, \(v\) is adjacent to every vertex of \(A_{0}\). We claim that there cannot be more than a single cut vertex in an \(r\)-BG graph. Suppose for contradiction we have a second cut vertex \(u\). Each component of \(G-\{u\}\) must contain at least one vertex of \(A_{0}\). But then, \(u\) cannot be a cut vertex because these components are still connected by \(v\). Thus \(v\) is the only cut vertex in \(G\) and the number of blocks is exactly the number of components of \(G-X\). By Lemma 3.2, the largest number of components of \(G-X\) in general is \(r\). If \(r\geq 3\) Lemma 3.2 implies at most one component of \(G-X\) is not contained in \(A_{0}\) since \(|X|<r/2\). So the largest number of blocks only occurs when \(G=K_{1,r}\) and every leaf is contained in \(A_{0}\) Lemma 3.2 also allows us to analyze the structure of \(r\)-BG graphs with cut sets of size less than \(r\). As an example, we will examine \(3\)-BG graphs. We know that \(1\leq|X|\leq 2\) and components of \(G-X\) can either be contained in \(A_{0}\) or not. We also know from Lemma 3.2 that components not contained in \(A_{0}\) must contain at least \(3-|X|\) vertices of \(A_{0}\). Suppose \(|X|=1\). If every component is contained in \(A_{0}\), then we can have either \(2\) or \(3\) components. These possibilities are shown by the leftmost and middle graphs in Figure 4 (the gray vertices are the vertices of \(A_{0}\)). It is also possible that one component of \(G-X\) is not contained in \(A_{0}\). Since such a component must contain at least \(3-|X|=2\) vertices of \(A_{0}\), we can only have one such component and the other must be entirely contained within \(A_{0}\), i.e. a leaf. One example is shown by the rightmost graph in Figure 4. Suppose \(|X|=2\). When every component is contained within \(A_{0}\) then we have the same possibilities as before except vertices of \(A_{0}\) now must be adjacent to both vertices of \(X\). The leftmost and middle graphs of Figure 5 provide examples (it also possible that the vertices of \(X\) are adjacent). If some components of \(G-X\) are not contained in \(A_{0}\), then because \(|X|=2\), each such component needs to contain at least one vertex of \(A_{0}\). Hence we may form such a graph by replacing any of the single vertex components with a connected graph of order \(2\) or more. The rightmost graph of Figure 5 provides such an example, where every vertex of the \(K_{3}\) is joined to \(X\). Figure 3 provides an example where all three components are not subsets of \(A_{0}\). In addition to the structure of the components, we have the following result concerning the structure of cut sets in an \(r\)-BG graph with a cut set of size less than \(r\). **Theorem 3.5**.: _Let \(G\) be an \(r\)-BG graph with at least \(r+1\) vertices and \(A_{0}\) be a cardinality \(r\) percolating set of \(G\). If \(X\) is a cut set of \(G\) with \(|X|<r\), then there is no cut set \(Y\) where \(|Y|<r\) and \(Y\cap X=\emptyset\)._ Proof.: By Lemma 3.3, there is some vertex \(v\in X\) such that \(v\) is adjacent to every vertex of \(A_{0}\). Since \(X\cap Y=\emptyset\), it must be that \(X\) is contained within the components of \(G-Y\), thus \(v\) is in a component of \(G-Y\). Each component of \(G-Y\) must contain at least one Figure 4. Some cases when \(G\) is \(3\)-BG and \(G\) has a cut set of size \(1\). Vertices in the cut set are white, and vertices in \(A_{0}\) are gray. vertex from \(A_{0}\) since \(|Y|<r\). Since \(v\) is adjacent to all of \(A_{0}\) we have \(G-Y\) is connected, a contradiction. Another way to extend Lemma 3.1 is by generalizing the notion of a block: a \(k\)_-block_ of a graph \(G\) is a maximal induced subgraph of \(G\) that is \(k\)-connected. With this notion, a \(2\)-block is just an ordinary block. Matula [18] and Karpov [17] have studied \(k\)-blocks. In the case of an ordinary block, we are not concerned with the exact connectivity of the block, only that it is at least \(2\)-connected. We wish to be more precise with \(k\)-blocks. Matula refers to a \(k\)-block which is not contained in a \(k+1\)-block as a \(k\)-ultrablock, while Karpov simply defines a \(k\)-block as a maximal induced subgraph that is not contained in a \(k+1\)-block. In this section, for clarity, we will use Karpov's definition of a \(k\)-block. We ask, what is the greatest number of \(r\)-blocks contained in an \(r\)-BG graph with a cut set of size less than \(r\)? When \(r=2\), this is answered by Lemma 3.1, but for higher \(r\), we have the following lower bound: **Theorem 3.6**.: _Let \(G\) be an \(r\)-BG graph with at least \(r+1\) vertices. If \(X\) is a cut set of \(G\) with \(|X|<r\), then the maximum number of \(r\)-blocks that \(G\) contains is at least \(r(r-1)\)._ Proof.: We present a construction which contains \(r\) components of \(G-X\) and where the number of \(r\)-blocks in each component is \(r-1\). Let \(X\) be a cut set of \(G\) containing \(r-1\) vertices. Furthermore, let \(X\) form an independent set of \(G\). Construct each of the \(r\) component of \(G-X\) as follows: each component contains a copy of \(K_{r-1}\). Call this an axis. Join the axis to every vertex of \(X\) and also join an independent set of \(r-1\) vertices, \(S\), to every vertex of the axis. Join each of the \(r-1\) vertices in \(S\) to a distinct vertex of \(X\). The axis together with each vertex of \(S\) and its adjacent vertex of \(X\) forms a copy of \(K_{r+1}\). We now show that \(G\) is \(r\)-BG and that each copy of \(K_{r+1}\) is indeed an \(r\)-block. Take a distinct vertex from the axis of each component and from these vertices form \(A_{0}\). Since we have \(r\) components, this set of vertices then infects \(X\). The vertices of \(X\) together with Figure 5. Some cases when \(G\) is \(3\)-BG and \(G\) has a cut set of size \(2\). Vertices in the cut set are white, and vertices in \(A_{0}\) are gray. then infect the other \(r-2\) vertices of each axis. Lastly, \(X\) and the axes of each component infect the remaining vertices. Recall that an \(r\)-block is a maximal \(r\)-connected subgraph of \(G\) which is not contained in an \(r+1\)-connected subgraph of \(G\). A copy of \(K_{r+1}\) is indeed \(r\)-connected. Furthermore, if we expand beyond any copy of \(K_{r+1}\), the resulting subgraph is no longer \(r\)-connected. A \(K_{r+1}\) together with an additional vertex of \(X\) is disconneced by removing the \(r-1\) vertices of the axis, since \(X\) is an independent set and each vertex of \(S\) is adjacent only to a single vertex of \(X\). If we expand by including a second vertex of \(S\), this is also disconnected by removing the axis. Lastly, if we include multiple components of \(G-X\), these are disconneced by removing the \(r-1\) vertices of \(X\). Figure 6 contains an example of this construction when \(r=3\). The white vertices are the vertices of \(X\), the gray vertices are the vertices of the axes and the black vertices are the remaining vertices of \(G\). ## 4. Minimum number of rounds to percolation **Theorem 4.1**.: _Let \(G\) be a connected graph with diameter \(d\). Suppose \(G\) contains a set of vertices, \(A_{0}\), which percolates with threshold \(r\) in \(k\) rounds and \(|A_{0}|\leq 2r-1\). Furthermore, assume that every vertex in \(A_{0}\) infects some vertex in round 2, i.e., every vertex in \(A_{0}\) is adjacent to at least one vertex in round 2. Then \(k\geq\lceil d/2\rceil+1\) and this bound is sharp._ Proof.: When numbering the rounds, we refer to the initial round as round 1. Partition \(V(G)\) into sets \(S_{1},S_{2},...,S_{k}\), where \(S_{1}=A_{0}\) and for reach \(i\), \(S_{i}\) is the collection of vertices newly infected in round \(i\). Let \(p\) be a vertex infected in round \(q\), where \(q\neq 1\). Observe that \(p\) is adjacent to no more than \(r-1\) vertices in \(S_{1}\) through \(S_{q-2}\) (otherwise, \(p\) would have become Figure 6. Six 3-blocks in a 3-BG graph infected in some round from \(2\) to \(q-1\)). Since \(p\) is adjacent to at least \(r\) vertices in \(S_{1}\) through \(S_{q-1}\), then we know that \(p\) is adjacent to at least one vertex in \(S_{q-1}\). By iterating this reasoning, we can find a path from a vertex in any round to some other vertex in any previous round. Let \(u,v\) be two vertices in \(G\) where \(u\in S_{i}\) and \(v\in S_{j}\), where \(i,j\neq 1\). If \(j\geq i\), then by the above observation, we can form a path \(v,v_{j-1},...,v_{i}\), where the index on each vertex is the round in which it was newly infected. If \(v_{i}=u\), then we have a \(u-v\) path. If not, then we can continue our path starting with \(v\) and begin a new path starting with \(u\) as follows: \(v,v_{j-1},...,v_{i},v_{i-1},...,v_{2}\) and \(u,u_{i-1},...,u_{2}\). If in some round \(\ell\) we have \(v_{\ell}=u_{\ell}\), then we have a \(u-v\) path. On the other hand, if it is never the case that \(v_{\ell}=u_{\ell}\), \(\ell\geq 2\), we extend the path to the initial round. Every vertex in \(S_{2}\) must be adjacent to at least \(r\) vertices in \(S_{1}\), which implies that in particular, \(v_{2}\) and \(u_{2}\) are each adjacent to at least \(r\) vertices in \(S_{1}\). Since \(|A_{0}|=|S_{1}|\leq 2r-1\), by the pigeonhole principle, these two sets of \(r\) vertices cannot be disjoint. Hence, we can choose some \(v_{1}=u_{1}\) and we form a \(u-v\) path. A diagram of this process is shown in Figure 7. Now, suppose that both \(u\) and \(v\) are in \(S_{1}\). Since we assumed that every vertex in \(S_{1}\) infects at least one vertex in \(S_{2}\), both \(u\) and \(v\) are adjacent to a vertex in \(S_{2}\). If both are adjacent to the same vertex, the we have a \(u-v\) path of length \(2\). If \(u\) and \(v\) are not adjacent to the same vertex, then we have two paths \(u,u_{2}\) and \(v,v_{2}\). Since \(v_{2}\) and \(u_{2}\) are each adjacent to \(r\) vertices in \(S_{1}\), by the pigeonhole principle, \(v_{2},u_{2}\) are mutually adjacent to some \(w\in S_{1}\) and so we have \(v,v_{2},w,u_{2},u\), a \(u-v\) path of length \(4\). If only \(u\) is in \(S_{1}\), then by similar reasoning, we either have a \(u-v\) path of length \(j-1\) or a \(u-v\) path of length \(j+1\). Since we can use this method to construct a path between any two vertices in \(G\), the diameter of \(G\) cannot be any longer than the longest possible such path. This occurs when both \(u\) and \(v\) are infected in the final round. Since it takes \(k-1\) steps to go from the \(k^{th}\) round to the \(1^{st}\) round, we can write \(d\leq 2k-2\). Solving for \(k\) yields \(d/2+1\leq k\) and since the number of rounds must be an integer, we have \(\lceil d/2\rceil+1\leq k\). Without the additional assumption that every vertex in \(S_{1}\) infects some vertex in \(S_{2}\), it is possible that at most \(r-1\) vertices in \(S_{1}\) are adjacent to no vertices in \(S_{2}\). In which case, a path from a vertex in \(S_{k}\) to a vertex in \(S_{1}\) can have length at most \(k+r-2\) and then our lower bound depends on both \(r\) and \(d\) rather than \(d\) alone. This bound is sharp. Consider the following class of graphs. Begin with \(P_{n}\) and replace every vertex by a set of \(r\) independent vertices. Label these sets \(B_{1},...,B_{n}\), where \(B_{i}\) corresponds with vertex \(i\) of \(P_{n}\), and vertices labelled from left to right. Join every vertex in \(B_{1}\) to every vertex of \(B_{2}\), and in general, every vertex in \(B_{i}\) to every vertex in adjacent sets. We denote a member of this family of graphs by \(P_{n,r}\). Figure 8 shows this construction for \(P_{5,2}\) and \(P_{6,2}\). The diameter of graphs in this family is \(n-1\). If we initially infect the middle set of vertices (for a graph where \(n\) is odd), then the entire graph is infected when \(B_{1}\) and \(B_{n}\) become infected, which occurs after \(\frac{n-1}{2}+1\) rounds. If \(n\) is even, then if we initially infect either of the two centermost sets, the infection percolates when either \(B_{1}\) or \(B_{n}\) becomes infected (whichever is furthest from our starting set). This requires \(\frac{n}{2}+1=\lceil\frac{n-1}{2}\rceil+1\) rounds. In either case, we can see that the lower bound of \(\lceil d/2\rceil+1\) rounds is attained. **Theorem 4.2**.: _Let \(G\) be a connected graph with a set of vertices \(A_{0}\), which percolates in \(k\) rounds with percolation threshold \(r\). If \(|A_{0}|=r\), then \(k\geq rad(G)+1\) and this bound is sharp._ Proof.: Let \(x\) be a vertex in \(S_{1}\) and \(y\) be a vertex in \(S_{i}\), \(1<i\leq k\). Using the same method as in Theorem 3.1, we can form a path \(y,y_{i-1},...,y_{2}\), where \(y_{j}\in S_{j}\). Since \(A_{0}\) contains exactly \(r\) vertices, every vertex in \(S_{2}\) must be adjacent to every vertex in \(A_{0}\). Hence, \(y_{2}\) is adjacent Figure 7. Finding a \(u-v\) path. to \(x\) and \(y,y_{i-1},...,y_{2},x\) is an \(x-y\) path of length \(i-1\). If \(y\in S_{1}\), then we can construct \(y,y_{2},x\), an \(x-y\) path of length \(2\). The greatest length of such a path is \(k-1\). Since we can form a path from every vertex in \(G\) to \(x\in S_{1}\), we know that the eccentricity of \(x\), \(e(x)\), is at most \(k-1\). We then have the following inequality: \(rad(G)\leq e(x)\leq k-1\). Hence, \(k\geq rad(G)+1\). This inequality is sharp because each \(P_{n,r}\) contains a set of vertices which percolates in \(rad(G)+1\) rounds. ## 5. Maximum number of rounds to percolation In this section, we construct a family of graphs which show that given percolation threshold \(r\) and diameter \(d\), the number of rounds before the infection percolates is not bounded above. We first construct a family of graphs with diameter \(2\) and with threshold \(r=2\) and then generalize the construction for arbitrary diameter and percolation threshold. We begin constructing \(G\) by selecting an independent set of vertices. Call this set \(A_{0}\). This set must contain at least \(r\) vertices, but other than this there is no restriction on the cardinality of this set. Next, join every vertex of \(A_{0}\) to a vertex \(x_{1}\). After this, construct a path of length \(s\) and denote the vertices \(y_{1},y_{2},...,y_{s}\) from left to right. Join every vertex of the path to \(x_{1}\) and join \(y_{1}\) to exactly one vertex in \(A_{0}\). An example of this construction for \(r=2,s=5\) is shown in Figure 9. If we select \(A_{0}\) as our initial set of infected vertices, then the infection percolates in \(s+2\) rounds. This is because each vertex of the path cannot become infected until the previous vertex of the path is infected and \(y_{1}\) cannot become infected until after \(x_{1}\) is infected. Since \(x_{1}\) is a dominating vertex, our graph is diameter \(2\). We generalize the construction as follows. First, we construct \(P_{(d-1),r}\). We then join every vertex of the the \(d-1^{st}\) set (the last one on the right) to \(x_{1},x_{2},...,x_{r-1}\). Lastly, we form a path on \(s\) vertices \(y_{1},y_{2},...,y_{s}\) and join every vertex of the path to \(x_{1},x_{2},...,x_{r-1}\). Next, we join \(x_{1}\) to a single vertex in the \(d-1^{st}\) set. The set of vertices \(\{x_{1},...,x_{r-1}\}\) ensures that every vertex of the path \(y_{1},...,y_{s}\) is within distance \(d\) of our other vertices. An example of this construction with diameter \(4\) and percolation threshold \(3\) is shown in Figure 10. Figure 9. A diameter \(2\) graph which percolates in \(7\) rounds with percolation threshold \(2\) If we select the leftmost set of \(r\) vertices of \(P_{(d-1),r}\) as our initial set, such a graph percolates in \(d+s\) rounds. First the infection percolates through the \(d-1\) sets. After this, \(x_{1},...,x_{r-1}\) become infected. Next, \(y_{1}\) becomes infected and then each \(y_{i}\) becomes infected in turn. Although diameter is insufficient for an upper bound on the number of rounds, an upper bound using a different graph invariant is possible. The _detour distance_ between two vertices \(u,v\), denoted \(D(u,v)\) is the length of the longest path between \(u\) and \(v\). The _detour eccentricity_, of a vertex, \(v\), denoted \(e_{D}(v)\), is the longest detour distance from \(v\) to any other vertex. The _detour diameter_ of a graph, \(G\), denoted \(diam_{D}(G)\) is the largest detour eccentricity among vertices in \(G\). Observe that the detour diameter is the length of the longest path in \(G\). These definitions and other facts about the detour distance are stated in Chartrand et. al. [11]. **Theorem 5.1**.: _If \(G\) is a connected graph containing a percolates set which \(r\)-percolates in \(k\) rounds, then \(k\leq diam_{D}(G)\) + 1._ Proof.: Using the same process as in in the proof of Theorem 3.1, we form a path from a vertex in the \(k^{th}\) round to a vertex in the initial round, where each vertex of the path is in a different round. Such a path has length \(k-1\). Since \(diam_{D}(G)\) is the length of the longest path in \(G\), we know that \(k-1\leq diam_{D}(G)\). Hence, \(k\leq diam_{D}(G)+1\). The above theorem gives an upper bound of \(diam_{D}(G)+1\), based on the idea that a path from the last round to the first round is in fact the longest path in the graph. But when \(r\geq 2\), we were able to extend such a path by at least one vertex in all the examples we examined. Hence, we conjecture that when \(r\geq 2\) the upper bound is in fact, \(diam_{D}(G)\). **Conjecture 5.2**.: _If \(G\) is a connected graph containing a percolating set which \(r\)-percolates in \(k\) rounds and \(r\geq 2\), then \(k\leq diam_{D}(G)\)._ For every \(r\) and every value of \(diam_{D}(G)\), it is possible to find a graph which percolates in \(diam_{D}(G)\) rounds with threshold \(r\). This family of graphs are all caterpillars. A _caterpillar Figure 10. A diameter 4 graph which percolates in 9 rounds with percolation threshold 3 is a tree which consists of a central path, each vertex of which has some number of leaves (possibly \(0\)). We construct these graphs as follows. Form a caterpillar with a central path of length \(diam_{D}(G)-2\) and where all vertices of the path except the leftmost endpoint have \(r-1\) leaves. The leftmost endpoint has \(r\) leaves. The longest path in such a graph is formed by moving from a leaf of the leftmost endpoint along the central path to a leaf of the rightmost endpoint. If we begin the percolation process by infecting the leaves of the path, then such a graph percolates in \(diam_{D}(G)\) rounds. Figure 11 shows an example of such a graph for \(r=3\) and detour diameter \(7\). ## 6. Open Questions 1. The two lower bounds on the number of rounds to percolation given in this paper are based on radius and diameter. It would be interesting to see lower bounds based on other graph invariants, or bounds for specific graph classes. 2. Likewise the upper bound given in this paper is unconditional, but it is likely that the actual largest number of rounds for most graphs is substantially smaller than the length of the longest path. Given other assumptions, what is the maximum number of rounds to percolation? 3. What is the largest number of \(r\)-blocks contained in an \(r\)-BG graph with a cut set of size less than \(r\)? Other results on the structure of cut sets of an \(r\)-BG graph would also be interesting. ## 7. Acknowledgements The authors wish to thank Craig Larson for discussions which initiated and improved this paper and Ghidewon Abay-Asmerom for suggesting the concept of detour distance. RI was partially supported by NSF DMS-2204148 and by The Thomas F. and Kate Miller Jeffress Memorial Trust, Bank of America, Trustee. Figure 11. A graph which \(3\)-percolates in \(diam_{D}(G)\) rounds when \(A_{0}\) is the set of gray vertices
2310.00027
Out-Of-Domain Unlabeled Data Improves Generalization
We propose a novel framework for incorporating unlabeled data into semi-supervised classification problems, where scenarios involving the minimization of either i) adversarially robust or ii) non-robust loss functions have been considered. Notably, we allow the unlabeled samples to deviate slightly (in total variation sense) from the in-domain distribution. The core idea behind our framework is to combine Distributionally Robust Optimization (DRO) with self-supervised training. As a result, we also leverage efficient polynomial-time algorithms for the training stage. From a theoretical standpoint, we apply our framework on the classification problem of a mixture of two Gaussians in $\mathbb{R}^d$, where in addition to the $m$ independent and labeled samples from the true distribution, a set of $n$ (usually with $n\gg m$) out of domain and unlabeled samples are given as well. Using only the labeled data, it is known that the generalization error can be bounded by $\propto\left(d/m\right)^{1/2}$. However, using our method on both isotropic and non-isotropic Gaussian mixture models, one can derive a new set of analytically explicit and non-asymptotic bounds which show substantial improvement on the generalization error compared to ERM. Our results underscore two significant insights: 1) out-of-domain samples, even when unlabeled, can be harnessed to narrow the generalization gap, provided that the true data distribution adheres to a form of the ``cluster assumption", and 2) the semi-supervised learning paradigm can be regarded as a special case of our framework when there are no distributional shifts. We validate our claims through experiments conducted on a variety of synthetic and real-world datasets.
Amir Hossein Saberi, Amir Najafi, Alireza Heidari, Mohammad Hosein Movasaghinia, Abolfazl Motahari, Babak H. Khalaj
2023-09-29T02:00:03Z
http://arxiv.org/abs/2310.00027v2
# Unlabeled Out-Of-Domain Data Improves Generalization ###### Abstract We propose a novel framework for incorporating unlabeled data into semi-supervised classification problems, where scenarios involving the minimization of either i) adversarially robust or ii) non-robust loss functions have been considered. Notably, we allow the unlabeled samples to deviate slightly (in total variation sense) from the in-domain distribution. The core idea behind our framework is to combine Distributionally Robust Optimization (DRO) with self-supervised training. As a result, we also leverage efficient polynomial-time algorithms for the training stage. From a theoretical standpoint, we apply our framework on the classification problem of a mixture of two Gaussians in \(\mathbb{R}^{d}\), where in addition to the \(m\) independent and labeled samples from the true distribution, a set of \(n\) (usually with \(n\gg m\)) out of domain and unlabeled samples are given as well. Using only the labeled data, it is known that the generalization error can be bounded by \(\propto\left(d/m\right)^{1/2}\). However, using our method on both isotropic and non-isotropic Gaussian mixture models, one can derive a new set of analytically explicit and non-asymptotic bounds which show substantial improvement on the generalization error compared ERM. Our results underscore two significant insights: 1) out-of-domain samples, even when unlabeled, can be harnessed to narrow the generalization gap, provided that the true data distribution adheres to a form of the "cluster assumption", and 2) the semi-supervised learning paradigm can be regarded as a special case of our framework when there are no distributional shifts. We validate our claims through experiments conducted on a variety of synthetic and real-world datasets. + Footnote †: All correspondence should be addressed to Abolfazl Motahari (Email: [email protected]). ## 1 introduction Semi-supervised learning has long been a focal point in the machine learning literature, primarily due to the cost-effectiveness of utilizing unlabeled data compared to labeled counterparts. However, unlabeled data in various domains, such as medicine, genetics, imaging, and audio processing, often originates from diverse sources and technologies, leading to distributional differences between labeled and unlabeled samples. Concurrently, the development of robust classifiers against adversarial attacks has emerged as a vibrant research area, driven by the rise of large-scale neural networks [1, 2]. While the primary objective of these methods is to reduce model sensitivity to minor adversarial perturbations, recent observations suggest that enhancing adversarial robustness may also improve the utilization of unlabeled samples [3, 4]. This paper aims to demonstrate the efficacy of incorporating out-of-domain unlabeled samples to decrease the reliance on labeled in-domain data. To achieve this, we propose a novel framework inspired by a fusion of concepts from adversarial robustness and self-training. Specifically, we introduce a unique constraint to the conventional Empirical Risk Minimization (ERM) procedure, focusing exclusively on the unlabeled part of the dataset. Our theoretical and experimental analyses show that the inclusion of unlabeled data reduces the generalization gap for both robust and non-robust loss functions. Importantly, our alternative optimization criteria are computationally efficient and can be solved in polynomial time. We have implemented and validated the effectiveness of our method on various synthetic and real-world datasets. From a theoretical standpoint, akin to prior research [5, 6, 7, 8], we also address the binary classification problem involving two Gaussian models in \(\mathbb{R}^{d}\). This problem has been the center of attention in several recent works on theoretical analysis of both semi-supervised and/or adversarially robust learning paradigms. Despite several recent theoretical investigations, the precise trade-off between the sizes of labeled (\(m\)) and unlabeled (\(n\)) data, even in this specific case, remains incomplete. A number of works have bounded the labeled sample complexity under the assumption of an asymptotically large \(n\)[9], while another series of papers have analyzed this task from a completely unsupervised viewpoint. We endeavor to fill this gap by providing the first empirical trade-off between \(m\) and \(n\), even when unlabeled data originates from a slightly perturbed distribution. We derive explicit bounds for both robust and non-robust losses of linear classifiers in this scenario. Our results show that as long as \(n\geq\Omega\left(m^{2}/d\right)\), our proposed algorithm surpasses traditional techniques that solely rely on labeled data. We also consider the more general case of non-isotropic Gaussian models, as explored in previous studies. The remainder of this paper is structured as follows: Section 1.1 provides an overview of related works in distributionally robust optimization and semi-supervised learning. Section 1.2 introduces our notation and definitions. In Section 3, we present our novel method, followed by a theoretical analysis in Section 4. Section 5 showcases our experimental validations, further supporting our theoretical findings. Finally, we draw conclusions in Section 6. ### prior works One of the challenges in adversarially robust learning is that increasing the _robust_ accuracy is considerably more difficult compared to achieving a high accuracy in non-robust scenarios [10]. In a study by [5], authors argued that this difficulty might be attributed to the larger sample complexity of learning robust classifiers in general. In particular, they showcased a simple model for which a good classifier with high standard (non-robust) accuracy is achievable using only a single sample, while a significantly larger training set is required to find a classifier with high robust accuracy. In a set of recent works [6, 7, 8], it has been shown that the gap between the sample complexity of robust and standard learning in the setting of [5] (a two-component Gaussian mixture model) can be filled with unlabeled samples. In other words, unlabeled samples can be utilized to mitigate the classification error even when the test samples are perturbed by an adversary. In a similar study, [3] obtained a similar result using a different definition of adversarial robustness and a more comprehensive data generation model. In [11], authors showed that in the setting of [5], out-of-domain unlabeled samples improve adversarial robustness. Defense mechanisms against adversarial attacks usually consider two types of adversaries: i) point-wise attacks similar to [4, 12, 13], and ii) distributional attacks [14, 15, 16], where in the case of the latter adversary can change the distribution of data up to a predefined budget. It has been shown that Distributionally Robust Learning (DRL) achieves a superior robustness compared to point-wise methods [14]. [17] utilized DRL in order to achieve a balance between the bias and variance of classifier's error, leading to faster rates of convergence compared to empirical risk minimization even in the _non-robust_ case. In DRL, the learner typically aims to minimize the loss while allowing the data distribution to vary within an uncertainty neighborhood. The central idea used by [17] was to regulate the diameter of this uncertainty neighborhood based on the number of samples. [18] achieved similar results in DRL while utilizing the _Wasserstein_ metric to define the perturbation budget for data distribution. Based on the above arguments, we have also utilized DRL is the main tool in developing our proposed framework. ### notation and definitions Let us denote the feature space by \(\mathcal{X}\subseteq\mathbb{R}^{d}\), and assume \(\mathcal{H}\) as a class of binary classifiers parameterized by the parameter set \(\Theta\): for each \(\theta\in\Theta\), we have a classifier \(h_{\theta}\in\mathcal{H}\) where \(h_{\theta}:\mathcal{X}\rightarrow\{-1,1\}\). Assume a positive function \(\ell:(\mathcal{X}\times\{-1,1\}\times\Theta)\rightarrow\mathbb{R}_{\geq 0}\) as the loss function. Also, let \(P\) be the unknown data distribution over \(\mathcal{X}\times\{-1,1\}\), and \(S=\{(\mathbf{X}_{i},y_{i})\}_{i=1}^{m}\) for \(m\in\mathbb{N}\) be a set of i.i.d. samples drawn from \(P\). Then, for all \(\theta\in\Theta\) the true risk \(R\) and the empirical risk \(\hat{R}\) of a classifier w.r.t. \(P\) can be defined as follows: \[R\left(\theta,P\right)=\mathbb{E}_{P}\left[\ell\left(\mathbf{X},y;\theta\right) \right]\quad,\quad\hat{R}\left(\theta,S\right)=\mathbb{E}_{\hat{P}_{S}^{m}} \left[\ell\left(\mathbf{X},y;\theta\right)\right]\triangleq\frac{1}{m}\sum_{i=1}^ {m}\ell\left(\mathbf{X}_{i},y_{i};\theta\right), \tag{1}\] where \(\hat{P}_{S}^{m}\) denotes an empirical estimate of \(P\) based on the \(m\) samples in \(S\). We also need a way to measure the distance between various distributions that are supported over \(\mathcal{X}\). A well-known candidate for this goal is the _Wasserstein_ distance: **Definition 1.1** (Wasserstein Distance).: Consider two probability distributions \(P\) and \(Q\) supported on \(\mathcal{X}\), and assume cost function \(c:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}_{+}\) is a non-negative lower semi-continuous function satisfying \(c(\mathbf{X},\mathbf{X})=0\) for all \(\mathbf{X}\in\mathcal{X}\). Then, the Wasserstein distance between \(P\) and \(Q\) w.r.t. \(c\), denoted as \(\mathcal{W}_{c}\left(P,Q\right)\), is defined as \[\mathcal{W}_{c}\left(P,Q\right)=\inf_{\mu\in\Gamma(\mathcal{X}^{2})}\mathbb{E }_{\mathbf{X},\mathbf{X}^{\prime}\sim\mu}\left[c(\mathbf{X},\mathbf{X}^{\prime})\right],\ \ \text{ subject to}\quad\mu\left(\mathbf{X},\cdot\right)=P\,\ \mu\left( \cdot,\mathbf{X}^{\prime}\right)=Q, \tag{2}\] where \(\Gamma\left(\mathcal{X}^{2}\right)\) denotes the set of all couplings over \(\mathcal{X}\times\mathcal{X}\). **Definition 1.2** (\(\epsilon\)-neighborhood of a Distribution).: The \(\epsilon\)-neighborhood of a distribution \(P\) is defined as the set of all distributions that have a Wasserstein distance less than \(\epsilon\) from \(P\). Mathematically, it can be represented as: \[\mathcal{B}_{\epsilon}^{c}\left(P\right)=\left\{Q:\mathcal{W}_{c}\left(P,Q \right)\leq\epsilon\right\}. \tag{3}\] It should be noted that throughout this paper, the Wasserstein distance between any two distributions supported over \(\mathcal{X}\times\{\pm 1\}\) is defined as the distance between their respective marginals on \(\mathcal{X}\). The ultimate goal of classical learning is to find the parameter \(\theta^{*}\in\Theta\) such that with high probability, \(R\left(\theta^{*}\right)\) is sufficiently close to \(\min_{\theta}R\left(\theta\right)\). A well-known approach to achieve this goal is Empirical Risk Minimization (ERM) algorithm, formally defined as follows: \[\hat{\theta}^{\text{ERM}}\left(S\right)\triangleq\underset{\theta\in\Theta}{ \operatorname{argmin}}\ \mathbb{E}_{\hat{P}_{S}^{m}}\left[\ell\left(\theta;\mathbf{X},y\right)\right]= \underset{\theta\in\Theta}{\operatorname{argmin}}\ \frac{1}{m}\sum_{i=1}^{m}\ell\left(\theta;\mathbf{X}_{i},y_{i}\right). \tag{4}\] A recent variant of ERM, which has gained huge popularity in both theory and practice, is the so-called Distributionally Robust Learning (DRL) which is formulated as follows: **Definition 1.3** (Distributionally Robust Learning(DRL)).: DRL aims at training a classifier which is robust against adversarial attacks on data distribution. In this regard, the _learner_ attempts to find a classifier with a small robust risk, denoted as \(R^{\text{robust}}\left(\theta,P\right)\), which is defined as \[R^{\text{robust}}_{\epsilon,c}\left(\theta,P\right)=\sup_{P^{\prime}\in \mathcal{B}_{\epsilon}^{*}\left(P\right)}R\left(\theta,P^{\prime}\right), \tag{5}\] for all \(\theta\in\Theta\) and any \(\epsilon\geq 0\). Therefore, DRL solves the following optimization problem: \[\hat{\theta}^{\text{DRL}}_{\epsilon,c}\left(S\right)\triangleq\underset{ \theta\in\Theta}{\operatorname{argmin}}\ \ R^{\text{robust}}_{\epsilon,c}\left(\theta,\hat{P}_{S}^{m}\right). \tag{6}\] Surprisingly, the sophisticated minimax optimization problem of (6) which takes place in the infinite-dimensional Hilbert space of constrained probability measures, can be substantially simplified when is re-written in the dual format: **Lemma 1.4** (From [19]).: _For a sufficiently small \(\epsilon>0\), the minimax optimization problem of (6) has the following dual form:_ \[\inf_{\theta\in\Theta}\sup_{P^{\prime}\in\mathcal{B}_{\epsilon}^{*}\left( \hat{P}_{S}^{m}\right)}R\left(\theta,P^{\prime}\right)=\inf_{\gamma\geq 0}\left\{ \gamma\epsilon+\inf_{\theta\in\Theta}\frac{1}{m}\sum_{i=1}^{m}\sup_{\mathbf{Z}\in \mathcal{X}}\ \ell\left(\mathbf{Z},y_{i};\theta\right)-\gamma c\left(\mathbf{Z},\mathbf{X}_{i}\right) \right\}, \tag{7}\] _where \(\gamma\) and \(\epsilon\) are dual parameters, and there is a bijective and reciprocal relation between the \(\epsilon\) and \(\gamma^{*}\), i.e., the optimal value which minimizes the r.h.s._ As suggested by [20], the \(\inf_{\gamma\geq 0}\) in the r.h.s. part in the above optimization problem can be removed by fixing a user-defined value for \(\gamma\). This also means that if one attempts to find the optimal value for \(\theta\), the additive term \(\gamma\epsilon\) is ineffective and can be removed as well. It should be noted that this also fixes an (unknown) value for \(\epsilon\). In practice, the appropriate value for \(\epsilon\) is not known beforehand and thus can be usually found through a cross-validation stage, while the same procedure can be applied to its dual counterpart, i.e., \(\gamma\). In other words, the above-mentioned strategy keeps the generality of the problem intact. For the sake of simplicity in relations, throughout the rest of the paper we work with the dual formulation in (7) and let \(\gamma\) be a fixed and arbitrary value. ## 2 problem definition At this point, we can formally define our problem. Let \(\mathcal{X}\subseteq\mathbb{R}^{d}\), and assume \(P_{0}\) be an unknown and arbitrary distribution supported on \(\mathcal{X}\times\left\{\pm 1\right\}\), i.e., \(P_{0}\) produces feature-label pairs. For a valid cost function \(c:\mathcal{X}^{2}\rightarrow\mathbb{R}_{\geq 0}\), let \(P_{1}\) represent a shifted version of \(P_{0}\) such that the following two conditions hold: i) \(P_{0}\left(y|\mathbf{X}\right)\) and \(P_{1}\left(y|\mathbf{X}\right)\) are the same. In other words, the _labeling law_ is left unaltered. ii) the marginal distributions of \(P_{0}\) and \(P_{1}\) on \(\mathcal{X}\) are shifted such that \(\mathcal{W}_{c}\left(P_{0,X},P_{1,X}\right)=\alpha\) for some \(\alpha>0\). Here, the subscript \(X\) implies the marginal distribution on \(\mathcal{X}\). Let us consider the following two sets of samples: \[S_{0}=\left\{\left(\mathbf{X}_{i},y_{i}\right)\right\}_{i=1}^{m}\sim P_{0}^{m}\quad,\quad S_{1}=\left\{\mathbf{X}_{i}^{\prime}\right\}_{i=1}^{n}\sim P_{1,X}^{n},\] where \(S_{0}\) indicates the labeled set and \(S_{1}\) represents the unlabeled out-of-domain data. A classical result from VC-theory states that the generalization gap in learning from only \(S_{0}\) (with high probability) can be bounded as \[R\left(\hat{\theta}^{\mathrm{ERM}},P_{0}\right)\leq\min_{\theta\in\Theta}R \left(\theta,P_{0}\right)+\mathcal{O}\left(\sqrt{\mathrm{VCdim}\left(\mathcal{ H}\right)/m}\right)+\sqrt{\mathcal{O}(1)/m}, \tag{8}\] where \(\mathrm{VCdim}\left(\mathcal{H}\right)\) denotes the VC-dimension of hypothesis class \(\mathcal{H}\)[21]. This bound can be prohibitively large when \(\mathrm{VCdim}\left(\mathcal{H}\right)\) grows uncontrollably, e.g., the case of linear classifiers in very high dimensions (\(d\gg 1\)). We aim to propose a general framework that leverages both \(S_{0}\) and \(S_{1}\) concurrently, and outputs (in polynomial time) an estimator, denoted by \(\hat{\theta}^{\mathrm{RSS}}\), such that the second term in the r.h.s. of (8) would decay faster as one increases both \(m\) and \(n\). We are specially interested in cases where \(n\gg m\). In the next step, we apply our method on a simplified theoretical example in order to give explicit bounds. Similar to [5, 6, 7, 8], we fully focus the binary classification problem of a high-dimensional Gaussian mixture model with two components using linear classifiers. Mathematically speaking, for some \(\sigma_{0}\geq 0\) and \(\mathbf{\mu}_{0}\in\mathbb{R}^{d}\), let \(P_{0}\) be the feature-label joint distribution over \(\mathbb{R}^{d}\times\left\{-1,1\right\}\) as follows: \[P_{0}\left(y=1\right)=\frac{1}{2},\quad P_{0}\left(\mathbf{X}|y\right)=\mathcal{N} \left(y\mathbf{\mu}_{0},\sigma_{0}^{2}\mathbf{I}\right). \tag{9}\] Also, suppose a shifted version of \(P_{0}\), denoted by \(P_{1}\) with \(P_{1}\left(\cdot|y\right)=\mathcal{N}\left(y\mathbf{\mu}_{1},\sigma_{1}^{2}\mathbf{I}\right)\), where \(\left\|\mathbf{\mu}_{0}-\mathbf{\mu}_{1}\right\|\leq\mathcal{O}\left(\alpha\right)\) and \(\left|\sigma_{1}-\sigma_{0}\right|\leq\mathcal{O}\left(\alpha\right)\)1. Given the two sample sets \(S_{0}\) and \(S_{1}\) in this configuration, the problem is to estimate the optimal linear classifier which achieves the minimum error rate. Footnote 1: Having a Wasserstein distance of \(\alpha\) between two high-dimensional Gaussian distributions implies that both mean vectors \(\mathbf{\mu}_{0},\mathbf{\mu}_{1}\) and variances \(\sigma_{0},\sigma_{1}\) are within a fraction of at most \(\mathcal{O}\left(\alpha\right)\) from each other. ## 3 proposed method: Robust Self Supervised (RSS) training We propose a solution that combines two generally independent paradigms in machine learning: self-training [22, 23], and distributionally robust learning in (6). The essence of self-training is to use the currently learned model in order to induce artificial labels on the unlabeled data. Thus, for an unlabeled sample \(\mathbf{X}_{j}^{{}^{\prime}}\) and any given model parameter \(\theta\in\Theta\), one can temporarily consider a pseudo label given by \(h_{\theta}\left(\mathbf{X}_{j}^{\prime}\right)\). In this regard, the proposed solution denoted by \(\hat{\theta}^{\mathrm{RSS}}=\hat{\theta}^{\mathrm{RSS}}\left(S_{0},S_{1}\right)\) can be defined as follows: **Definition 3.1** (Robust Self-Supervised (RSS) Training).: The essence of RSS training is to add a penalty term to the robust version of the original ERM formulation, which is solely evaluated from the out-of-domain unlabeled samples in \(S_{1}\). Mathematically speaking, for a cost function \(c\) and parameter \(\gamma\geq 0\), let us define the _robust loss_\(\phi_{\gamma}:\mathcal{X}\times\left\{\pm 1\right\}\times\Theta\rightarrow\mathbb{R}\) as \[\phi_{\gamma}\left(\mathbf{X},y;\theta\right)\triangleq\sup_{\mathbf{Z}\in\mathcal{X} }\ \ell\left(\mathbf{Z},y;\theta\right)-\gamma c\left(\mathbf{Z},\mathbf{X}\right). \tag{10}\] In this regard, for a given set of parameters \(\gamma,\gamma^{\prime},\lambda\in\mathbb{R}_{\geq 0}\), the proposed RSS estimator is defined as \[\hat{\theta}^{\text{RSS}}\triangleq\underset{\theta\in\Theta}{\text{argmin}} \ \left\{\frac{1}{m}\sum_{i=1}^{m}\phi_{\gamma}\left(\mathbf{X}_{i},y_{i};\theta \right)+\frac{\lambda}{n}\sum_{j=1}^{n}\phi_{\gamma^{\prime}}\left(\mathbf{X}_{j} ^{\prime},h_{\theta}\left(\mathbf{X}_{j}^{\prime}\right);\theta\right)\right\}. \tag{11}\] The proposed RSS loss in (11) comprises of two main terms. The first term attempts to minimize the empirical robust risk over the labeled data in \(S_{0}\), where an adversary can alter the distribution of samples within a Wasserstein radius characterized by \(\gamma\). In the proceeding sections, we show that \(\gamma\) can become asymptotically large (radius becomes infinitesimally small) as \(m\rightarrow\infty\) which is similar to [18]. In fact, a small (but non-zero) budget for the adversary can control the generalization. The second term works only on the unlabeled data which are artificially labeled by \(h_{\theta}\). It can be shown that this term regulates the classifier by forcing it to avoid _crowded_ areas. The sensitivity of such regularization is controlled by both \(\lambda\) and also \(\gamma^{\prime}\). ### model optimization: algorithm and theoretical guarantees It can be shown that the for a convex loss function \(\ell\), convex cost function \(c\), and sufficiently large \(\gamma\) and \(\gamma^{\prime}\) (i.e., sufficiently small Wasserstein radii), the optimization problem of (11) is convex and can be solved up to an arbitrarily high precision in polynomial time. Moreover, if \(\ell\) is not convex, e.g., \(\mathcal{H}\) is the set of all neural networks, a simple Stochastic Gradient Descent (SGD) algorithm is still guaranteed to reach to at least a local minimum of (11). More specifically, (11) is a minimax optimization problem and consists of an inner maximization (formulated in (10)) followed by an outer minimization. As long as the cost function \(c\) is strictly convex and \(\gamma\) or \(\gamma^{\prime}\) are chosen sufficiently large, the inner maximization problem of (10) becomes strictly concave [3, 20]. This interesting property holds regardless the convexity of \(\ell\), which is of paramount importance since \(\ell\) is not convex in most practical situations. On the other hand, cost function candidates for \(c\) which are considered in this paper are \(\left\|\cdot\right\|_{2}\) and \(\left\|\cdot\right\|_{2}^{2}\), which are strictly convex. Hence, (10) can be optimally solved in polynomial time. The outer minimization problem of (11) is also differentiable as long as \(\ell\) is sufficiently smooth (again, convexity is not needed). This means the gradient of (11) exists and can be efficiently computed using the _Envelope Theorem_. Explicit bounds on the maximum number of steps in a simple SGD algorithm (with a mini-batch size of 1) in order to reach to an \(\varepsilon\)-neighborhood of the global maximum of (10), and a local minimum of (11) are given by [20]. Also, formulating the gradient of minimax loss functions such as (11) using the envelope theorem has been carried out, for example, in [3, 20]. We have also used the same gradient formulation for the numerical optimization of our model parameters in Section 5, where experimental results on real data using neural networks have been illustrated. In the next section, we derive theoretical guarantees for \(\hat{\theta}^{\text{RSS}}\) and show that it leads to improved generalization bounds when \(n\) is sufficiently large and \(\alpha\) is controlled. ## 4 theoretical guarantees and generalization bounds In this section, we discuss the theoretical aspects of using the RSS training method, specially for the classification of a two-component Gaussian mixture model using linear classifiers, i.e., \(\left\{\mathrm{sign}\left(\left(\mathbf{\theta},\cdot\right)\right):\mathbb{R}^{d} \rightarrow\left\{\pm 1\right\}|\ \mathbf{\theta}\in\mathbb{R}^{d}\right\}\). For the sake of simplicity in results, let us define the loss function \(\ell\) as the zero-one loss: \[\ell\left(\mathbf{X},y;\theta\right)=\mathbf{1}\left(y\langle\theta,\mathbf{X}\rangle\geq 0 \right). \tag{12}\] However, extension of the theoretical guarantees in this work to other types of loss functions is straightforward. The following theorem shows that the proposed RSS estimator in 11 can potentially improve the generalization bound in a _robust_ learning scenario. **Theorem 4.1**.: _Consider the setup described in Section 2 for the sample generation process (GMM assumption), and the loss function defined in (12). Using RSS training with \(m\) labeled and \(n\) unlabeled samples in \(S_{0}\) and \(S_{1}\), respectively, and for any \(\gamma,\delta>0\), there exist \(\lambda\) and \(\gamma^{\prime}\) which can be calculated solely based on input samples such that the following holds with probability at least \(1-\delta\):_ \[\mathbb{E}_{P_{0}}\left[\phi_{\gamma}\left(\mathbf{X},y;\hat{\theta} ^{\mathrm{RSS}}\right)\right]\leq\ \min_{\theta\in\Theta}\ \mathbb{E}_{P_{0}}\left[\phi_{\gamma}\left(\mathbf{X},y;\theta\right)\right] \tag{13}\] \[\qquad\qquad\qquad+\ \mathcal{O}\left(\gamma\sqrt{\frac{2d}{m} \left(\alpha\left(\|\mathbf{\mu}_{0}\|_{2}^{2}+\sigma_{0}^{2}\right)+\sqrt{\frac{2 d}{2n+m}}+\sqrt{\frac{2\log\left(1/\delta\right)}{2n+m}}\right)}+\sqrt{\frac{2 \log\left(1/\delta\right)}{m}}\right).\] The proof, as well as how to calculate \(\lambda\) and \(\gamma^{\prime}\) can be found in Appendix A. Theorem 4.1 presents a generalization bound for the proposed estimator when one considers the robust loss under an adversarial budget, which is characterized by \(\gamma\). Larger values of \(\gamma\) correspond to smaller Wasserstein radii for the distributional adversary of (5). The residual term in the r.h.s. of (13) converges to zero with a faster rate compared to that of (8), given \(n\) is sufficiently large and \(\alpha\) is sufficiently small. We derive explicit conditions regarding this event in Corollary 4.4. Before that, let us show that for fixed \(m\), as one increases the number of unlabeled samples \(n\), the _non-robust excess risk_ of the RSS-trained classifier decreases as well: **Theorem 4.2**.: _Consider the setting described in Theorem 4.1. Then, the estimator \(\hat{\theta}^{\mathrm{RSS}}\) of (11) using respectively \(m\) labeled and \(n\) unlabeled samples, along with specific values of \(\gamma\), \(\gamma^{\prime}\), and \(\lambda\) which can be calculated solely from the input samples, satisfies the following non-robust generalization bound with probability at least \(1-\delta\):_ \[R\left(\hat{\theta}^{\mathrm{RSS}},P\right)- \min_{\theta\in\Theta}R\left(\theta,P\right) \tag{14}\] \[\leq\ \mathcal{O}\left(\frac{e^{\frac{-|\mathbf{\mu}_{0}\|_{2}^{2}}{4 \sigma_{0}^{2}}}}{\sqrt{2\sigma_{0}}\sqrt{2\pi}}\left(\left(\|\mathbf{\mu}_{1}\|_{ 2}^{2}+\sigma_{1}^{2}\right)\frac{2d\alpha}{m}+\frac{4d}{m}\sqrt{\frac{2d+2 \log\frac{1}{\delta}}{2n+m}}\right)^{1/4}+\sqrt{\frac{2\log\frac{1}{\delta}}{ m}}\right).\] Again, the proof and the procedure for calculating \(\gamma,\gamma^{\prime}\), and \(\lambda\) are discussed in Appendix A. _Remark 4.3_.: Dependence of the generalization gap, a.k.a. excess risk, in Theorem 4.2 on dimension is \(\mathcal{O}\left(d^{3/8}\right)\), which shows improvement over \(\mathcal{O}\left(d^{1/2}\right)\) of the conventional ERM. Based on the previous results, the following corollary showcases a number of surprising non-asymptotic conditions under which our generalization bound becomes superior to conventional approaches. **Corollary 4.4**.: _Consider the setting described in Theorem 4.2. Then, \(\hat{\theta}^{\mathrm{RSS}}\) of (11) with \(m\) labeled and \(n\) unlabeled samples has an advantage over the traditional ERM, if:_ \[\alpha\leq\mathcal{O}\left(d/m\right)\quad,\quad n\geq\Omega\left(m^{2}/d \right). \tag{15}\] _Also, the following conditions are sufficient to make the minimum required \(m\) (for a given error bound) independent of the dimension \(d\):_ \[\alpha\leq\mathcal{O}\left(d^{-1}\right)\quad,\quad n\geq\Omega\left(d^{3} \right). \tag{16}\] Proof is given in Appendix. Finally, Theorem 4.2 also implies that if unlabeled samples are drawn from the same distribution as that of the labeled ones, i.e., \(\alpha=0\), then the excess risk of RSS-training satisfies the following inequality with probability at least \(1-\delta\): \[R\left(\hat{\theta}^{\mathrm{RSS}},P\right)-\min_{\theta\in\Theta}R\left( \theta,P\right)\leq\mathcal{O}\left(\left(\frac{d^{3}\log 1/\delta}{m^{2} \left(2n+m\right)}\right)^{1/8}+\sqrt{\frac{\log 1/\delta}{m}}\right), \tag{17}\] which again shows the previously-mentioned improvements when all samples are in-domain. The assumption of an _isotropic_ GMM with two components has been already studied in the literature (see Section 1). Next, we present a more general case of Theorem 4.2 where each Gaussian component can have a non-diagonal covariance matrix. Mathematically speaking, suppose that \(P_{0}\) and \(P_{1}\) are defined as follows: \[P_{0}\left(y=1\right)=1/2\quad,\quad P_{0}\left(\mathbf{X}|y\right)= \mathcal{N}\left(y\mathbf{\mu_{0}},\Sigma_{0}\right),\] \[P_{1\mathbf{X}}=\frac{1}{2}\mathcal{N}\left(\mathbf{\mu_{1}},\Sigma_{1} \right)+\frac{1}{2}\mathcal{N}\left(-\mathbf{\mu_{1}},\Sigma_{1}\right), \tag{18}\] where \(\|\mathbf{\mu_{1}}-\mathbf{\mu_{0}}\|\leq\mathcal{O}\left(\alpha\right)\) and \(\|\Sigma_{1}-\Sigma_{0}\|_{2}\leq\mathcal{O}\left(\alpha\right)\). Assume a set of \(m\) labeled samples \(S_{0}\sim P_{0}^{m}\), and a set of \(n\) unlabeled samples \(S_{1}\sim P_{1\mathbf{X}}^{n}\). **Theorem 4.5** (Generalization Bound for General Gaussian Mixture Models).: _Consider the setting described in (18). Using algorithm in (11) with \(m\) labeled and \(n\) unlabeled samples, there exists a set of parameters \(\gamma,\gamma^{\prime},\lambda\) for which the following holds with probability at least \(1-\delta\):_ \[R\left(\hat{\theta}^{\mathrm{RSS}},P\right)-\min_{\theta\in \Theta}R\left(\theta,P\right)\leq \tag{19}\] \[\mathcal{O}\left(e^{\phi^{2}}\left(\sqrt{\frac{\|\mathbf{\mu_{1}}\|_{ 2}^{2}+\mathrm{Tr}\left(\Sigma_{1}\right)}{m}}\left(C\alpha+\sqrt{\frac{\log \frac{1}{\delta}}{2n+m}}\right)\frac{d\kappa_{1}\kappa_{1}^{\prime}}{\Delta \left(\Sigma\right)}\right)^{1/2}+\sqrt{\frac{\log 1/\delta}{m}}\right),\] _where_ \[\vartheta =|\mathbf{\mu}_{1}\Sigma_{1}^{-1}\mathbf{\mu}_{1}-\mathbf{\mu}_{0}\Sigma_{0} ^{-1}\mathbf{\mu}_{0}|, C=\left(\frac{\|\mu_{0}\|^{2}+\lambda_{\min}\left(\Sigma_{1} \right)\|\mu_{0}\|_{2}}{\lambda_{\min}^{2}}\right),\] \[\kappa_{1} =\frac{\lambda_{\max}\left(\Sigma_{1}\right)}{\lambda_{\min} \left(\Sigma_{1}\right)}, \kappa_{1}^{\prime}=\frac{\lambda_{\max}\left(\Sigma_{1}\right)}{ \Delta\left(\Sigma_{1}\right)},\] \[\Delta\left(\Sigma_{1}\right) =\min\left\{\lambda_{i}\left(\Sigma_{1}\right)-\lambda_{j}\left( \Sigma_{1}\right)\right\}, \forall i,j:\lambda_{i}\left(\Sigma_{1}\right)\neq\lambda_{j}\left(\Sigma_{1} \right), \tag{20}\] _and \(\lambda_{i}\left(\Sigma\right)\) is the \(i\)(th) eigenvalue of \(\Sigma\)._ Proof can be found in Appendix. One important difference to note between Theorem 4.5 and Theorem 4.2 is the choice of \(\gamma^{\prime}\), which controls the adversarial budget for unlabeled (and out-of-domain) part of the dataset. In the setting of Theorem 4.2, we prefer to choose \(\gamma^{\prime}\) as small as possible. However, in the setting of Theorem 4.5, we consider the eigenvectors and eigenvalues of \(\Sigma_{1}\) and \(\Sigma_{0}\), as well as the direction of \(\mathbf{\mu}_{1}\) and \(\mathbf{\mu}_{0}\) in order to find the optimal value for the adversarial budget. In fact, there are cases in which selecting a large \(\gamma^{\prime}\) (less freedom for the adversary) may actually be the optimal choice. ## 5 experimental results The effectiveness of the proposed method has been assessed through experimenting on various datasets, including simulated data, and real-world datasets of histopathology images. Each experiment has been divided into two parts: i) cases in which both labeled and unlabeled data are sampled from the same distribution, and ii) the scenarios where the unlabeled data differs in distribution from the labeled ones. First, let us specify the datasets used in our experiments: 1. **Simulated data** consists of binary-labeled data points with a dimension of \(d=200\), generated according to the setting described in Section 2. 2. **NCT-CRC-HE-100K** consists of 100,000 histopathology images of colon tissue [24]. The images have dimensions of \(224\times 224\) and were captured at 20x magnification. The dataset is labeled with 9 distinct classes. 3. **PatchCamelyon** is a widely used benchmark dataset for medical image analysis. It consists of a large collection of 327,680 color histopathology images from lymph node, each with dimensions \(96\times 96\). The dataset has binary labels for presence/absence of metastatic tissue. ### Experiment of simulated data To evaluate the effectiveness of our method on simulated data, we first find the optimal classifier using only labeled samples. Then, we apply our method with a varying number of unlabeled samples. The results (see Table 1) show that our proposed method achieves accuracy improvements comparable to models trained only on labeled samples. Moreover, results indicate that our method is more effective when labeled and unlabeled data come from the same distribution. However, it still demonstrates significant improvement even when the unlabeled samples undergo a distribution shift. ### Experiment of Histopathology Data The processing pipeline over the real-world dataset of histopathology images is based on using a ResNet50 encoder pre-trained on ImageNet [25, 26], which extracts and stores \(1\times 1024\) embeddings from input images. Such embeddings are then used to train a deep neural network with four layers of size \(2048\) and one output layer for the class id. Also, we have used a LeakyReLU activation function. Experimental results in this part are shown in Table 2. Under the "same distribution" setting, both labeled and unlabeled data have been taken from the NCT-CRC-HE-100K dataset. On the other hand, "different distributions" setting implies that labeled data comes from the NCT-CRC-HE-100K dataset (labels are either "Normal" or "Tumor"), while the PatchCamelyon dataset was used for the unlabeled data. As a result, the final labeling is binary. The experimental results demonstrate that increasing the number of unlabeled data leads to an improvement in accuracy for both the'same' and 'different' distribution settings. ## 6 Conclusion In this work, we have studied both the robust and non-robust classification problem in the presence of a limited labeled dataset and a relatively larger collection of unlabeled samples. We further assume that unlabeled data have come from a slightly perturbed distribution compared to the original data distribution. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{3}{c}{} & \multicolumn{2}{c}{Same distribution} & \multicolumn{4}{c}{Different distribution} \\ \hline \#Labeled & Acc & \#Unlabeled & Acc & \#Labeled & Acc & \#Unlabeled & Acc \\ \hline \multirow{3}{*}{10} & & 10 & 0.63 & & & 10 & 0.61 \\ & 0.59 & 100 & 0.66 & 10 & 0.59 & 100 & 0.65 \\ & & 1,000 & 0.79 & & & 1,000 & 0.78 \\ & & 10,000 & **0.82** & & & 10,000 & **0.81** \\ \hline \multirow{3}{*}{20} & & 20 & 0.64 & & & 20 & 0.65 \\ & 0.62 & 200 & 0.69 & 20 & 0.62 & 200 & 0.65 \\ & & 2,000 & 0.80 & & & 2,000 & 0.79 \\ & & 10,000 & **0.82** & & & 10,000 & **0.80** \\ \hline \multirow{3}{*}{40} & & 40 & 0.65 & & & 40 & 0.65 \\ & 0.65 & 400 & 0.71 & 40 & 0.65 & 400 & 0.73 \\ \cline{1-1} & & 4,000 & 0.81 & & & 4,000 & 0.78 \\ \cline{1-1} & & 10,000 & **0.82** & & & 10,000 & **0.80** \\ \hline \hline 10,000 & **0.83** & - & - & 10,000 & **0.83** & - & - \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy of the model trained on labeled datasets of sizes \(10\), \(20\), \(40\), and \(10,000\) with varying amounts of unlabeled data from the same distribution with \(\alpha=0\) (**left**), and different distribution with \(\alpha=0.5\|\mathbf{\mu_{0}}\|_{2}\) (**right**). To the best of our knowledge, we present the first non-asymptotic tradeoff between the sizes of the labeled and unlabeled samples, denoted respectively by \(m\) and \(n\), in the case of learning a two-component Gaussian mixture model. In particular, we show when \(n\geq\Omega\left(m^{2}/d\right)\), the generalization bound improves compared to the case when only labeled data are used even if unlabeled data points are slightly out-of-domain. More sophisticated results for the generalization error of the robust and non-robust cases have been derived as well. Our technique is based on optimizing a robust loss (with an asymptotically decreasing radius for the uncertainty ball) in addition to a robust regularization term which forces the classifier to avoid crowded and dense areas. As a result, we have utilized many existing tools from self-training, distributionally robust learning and optimal transport to build our framework. We have validated our theoretical findings through a number of experiments on both synthetic and public real-world datasets. Our experiments show the classification accuracy (even for non-Gaussian cases) gets improved through feeding our method with out-of-domain unlabeled samples. Our proposed methodology is based on two key ideas: 1) Leveraging unlabeled out-of-domain data to improve the robust accuracy of a classifier, and 2) Adapting the radius of the uncertainty neighborhood based on the quantity of labeled and unlabeled samples to achieve a good trade-off between bias and variance in classification error. For future works, one can consider improving the bounds and trying to relax the conditions under which the utilization of unlabeled data becomes helpful. Also, deriving error lower-bounds and impossibility results seem to be another interesting research direction. Finally, we have restricted the level of distribution shift for the out-of-domain samples in order to achieve theoretical guarantees. Relaxing such restrictions seems to be another interesting problem to tackle.
2309.12540
Quantifying Harmony between Direct and Indirect Pathways in The Basal Ganglia; Healthy and Parkinsonian States
The basal ganglia (BG) show a variety of functions for motor and cognition. There are two competitive pathways in the BG; direct pathway (DP) which facilitates movement and indirect pathway (IP) which suppresses movement. It is well known that diverse functions of the BG may be made through "balance" between DP and IP. But, to the best of our knowledge, so far no quantitative analysis for such balance was done. In this paper, as a first time, we introduce the competition degree ${\cal C}_d$ between DP and IP. Then, by employing ${\cal C}_d$, we quantify their competitive harmony (i.e., competition and cooperative interplay), which could lead to improving our understanding of the traditional "balance" so clearly and quantitatively. We first consider the case of normal dopamine (DA) level of $\phi^*=0.3$. In the case of phasic cortical input (10 Hz), a healthy state with ${\cal C}_d^* = 2.82$ (i.e., DP is 2.82 times stronger than IP) appears. In this case, normal movement occurs via harmony between DP and IP. Next, we consider the case of decreased DA level, $\phi = \phi^*(=0.3)~x_{DA}$ ($1 > x_{DA} \geq 0$). With decreasing $x_{DA}$ from 1, the competition degree ${\cal C}_d$ between DP and IP decreases monotonically from ${\cal C}_d^*$, which results in appearance of a pathological Parkinsonian state with reduced ${\cal C}_d$. In this Parkinsonian state, strength of IP is much increased than that in the case of normal healthy state, leading to disharmony between DP and IP. Due to such break-up of harmony between DP and IP, impaired movement occurs. Finally, we also study treatment of the pathological Parkinsonian state via recovery of harmony between DP and IP.
Sang-Yoon Kim, Woochang Lim
2023-09-21T23:41:32Z
http://arxiv.org/abs/2309.12540v3
Ying-Yang Competitive Harmony between Direct and Indirect Pathways in A Spiking Neural Network of The Basal Ganglia ###### Abstract The basal ganglia (BG) in the brain exhibit a variety of functions for motor and cognition. There are two competing pathways in the BG; direct pathway (DP), facilitating movement and indirect pathway (IP), suppressing movement. It is well known that diverse functions of the BG could be done via "balance" between DP and IP. But, to the best of our knowledge, no quantitative analysis for such balance was done. In this paper, we consider a spiking neural network of the BG and make quantitative analysis for competitive harmony (i.e., competition and cooperative interplay) between DP and IP by introducing their competition degree \(\mathcal{C}_{d}\), given by the ratio of strength of DP (\(S_{DP}\)) to strength of IP (\(S_{IP}\)) (i.e., \(\mathcal{C}_{d}=\mathcal{S}_{DP}/\mathcal{S}_{IP}\)). We first consider the case of normal dopamine (DA) level of \(\phi^{*}=0.3\). In the case of phase cortical input (10 Hz) in the phasically-active state, a healthy state with \(\mathcal{C}_{d}^{*}=2.82\) (i.e., DP is \(2.82\) times stronger than IP) appears. In this case, normal movement occurs via harmony between DP and IP. Next, we consider the case of decreased DA level, \(\phi=\phi^{*}(=0.3)\;x_{DA}\) (\(1>x_{DA}\geq 0\)). With decreasing \(x_{DA}\) from 1, the competition degree \(\mathcal{C}_{d}\) between DP and IP decreases monotonically from \(\mathcal{C}_{d}^{*}\), which results in appearance of a pathological state (e.g., Parkinson's disease) with decreased competition degree. In this pathological state, strength of IP (\(\mathcal{S}_{IP}\)) is much increased than that in the case of normal healthy state, leading to disharmony between DP and IP. Due to such break-up of harmony between DP and IP, impaired movement occurs. Finally, we also study treatment of the pathological state via recovery of harmony between DP and IP. Basal ganglia, Direct pathway (DP), Indirect pathways(IP), Competitive harmony between DP and IP, Competition degree pacs: 87.19.lj, 87.19.lu, 87.19.rs ## I Introduction The basal ganglia (BG) in the brain are a group of subcortical deep-lying nuclei, receive cortical inputs from most regions of cortex, and provide output to the thalamus/brainstem [1; 2; 3; 4]. Their main function is motor control (e.g., initiation and execution of movement) [1; 2; 3; 4]. They also play an important role in cognitive processes (e.g., action selection) [5; 6; 7; 8; 9; 10]. Dysfunction in the BG is associated with a number of movement disorders, such as Parkinson's disease (PD), as well as cognitive disorders. As is well known, patients with PD show motor deficits such as slowed movement (bradykinesia), rigidity, and (resting) tremor, and they may also develop cognitive deficits such as dementia [11; 12; 13; 14]. Our spiking neural network (SNN) of the BG is based on anatomical and physiological data derived from rat-based studies [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. Hence, we use rat-brain terminology. The BG receive input from cortex through the input nuclei [stratum (Str) and subthalamic nucleus (STN)] and project output through the output nucleus [substantia nigra pars reticulata (SNr)] to the thalamus/brainstem [7; 10]. Here, the principal input nucleus, Str, receives cortical inputs from all over the cortex and is the primary recipient of dopamine (DA), coming from the substantia nigra pars compacta (SNc). Within the Str, spine projection neurons (SPNs), comprising up to 95 % of the whole striatal population, are the only primary output neurons [39; 40]. There are two types of SPNs with D1 and D2 receptors for the DA. The DA modulates firing activity of the D1 and D2 SPNs in a different way [41; 42; 43]. Two competing pathways, direct pathway (DP) and indirect pathway (IP), exist in the BG [44; 45; 46; 47]. D1 SPNs in the Str project inhibition directly to the output nucleus, SNr, via DP, and then the thalamus is disinhibited. As a result, movement facilitation occurs. On the other hand, D2 SPNs are connected to the SNr via IP, crossing the intermediate control nucleus, GP (globus pallidus), and the STN. In this case of IP, the firing activity of the SNr becomes enhanced mainly due to excitatory input from the STN. Consequently, firing activity of the thalamus becomes reduced, leading to movement suppression. In the case of normal DA level, DP is more active than IP, and an action is initiated (i.e., "Go" behavior occurs). In contrast, for low DA level, IP could be more active than DP, and then the action is withheld (i.e., "No-Go" behavior takes place). In this way, DP and IP are also called the "Go" and "No-Go" pathways, respectively [48; 49; 50; 51]. As is well known, diverse functions of the BG could be done via "balance" between the "Go" DP and the "No-Go" IP, and such balance is regulated by the DA level. So far, diverse subjects for the BG have been investigated in many computational works [5; 6; 7; 8; 9; 10; 40; 41; 42; 43; 44; 45; 46; 47; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72]. But, to the best of our knowledge, no quantitative anal ysis for balance between DP and IP was made. To make clear the concept of such balance, we make quantitative analysis for competitive harmony (i.e., competition and cooperative interplay) between DP and IP in our SNN of the BG. To do so, we introduce the competition degree \(\mathcal{C}_{d}\) between DP and IP, given by the ratio of strength of DP (\(\mathcal{S}_{DP}\)) to strength of IP (\(\mathcal{S}_{IP}\)) (i.e., \(\mathcal{C}_{d}=\mathcal{S}_{DP}/\mathcal{S}_{IP}\)). Here, \(\mathcal{S}_{DP}\) (\(\mathcal{S}_{IP}\)) is given by the magnitude of the total time-averaged synaptic current into the output nucleus, SNr, via DP (IP). We first consider the case of normal DA level of \(\phi^{*}=0.3\). For the tonic cortical input (3 Hz) in the resting state, a default state with \(\mathcal{C}_{d}\simeq 1\) (i.e., DP and IP are balanced) appears. In this default state, the neurons in the output nucleus, SNr, fire actively with the frequency 25.5 Hz, resulting in the locked state of the BG gate to the thalamus. Consequently, no movement occurs. On the other hand, for the phasic cortical input (10 Hz) in the phasically-active state, a healthy state with \(\mathcal{C}_{d}^{*}=2.82\) (i.e., DP is 2.82 times stronger than IP) is found to appear. In this healthy state, the firing frequency of the SNr becomes much reduced to 5.5 Hz from 25.5 Hz (default state), which leads to the opened state of the BG gate to the thalamus. Through this kind of competitive harmony between DP and IP, normal movement occurs in the healthy state, in contrst to the case of default state. Next, we consider the case of reduced DA level, \(\phi=\phi^{*}(=0.3)\)\(x_{DA}\) (\(1>x_{DA}\geq 0\)). As \(x_{DA}\) (i.e., fraction of the DA level) is decreased from 1, the competition degree \(\mathcal{C}_{d}\) between DP and IP is found to decrease monotonically from \(\mathcal{C}_{d}^{*}\), which leads to appearance of a pathological state with reduced competition degree. For the pathological state, strength of IP (\(\mathcal{S}_{IP}\)) is much increased than that for the normal healthy state, resulting in disharmony between DP and IP. Because of such break-up of harmony between DP and IP, arising from deficiency in DA production in the neurons of the SNe [73; 74], PD with impaired movement occurs. Finally, we also investigate treatment of the pathological state through recovery of harmony between DP and IP. This paper is organized as follows. In Sec. II, we describe our SNN for the BG. Then, in the main Sec. III, we make quantitative analysis for competitive harmony between the "Go" DP and the "No-Go" IP in our SNN of the BG. Finally, we give summary and discussion in Sec. IV. ## II Spiking Neural Network of the Basal Ganglia In this section, we describe our SNN for the BG, and briefly present the governing equations for the population dynamics; for details, refer to Appendices A and B. Based on the anatomical and the physiological properties of the BG [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38], this BG SNN, composed of D1/D2 SPNs, STN neurons, GP neurons, and SNr neurons, is developed. For simplicity, within the Str, only the dominant D1/D2 SPNs are taken into consideration (without considering a minor subpopulation of fast spiking interneurons). In addition, we also consider the modulation effect of DA on D1/D2 SPN and afferent synapses into the D1/D2 SPNs, the STN, and the GP [41; 42; 43]. ### Architecture of The Spiking Neural Network Figure 1 shows a box diagram of major neurons and synaptic connections in our BG SNN. This BG SNN is composed of the input nuclei (Str and STN), the output nucleus (SNr), and the intermediate controller (GP). Here, STN is the only excitatory neuron in the BG, while all the other ones are inhibitory neurons. Particularly, we note that the SNr makes inhibitory output projections to the thalamus/brainstem, in contrast to the usual case of excitatory outputs. Both Str and STN receive inputs from the cortex. Cortical inputs are modeled in terms of 1,000 independent Poisson spike trains with firing rate \(f_{i}\) (\(i=1,\cdots,1000\)). In the case of tonic cortical input in the resting state, Figure 1: Box diagram of our spiking neural network for the basal ganglia (BG). Excitatory and inhibitory connections are represented by lines with triangles and circles, respectively, and dopamine-modulated cells and synaptic connections are denoted in blue color. There are two input nuclei to the BG, striatum and STN (subthalamic nucleus), receiving the excitatory cortical input. In the striatum (primary input nucleus), there are two types of inhibitory spine projection neurons (SPNs); SPNs with the D1 receptors (D1 SPNs) and SPNs with D2 receptors (D2 SPNs). The D1 SPNs project inhibition directly to the output nucleus SNr (substatin nigra pars reticulate) via the direct pathway (DP; green color). On the other hand, the D2 SPNs are connected to the SNr via the indirect pathway (IP; red color) crossing the GP (globus pallidus) and the STN. The inhibitory output from the SNr to the thalamus/brainstem is controlled via competition between DP and IP. \(f=3\) Hz, while for the phasic cortical input in the phasically-active state, \(f=10\) Hz, independently of \(i\)[7; 40; 43; 75; 64; 76; 77; 78; 43]. Also, the principal input nucleus, Str, is the primary recipient of the DA (coming from the SNc).Within the Str, there are two types of SPNs with D1 and D2 receptors for the DA, comprising up to 95 % of the whole striatal population; a minor subpopulation of fast spiking interneurons are not considered in our SNN [39; 40]. These D1 and D2 SPNs exhibit different firing activities due to DA modulation [41; 42; 43]. There are two competing pathways in the BG [44; 45; 46; 47; 48; 49; 50; 51]. The D1 SPNs make inhibitory projection to the output nucleus, SNr, directly via the "Go" DP (green color in Fig. 1). Then, the thalamus becomes disinhibited, leading to movement facilitation. In contrast, the D2 SPNs are connected to the SNr through the "No-Go" IP (red color in Fig. 1), crossing the GP an the STN. Here, the GP plays a role of intermediate controller to modulate the firing activity of the STN. In this case of IP, the firing activity of the SNr becomes increased mainly because of excitatory input from the STN. As a result, firing activity of the thalamus becomes decreased, resulting in movement suppression. In this way, the firing activity of the output nucleus, SNr, is controlled via competition between "Go" DP (green) and "No-Go" IP (red). Based on the anatomical information [17], we choose the numbers of the striatal neurons, the STN neurons, the SNr neurons, and the GP neurons in the BG. Here we develop a scaled-down SNN where the total number of striatal neurons is \(2,791\), corresponding to \(\frac{1}{1000}\) of the \(2,791\cdot 10^{3}\) striatal cells found in the rat BG. Thus, we make scaling down with ratio \(10^{-3}\) for all the BG neurons [61; 67]. The total numbers of the BG neurons are shown in Table 1. We note that 90-97 % of the whole striatal population corresponds to the major subpopulation of D1/D2 SPNs [61]; here, we choose 95 %. The remaining 5 % corresponds to a minor subpopulation of fast spiking interneurons (which are not considered in our SNN). From the outside of the BG, the cortex (Ctx) provides the external excitatory inputs randomly to the D1/D2 SPNs and the STN neurons with the connection probabilities, \(p_{c}^{(SPN,Ctx)}=0.084\) (8.4 %) and \(p_{c}^{(\text{STN,Ctx})}=0.03\) (3 %), respectively [43]. As shown in Fig. 1, we consider random synaptic connections between BG cells; random recurrent connections between GP neurons are also considered. Table 2 shows the synaptic connection probabilities \(p_{c}^{(T,S)}\) from a presynaptic neuron in the source population (\(S\)) to a postsynaptic neuron in the target population (\(T\)) in the BG [64]. ### Single Neuron Models, Synaptic Currents, and DA Effects As elements of our BG SNN, we use the Izhikevich spiking neuron model which is computationally efficient as well as biologically plausible [80; 81; 82; 83], as in our previous works for spike-timing-dependent plasticity [84; 85; 86]. The Izhikevich model matches neurodynamics by tuning its intrinsic parameters, instead of matching electrophysiological data, in contrast to the Hodgkin-Huxley-type conductance-based models. Our BG SNN is composed of 5 populations of D1 SPNs, D2 SPNs, STN neurons, GP neurons, and SNr neurons. The state of a neuron in each population is characterized by its membrane potential \(v\) and the slow recovery variable \(u\) in the Izhikevich neuron model. Time-evolution of \(v\) and \(u\) is governed by three types of currents into the neuron, \(I_{ext}\) (external current), \(I_{syn}\) (synaptic current), and \(I_{stim}\) (stimulation current). Here, \(I_{ext}\), \(I_{syn}\), and \(I_{stim}\) represent stochastic external excitatory input from the external region (i.e., corresponding to the background part not considered in the modeling), the synaptic current, and the injected stimulation DC current, respectively. As the membrane potential reaches its apex (i.e., spike cutoff value), the neuron fires, and then the membrane potential \(v\) and the recovery variable \(u\) are reset. Detailed explanations on the Izhikevich neuron models for the D1/D2 SPNs, the STN neuron, the GP neuron, and the SNr neuron are presented in Appendix A [41; 42; 43]. Each Izhikevich neuron model has 9 intrinsic parameters which are shown in Table 3 in Appendix A. These values are based on physiological properties of the D1/D2 SPNs, the STN neurons, the GP neurons, and the SNR neurons [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. Next, we consider the synaptic currents \(I_{syn}\) into the BG neurons. As in our previous works [87; 88; 89; 90; 91; 92], we follow the "canonical" formalism for the synaptic currents; for details, refer to Appendix B. There are two types of excitatory AMPA and NMDA receptor-mediated synaptic currents and one kind of inhibitory GABA receptor \begin{table} \begin{tabular}{|c|c|} \hline & \(p_{c}^{(T,S)}\) \\ \hline D1 SPN \(\rightarrow\) SNr & 0.033 \\ \hline D2 SPN \(\rightarrow\) GP & 0.033 \\ \hline STN \(\rightarrow\) GP & 0.3 \\ \hline GP \(\rightarrow\) GP & 0.1 \\ \hline GP \(\rightarrow\) STN & 0.1 \\ \hline STN \(\rightarrow\) SNr & 0.3 \\ \hline GP \(\rightarrow\) SNr & 0.1066 \\ \hline \end{tabular} \end{table} Table 2: Synaptic connection probabilities \(p_{c}^{(T,S)}\) from a presynaptic neuron in the source population (\(S\)) to a postsynaptic neuron in the target population (\(T\)). \begin{table} \begin{tabular}{|c|c|} \hline \(N_{\text{D1}}\) & 1,325 \\ \hline \(N_{\text{D2}}\) & 1,325 \\ \hline \(N_{\text{STN}}\) & 14 \\ \hline \(N_{\text{GP}}\) & 46 \\ \hline \(N_{\text{SNr}}\) & 26 \\ \hline \end{tabular} \end{table} Table 1: Numbers of BG cells, \(N_{\text{X}}\) [\(X=\) D1 (SPN), D2 (SPN), STN, GP, and SNr] in our spiking neural network. mediated synaptic current. Synaptic conductance for each synaptic current is provided by multiplication of maximum conductance per synapse, average number of afferent synapses, and fraction of open postsynaptic ion channels. We note that, postsynaptic ion channels are opened via binding of neurotransmitters to receptors in the target population. A sum of the exponential-decay functions (controlled by the synaptic decay time constant and the synaptic latency time constant) over presynaptic spikes provide temporal evolution of the fraction of open ion channels. The synaptic parameter values (based on the physiological properties of the BG neurons) for the maximum synaptic conductance, the synaptic decay time constant, the synaptic latency time constant, and the synaptic reversal potential for the synaptic currents are given in Table 6 in Appendix B [20; 30; 31; 32; 33; 34; 35; 36; 37; 38; 42; 43; 64]. Finally, we consider the DA effect on our BG SNN [41; 42; 43]. Figure 1 shows effects of DA modulation on D1/D2 SPNs and synaptic currents into the D1/D2 SPNs, the STN neurons, and the GP neurons (blue color). The DA effects on the D1/D2 SPNs are well shown in the current-frequency (f-I) curves in Fig. 2A of Ref. [41]. We note changes from the basic model (without DA; red) to the D1 (green) and the D2 (blue) SPN models. Such changes occur due to different DA effects, depending on the D1 and D2 SPNs. D1 receptor activation has two opposing effects. Due to a hyperpolarizing effect, activation threshold is increased in comparison to the bare case, while after threshold, the slope of the f-I curve increases rapidly because of another depolarizing effect. In contrast, in the case of D2 SPN, only the depolarizing effect occurs, leading to left-shift of the bare f-I curve. As a result of DA effects, excitatory cortical inputs into the D1 (D2) SPNs are upscaled (downscaled), as shown well in Fig. 2C of Ref. [41]. All the other synaptic currents into the STN neurons and the GP neurons become downscaled due to DA effects. More details on the DA effects on the SPNs and synaptic currents are given in Appendices A and B, respectively. ## III Quantitative analysis of competitive harmony between DP and IP In this section, we quantitatively analyze competitive harmony (i.e., competition and cooperative interplay) between DP and IP by introducing the competition degree \(\mathcal{C}_{d}\) between them. \(\mathcal{C}_{d}\) is given by the ratio of strength of DP (\(\mathcal{S}_{DP}\)) to strength of IP (\(\mathcal{S}_{IP}\)) (i.e., \(\mathcal{C}_{d}=\mathcal{S}_{DP}/\mathcal{S}_{IP}\)). We first consider the normal DA level of \(\phi=0.3\); \(\phi_{1}\) (DA level for the D1 SPNs) \(=\phi_{2}\) (DA level for the D2 SPNs) \(=\phi\). For the tonic cortical input (\(f=3\) Hz) in the resting state, a default state with \(\mathcal{C}_{d}\simeq 1\) (i.e., DP and IP are nearly balanced) appears. In this default state, the BG gate to the thalamus is locked due to active firing activity of the neurons in the output nucleus SNr, which results in no movement. On the other hand, for the phasic cortical input (10 Hz) in the phasically-active state, a healthy state with \(\mathcal{C}_{d}^{*}=2.82\) (i.e., DP is 2.82 times stronger than IP) appears. In this healthy state, the BG gate to the thalamus becomes opened because the firing activity of the SNr neurons is much reduced. Thus, normal movement occurs via competitive harmony between DP and IP. Next, we consider the case of decreased DA level, \(\phi=\phi^{*}(=0.3)\)\(x_{DA}\) (\(1>x_{DA}\geq 0\)). With reducing \(x_{DA}\) from 1, the competition degree \(\mathcal{C}_{d}\) between DP and IP decreases monotonically from \(\mathcal{C}_{d}^{*}\) (\(=2.82\)), which results in appearance of a pathological state with reduced competition degree. In the pathological state, strength of IP (\(\mathcal{S}_{IP}\)) is much increased than that for the normal healthy state, leading to disharmony between DP and IP. Due to break-up of harmony between DP and IP, arising from deficiency in DA production in the neurons of the SNc [73; 74], PD with impaired movement occurs. Finally, we also study treatment of the pathological state via recovery of harmony between DP and IP. Figure 2: Default basal ganglia state for the tonic cortical input (3 Hz) in the resting state and normal DA level \(\phi=0.3\). Colors: parts, associated with DP (green), while parts, related to IP (red). Populations: \(X=\) D1 (SPN), D2 (SPN), STN, GP, and SNr. Raster plots of spikes and IPSRs (instantaneous population spike rates) \(R_{X}(t)\) of (a1) D1 SPN, (a2) D2 SPN, (a3) STN, (a4) GP, and (a5) SNr neurons. (b) Population-averaged mean firing rates (MFRs) \(\langle f_{i}^{(X)}\rangle\) of D1 SPN, D2 SPN, STN, GP, and SNr) neurons. (c) Time-averaged synaptic currents for DP (\(\overline{I_{DP}}\)) and IP (\(\overline{I_{IP}}\)). Inset shows the excitatory and the inhibitory components of the IP current, \(\overline{I_{IP}^{(E)}}\) and \(\overline{I_{IP}^{(I)}}\). (d) Strengths of DP (\(\mathcal{S}_{DP}\)) and IP (\(\mathcal{S}_{IP}\)). The competition degree \(\mathcal{C}_{d}(=\mathcal{S}_{DP}/\mathcal{S}_{IP})=0.99\). ### Healthy BG States with Harmony between DP and IP We consider the case of normal DA level of \(\phi=0.3\) for the D1 and D2 SPNs. As explained in Sec. II.1, cortical inputs are modeled in terms of 1,000 independent Poisson spike trains with firing rate \(f\). We first consider the case of tonic cortical input with \(f=3\) Hz in the resting state [7; 43; 40; 75; 76; 77; 78; 79]. Population firing activity of BG neurons may be well visualized in the raster plot of spikes which is a collection of spike trains of individual BG neurons. Figures 2(a1)-2(a5) show the raster plots of spikes for D1 SPNs (green), D2 SPNs (red), STN neurons (red), GP neurons (red), and SNr neurons, respectively; color of D1 SPNs, associated with DP is green, while color of BG cells related to IP is red. As a collective quantity exhibiting population behaviors, we use an IPSR (instantaneous population spike rate) which could be obtained from the raster plot of spikes [93; 94; 95; 96; 97]. In this case, each spike in the raster plot is convoluted with a kernel function \(K_{h}(t)\) to obtain a smooth estimate of IPSR \(R_{X}(t)\) in the \(X\) population (\(X=\) D1 (SPN), D2 (SPN), STN, GP, and SNr) [98]: \[R_{X}(t)=\frac{1}{N_{X}}\sum_{i=1}^{N_{X}}\sum_{s=1}^{n_{(X)}^{(X)}}K_{h}(t-t_ {s,i}^{(X)}). \tag{1}\] Here, \(N_{X}\) is the number of the neurons, and \(n_{i}^{(X)}\) and \(t_{s,i}^{(X)}\) are the total number of spikes and the \(s\)th spiking time of the \(i\)th neuron, respectively. As the kernel function, we employ a Gaussian function of band width \(h\): \[K_{h}(t)=\frac{1}{\sqrt{2\pi}h}e^{-t^{2}/2h^{2}},\ \ \ \ -\infty<t<\infty, \tag{2}\] where the band width \(h\) of \(K_{h}(t)\) is 20 msec. The IPSRs \(R_{X}(t)\) for \(X=\) D1 (SPN), D2 (SPN), STN, GP, and SNr are also shown in Figs. 2(a1)-2(a5), respectively. As shown in Fig. 2(b), population-averaged mean firing rates (MFRs) of BG neurons, \(\langle f_{i}^{(X)}\rangle\), for the tonic case are 1.03, 0.97, 9.9, 29.9, and 25.5 Hz for \(X=\) D1 (SPN), D2 (SPN), STN, GP, and SNr, respectively [7; 43; 64]; \(f_{i}^{(X)}\) is the MFR of the \(i\)th neuron in the \(X\) population and \(\langle\cdots\rangle\) denotes the population average over all neurons. For details, refer to Table 5 in Appendix A. In this case of default BG state, the D1 and D2 SPNs in the input nucleus, Str, are nearly silent. On the other hand, the output SNr neurons fire very actively, and hence the BG gate to the thalamus becomes locked, leading to no movement. There are two types of synaptic currents into the (output) SNr neurons, \(I_{DP}\) and \(I_{IP}\), via DP (green) and IP (red) in Fig. 1, respectively. For details of synaptic currents, refer to Appendix B; refer to Eq. (10) for all the currents into the neuron. Here, the DP current, \(I_{DP}(t)\), is just the (inhibitory) synaptic current from the D1 SPNs to the SNr neurons: \[I_{DP}(t)=-I_{syn}^{(\text{SNr},\text{D1})}(t). \tag{3}\] The IP current, \(I_{IP}(t)\), consists of the excitatory component, \(I_{IP}^{(E)}(t)\), and the inhibitory component, \(I_{IP}^{(I)}(t)\) : \[I_{IP}(t)=I_{IP}^{(E)}(t)+I_{IP}^{(I)}(t). \tag{4}\] Here, \(I_{IP}^{(E)}(t)\) [\(I_{IP}^{(I)}(t)\)] is just the synaptic current from the STN (GP) to the SNr: \[I_{IP}^{(E)}(t)=-I_{syn}^{(\text{SNr},\text{STN})}(t)\ \ \text{and}\ \ I_{IP}^{(I)}(t)=-I_{syn}^{(\text{SNr},\text{GP})}(t). \tag{5}\] We note that, firing activity of the (output) SNr neurons is determined via competition between DP current [\(I_{DP}(t)\)] and IP current [\(I_{IP}(t)\)] into the SNr. The strengths of DP and IP, \(\mathcal{S}_{DP}\) and \(\mathcal{S}_{IP}\), are given by the magnitudes of their respective time-averaged synaptic currents: \[\mathcal{S}_{DP}=|\overline{I_{DP}(t)}|\ \ \ \text{and}\ \ \ \mathcal{S}_{IP}=| \overline{I_{IP}(t)}|, \tag{6}\] where the overline represents the time averaging and \(|\cdots|\) denotes the absolute magnitude. Then, we introduce the competition degree \(\mathcal{C}_{d}\) between DP and IP, given by the ratio of \(\mathcal{S}_{DP}\) to \(\mathcal{S}_{IP}\): \[\mathcal{C}_{d}=\frac{\mathcal{S}_{DP}}{\mathcal{S}_{IP}}. \tag{7}\] For \(\mathcal{C}_{d}=1\), DP and IP are balanced, and the SNr neurons fire actively with the MFR 25.5 Hz. Hence, the thalamic cells become silent, leading to no movement. In the case of \(\mathcal{C}_{d}>1\), DP is more active than IP, and hence, the firing activities of SNr neurons are suppressed than the balanced state with \(\mathcal{C}_{d}=1\). Thus, the BG gate to the thalamus becomes open, leading to movement facilitation. On the other hand, for \(\mathcal{C}_{d}<1\), IP is more active than DP, and hence, the firing activity of SNr neurons are enhanced than the balanced state with \(\mathcal{C}_{d}=1\). Thus, the BG gate to the thalamus becomes locked, resulting in movement suppression. Hereafter, we employ the above competition degree \(\mathcal{C}_{d}\) between DP and IP and make quantitative analysis for all the default, healthy, and pathological states occurring in the BG. Figure 2(c) shows the time-averaged DP (green) and IP (red) currents for the tonic cortical input, \(\overline{I_{DP}(t)}=-23.1\) and \(\overline{I_{IP}(t)}=23.4\); in the case of IP current, time-averaged values (blue) of their excitatory and inhibitory components are also given, \(\overline{I_{IP}^{(E)}(t)}=470.3\) and \(\overline{I_{IP}^{(I)}}(t)=-446.9\). Thus, the strengths of DP and IP become \(\mathcal{S}_{DP}=23.1\) and \(\mathcal{S}_{IP}=23.4\), respectively, as shown in Fig. 2(d). Consequently, the competition degree between DP and IP is \(\mathcal{C}_{d}=0.99\) (i.e., DP and IP are nearly balanced). In this way, a default state with \(\mathcal{C}_{d}\simeq 1\) appears for the tonic cortical input. In this case, the (output) SNr neurons fire very actively at \(\langle f_{i}^{(\text{SNr})}\rangle=25.5\) Hz and make strong inhibitory projections to the thalamic neurons. Thus, the BG gate to the thalamus is locked for the tonic cortical input, resulting in no movement. We are also concerned about activation and deactivation of neurons in the target population \(X\)[99; 100] which could be used for treatment of pathological states. Optogenetics is a technique that combines optics and genetics to control the activity of target neurons in living organisms, typically using light-sensitive proteins called opsins. The target neurons are genetically modified to express these opsins (i.e., fusion of the opsins into the target neurons). When the opsins are activated by specific wavelengths of light, variation in the intrinsic ionic currents of the neurons in the target population \(X\), \(\Delta I_{ion}^{(X)}\), occurs. When \(\Delta I_{ion}^{(X)}\) is positive (negative), firing activity of the target neurons is increased (decreased), leading to their activation (deactivation). The governing equations for evolution of dynamical states of individual Izhikevich neurons in the \(X\) population are given in Eqs. (10) and (11) in Appendix A. Time evolutions of the dynamical variables are governed by the current \(I_{i}^{(X)}(t)\) of Eq. (12) in Appendix A into the \(i\)th neuron in the \(X\) population. Here, to simulate the effect of optogenetics, in addition to the current \(I_{i}^{(X)}(t)\), we include variation of the intrinsic ionic currents of the target neurons via the light stimulation, \(\Delta I_{ion}^{(X)}(t)\) in Eq. (10). Light stimulation for optogenetics is applied on target neurons in the case of tonic cortical input (3 Hz). As target neurons, we first consider D1 SPNs. With increasing the intensity of light stimulation, magnitude of \(\Delta I_{ion}^{(\rm D1)}\) increases. As an example, Figs. 3(a)-3(c) show the effects of optogenetics for \(\Delta I_{ion}^{(\rm D1)}=120\) pA. The MFR \(\langle f_{i}^{(X)}\rangle\) of D1 SPNs, associated with DP, is much increased to 7.65 Hz from 1.03 Hz (default state); MFRs of other neurons (D2 SPNs, STN, GP), related to IP, remain unchanged (i.e., same as those for the default state) [Fig. 3(a)]. Thus, DP becomes activated via activation of D1 SPNs. Then, firing activities of the output SNr neurons are much suppressed; the MFR of SNr neurons, \(\langle f_{i}^{(\rm SNr)}\rangle\), is much reduced from 25.5 Hz (default state) to 7.1 Hz (down-arrow). In this case, strength of the DP, \(\mathcal{S}_{DP}\) is much increased to 171.5 from 23.1 (default state) [Figs. 3(b) and 3(c)]. Thus, the competition degree \(\mathcal{C}_{d}\) between DP and IP becomes 7.33 which is much larger than that (= 0.99) for the default state. Consequently, through activation of DP, the BG gate to thalamus becomes opened, leading to movement facilitation. Next, D2 SPNs are considered as target neurons for optogenetics. As an example, Figs. 3(d)-3(f) show the effects of optogenetics for \(\Delta I_{ion}^{(\rm D2)}=150\) pA. The MFRs \(\langle f_{i}^{(X)}\rangle\) of the neurons [\(X\) = D2 (SPN), GP, STN], associated with IP, are changed, while the MFR \(\langle f_{i}^{(\rm D1)}\rangle\) of D1 SPNs, related to DP, remains unchanged [Fig. 3(d)]. \(\langle f_{i}^{(\rm D2)}\rangle\) of D2 SPNs is increased to 9.35 Hz from 0.97 Hz (default state). Due to increased inhibitory projections from D2 SPNs, \(\langle f_{i}^{(\rm GP)}\rangle\) of GP neurons is decreased to 6.9 Hz from 29.9 Hz (default state). Because of reduced firing activity of GP neurons, \(\langle f_{i}^{(\rm STN)}\rangle\) of the STN neurons increases to 17.7 Hz from 9.9 Hz (default state). Thus, the strength of IP, \(\mathcal{S}_{IP}\), becomes much increased to 156.8 from 23.4 (default state)[Figs. 3(e) and 3(f)]. In this way, Figure 3: Activations of DP and IP. Colors: parts, associated with DP (green), while parts, related to IP (red). Populations: \(X\) = D1 (SPN), D2 (SPN), STN, GP, and SNr. (1) Activation of DP for \(\Delta I_{ion}^{(\rm D1)}=120\) pA: (a) Population-averaged MFRs \(\langle f_{i}^{(X)}\rangle\) of D1 SPN, D2 SPN, STN, GP, and SNr neurons. Dotted boxes for D1 SPN and SNr represent population-averaged MFRs for \(\Delta I_{ion}^{(\rm D1)}=0\) pA, respectively. (b) Time-averaged synaptic current for DP \(\overline{I_{DP}}\) and IP \(\overline{I_{IP}}\). Inset shows the excitatory and the inhibitory components of the IP current, \(\overline{I_{IP}^{(E)}}\) and \(\overline{I_{IP}^{(I)}}\). (c) Strengths of DP (\(\mathcal{S}_{DP}\)) and IP (\(\mathcal{S}_{IP}\)). The competition degree \(\mathcal{C}_{d}=7.33\). (2) Activation of IP for \(\Delta I_{ion}^{(\rm D2)}=150\) pA: (d) Population-averaged MFR \(\langle f_{i}^{(X)}\rangle\) of D1 SPN, D2 SPN, STN, GP, and SNr neurons. Dotted boxes for D2 SPN, STN, GP, and SNr represent population-averaged MFRs for \(\Delta I_{ion}^{(\rm D2)}=0\) pA, respectively. (e) Time-averaged synaptic current for DP \(\overline{I_{DP}}\) and IP \(\overline{I_{IP}}\). Inset shows the excitatory and the inhibitory components of the IP current, \(\overline{I_{IP}^{(E)}}\) and \(\overline{I_{IP}^{(I)}}\). (f) Strengths of DP (\(\mathcal{S}_{DP}\)) and IP (\(\mathcal{S}_{IP}\)). The competition degree \(\mathcal{C}_{d}=0.15\). (3) Competition between DP and IP for \(\Delta I_{ion}^{(\rm D1)}=120\) pA: (g) Plots of strengths of DP (\(\mathcal{S}_{DP}\)) and IP (\(\mathcal{S}_{IP}\)) versus \(\Delta I_{ion}^{(\rm D2)}\). (h) Plot of the competition degree \(\mathcal{C}_{d}\) versus \(\Delta I_{ion}^{(\rm D2)}\). Horizontal dashed line represents \(\mathcal{C}_{d}=1\). (i) Plot of population-averaged MFR of SNr \(\langle f_{i}^{(\rm SNr)}\rangle\) versus \(\Delta I_{ion}^{(\rm D2)}\). Horizontal dashed line represents \(\langle f_{i}^{(\rm SNr)}\rangle=25.5\) Hz for \(\Delta I_{ion}^{(\rm D1)}=\Delta I_{ion}^{(\rm D2)}=0\) pA. (j) Bar diagram for the competition between DP and IP. Green and red represent DP \(>\) IP and IP \(>\) DP, respectively. IP is activated. Then, the competition degree, \(\mathcal{C}_{d}\), between DP and IP becomes 0.15 which is much smaller than that (\(=0.99\)) for the default state. As a result, via activation of IP, the BG gate to thalamus is locked, resulting in movement suppression. As a 3rd case, we study competition between DP and IP via light stimulation on both D1 and D2 SPNs. For simplicity, activation of D1 SPNs is fixed for \(\Delta I_{ion}^{\rm(D1)}=120\) pA; in this case, strength of the DP, \(\mathcal{S}_{DP}\) is 171.5. By increasing \(\Delta I_{ion}^{\rm(D2)}\) from 0, competition between DP and IP is investigated. Figures 3(g)-3(i) show well the effects of optogenetics on their competition. As \(\Delta I_{ion}^{\rm(D2)}\) is increased from 0, the strength of IP, \(\mathcal{S}_{IP}\) is found to monotonically increase from 23.4 [Fig. 3(g)]. Due to monotonic increase in \(\mathcal{S}_{IP}\), the competition degree \(\mathcal{C}_{d}\) between DP and IP decreases monotonically from 7.33 [Fig. 3(h)], and the MFR of the (output) SNr neurons, \(\langle f_{i}^{\rm(SNr)}\rangle\), increases monotonically from 7.1 Hz [Fig. 3(i)]. We note that, when passing a threshold, \(\Delta I_{ion}^{\rm(D2*)}(\simeq 158\) pA), \(\mathcal{S}_{IP}\) becomes the same as \(\mathcal{S}_{DP}\). Figure 3(j) shows a diagram for competition between DP and IP. For \(\Delta I_{ion}^{\rm(D2)}<\Delta I_{ion}^{\rm(D2*)}\), \(\mathcal{S}_{DP}\) of DP is larger than \(\mathcal{S}_{IP}\) of IP (i.e., \(\mathcal{C}_{d}>1\)), and then the MFR of SNr neurons, \(\langle f_{i}^{\rm(SNr)}\rangle\), becomes smaller than that (\(=25.5\) Hz) for the default state. Consequently, the BG gate to thalamus is opened, leading to movement facilitation. On the other hand, for \(\Delta I_{ion}^{\rm(D2)}>\Delta I_{ion}^{\rm(D2*)}\), \(\mathcal{S}_{IP}\) of IP is larger than \(\mathcal{S}_{DP}\) of DP, and then the mean firing rate of SNr neurons, \(\langle f_{i}^{\rm(SNr)}\rangle\), becomes larger than that (\(=25.5\) Hz) for the default state. As a result, the BG gate to thalamus is locked, resulting to movement suppression. From now on, we consider the case of phasic cortical input with \(f=10\) Hz in the phasically-active state, in contrast to the above case of tonic cortical input with \(f=3\) Hz in the resting default state [40; 43; 64; 75; 76; 77; 78; 79]. Population firing behaviors of the BG neurons may be well seen in the raster plots of spikes and they may also be characterized well in terms of their IPSRs. Figures 4(a1)-4(a5) show the raster plots of spikes and the IPSRs \(R_{X}(t)\) for \(X=\) D1 SPN (green), D2 SPN (red), STN (red), GP (red), and SNr, respectively. As shown in Fig. 4(b), population-averaged MFRs of BG neurons, \(\langle f_{i}^{(X)}\rangle\), for the phasic case are 30.7, 24.1, 39.8, 7.3, and 5.5 Hz for \(X=\) D1 (SPN), D2 (SPN), STN, GP, and SNr, respectively. We note that \(\langle f_{i}^{\rm(D1)}\rangle\) and \(\langle f_{i}^{\rm(D2)}\rangle\) of D1 and D2 SPNs are much larger than those for the tonic default case with \(\langle f_{i}^{\rm(D1)}\rangle=1.03\) Hz and \(\langle f_{i}^{\rm(D2)}\rangle=0.97\) Hz. As a result of activation of both D1 SPNs and D2 SPNs, both DP and IP become activated. In the case of IP, \(\langle f_{i}^{\rm(GP)}\rangle\) of GP neurons is reduced from that (\(=29.9\) Hz) for the resting default state due to strong inhibition from the D2 SPNs, and \(\langle f_{i}^{\rm(STN)}\rangle\) of STN neurons is increased from that (\(=9.9\) Hz) for the default state because of reduced inhibition from the GP neurons. Through competition between DP and IP, the firing activities of the output SNr neurons are suppressed [i.e. their MFR, \(\langle f_{i}^{\rm(SNr)}\rangle\), is reduced to 5.5 Hz from 25.5 Hz (default state)]. Due to reduced activity of SNr neurons, the thalamus becomes disinhibited. Thus, the BG gate to the thalamus is opened, leading to movement facilitation. We make quantitative analysis of DP and IP currents, \(I_{DP}\) and \(I_{IP}\), into the SNr. The strengths of DP and IP, \(\mathcal{S}_{DP}\) and \(\mathcal{S}_{IP}\), given by the magnitudes of time-averaged DP current (\(I_{DP}\)) and IP current (\(I_{IP}\)), are 2309.7 and 815.6, respectively [Figs. 4(c) and 4(d)]. They are much increased from \(\mathcal{S}_{DP}\) (\(=23.1\)) and \(\mathcal{S}_{IP}\) (\(=23.4\)) in the default state. But, we note that, in the case of phasic cortical input (10 Hz), \(\mathcal{S}_{DP}\) is much more increased than \(\mathcal{S}_{IP}\). Hence, the competition degree \(\mathcal{C}_{d}^{*}\) between DP and IP, given by the ratio of \(\mathcal{S}_{DP}\) to \(\mathcal{S}_{IP}\), becomes 2.82 (i.e., DP is 2.82 times stronger than IP), in contrast to the default state with \(\mathcal{C}_{d}\simeq 1\) (i.e., DP and IP are nearly balanced). As a result of more activeness of DP, the MFR of the output SNr neurons, \(\langle f_{i}^{\rm(SNr)}\rangle\), becomes much decreased to 5.5 Hz from 25.5 Hz (default state). Consequently, in this healthy state with \(\mathcal{C}_{d}^{*}=2.82\), the BG gate to the thalamus becomes opened, leading to facilitation of normal movement, via competitive harmony (i.e., competition and cooperative interplay) between DP and IP. Figure 4: Healthy basal ganglia state for the phasic cortical input (10 Hz) in the phasically-active state and normal DA level \(\phi=0.3\). Colors: parts, associated with DP (green), while parts, related to IP (red). Populations: \(X=\) D1 (SPN), D2 (SPN), STN, GP, and SNr. Raster plots of spikes and IPSRs \(R_{X}(t)\) of (a1) D1 SPN, (a2) D2 SPN, (a3) STN, (a4) GP, and (a5) SNr neurons. (b) Population-averaged MFR of D1 SPN, D2 SPN, STN, GP, and SNr neurons. (c) Time-averaged synaptic current for DP (\(\overline{I_{DP}}\)) and IP (\(\overline{I_{IP}}\)). Inset shows the excitatory and the inhibitory components of the IP current, \(\overline{I_{IP}^{(E)}}\) and \(\overline{I_{IP}^{(I)}}\). (d) Strengths of DP (\(\mathcal{S}_{DP}\)) and IP (\(\mathcal{S}_{IP}\)). The competition degree \(\mathcal{C}_{d}^{*}(=\mathcal{S}_{DP}/\mathcal{S}_{IP})=2.82\). ### Pathological BG States with Disharmony between DP and IP In this subsection, we consider the case of reduced DA level, \(\phi=\phi^{*}\)(= 0.3) \(x_{DA}\) (\(1>x_{DA}\geq 0\)); \(\phi^{*}\) (=0.3) is the normal DA level [73; 74]. With decreasing the fraction of DA level, \(x_{DA}\), we make quantitative analysis of strengths of DP (\(\mathcal{S}_{DP}\)) and IP (\(\mathcal{S}_{IP}\)), their competition degree \(\mathcal{C}_{d}\), and (population-averaged) MFRs, \(\langle f_{i}^{(X)}\rangle\) of the BG neurons in the \(X\) populations [\(X\) = D1 (SPN), D2 (SPN), STN, GP, and SNr], in the case of phasic cortical input with \(f=10\) Hz in the phasically-active state. For D1 SPNs, raster plots of spikes and IPSRs are shown in Figs. 5(a1)-5(a4) for \(x_{DA}\)= 0.9, 0.6, 0.4, and 0.1, respectively. Their (population-averaged) MFR \(\langle f_{i}^{(\mathrm{D1})}\rangle\) is found to monotonically decrease from 30.7 Hz [Fig. 5(b)]. Thus, D1 SPNs are under-active due to loss of DA, leading to occurrence of under-active DP. In the case of D2 SPNs, Figs. 5(c1)-5(c4) show raster plots of spikes and IPSRs for \(x_{DA}\)= 0.9, 0.6, 0.4, and 0.1, respectively. In contrast to the case of D1 SPNs, their (population-averaged) MFR \(\langle f_{i}^{(\mathrm{D2})}\rangle\) is found to monotonically increase from 24.1 Hz [Fig. 5(d)]. Thus, D2 SPNs are over-active because of loss of DA, resulting in appearance of over-active IP. In the case of STN and GP, associated with IP, their population firing behaviors are shown in their raster plots of spikes and IPSRs for \(x_{DA}\) = 0.9, 0.6, 0.4, and 0.1 [see Figs. 5(e1)-5(e4) for STN and see Figs. 5(g1)-5(g4) for GP]. Due to over-active firing activity of the D2 SPNs, the (population-averaged) MFR \(\langle f_{i}^{(\mathrm{GP})}\rangle\) of GP neurons is found to monotonically decrease with \(x_{DA}\) from 7.3 Hz [Fig. 5(h)]. Also, because of reduced firing activity of the GP neurons, the (population-averaged) MFR \(\langle f_{i}^{(\mathrm{STN})}\rangle\) of STN neurons is found to monotonically increase with \(x_{DA}\) from 39.8 Hz [Fig. 5(f)]. Figure 5(i) shows the plot of strengths of DP (green) and IP (red), \(\mathcal{S}_{DP}\) and \(\mathcal{S}_{IP}\), versus \(x_{DA}\). We note that, with decreasing \(x_{DA}\) from 1, \(\mathcal{S}_{IP}\) increases rapidly (i.e., over-active IP), while \(\mathcal{S}_{dP}\) decreases very slowly (i.e., under-active DP). Then, the competition degree \(\mathcal{C}_{d}\) between DP and IP, given by the ratio of \(\mathcal{S}_{IP}\) to \(\mathcal{S}_{DP}\), is found to monotonically decrease from \(\mathcal{C}_{d}^{*}\) (=2.82), corresponding to that in the healthy state with harmony between DP and IP). When passing a threshold \(x_{DA}^{*}\) (\(\simeq\) 0.27), \(\mathcal{C}_{d}=1\) (i.e., DP and IP are balanced); for \(x_{DA}>x_{DA}^{*}\), \(\mathcal{C}_{d}>1\), while for for \(x_{DA}<x_{DA}^{*}\), \(\mathcal{C}_{d}<1\). Figures 5(k1)-5(k4) and 5(l) show population and individual firing behaviors of the output SNr neurons, respec Figure 5: Pathological basal ganglia state for the phasic cortical input (10 Hz) in the phasically-active state. Colors: parts, associated with DP (green), while parts, related to IP (red). (a1)-(a4) Raster plots of spikes and IPSRs \(R_{\mathrm{D1}}(t)\) of D1 SPNs when \(x_{DA}\) (fraction of DA level) is 0.9, 0.6, 0.4, and 0.1, respectively. (b) Population-averaged MFR \(\langle f_{i}^{(\mathrm{D1})}\rangle\) of D1 SPNs versus \(x_{DA}\). (c1)-(c4) Raster plots of spikes and IPSRs \(R_{\mathrm{D2}}(t)\) of D2 SPNs when \(x_{DA}\) is 0.9, 0.6, 0.4, and 0.1, respectively. (d) Population-averaged MFR \(\langle f_{i}^{(\mathrm{D2})}\rangle\) of D2 SPNs versus \(x_{DA}\). (e1)-(e4) Raster plots of spikes and IPSRs \(R_{\mathrm{STN}}(t)\) of STN neurons when \(x_{DA}\) is 0.9, 0.6, 0.4, and 0.1, respectively. (f) Population-averaged MFR \(\langle f_{i}^{(\mathrm{STN})}\rangle\) of STN cells versus \(x_{DA}\). (g1)-(g4) Raster plots of spikes and IPSRs \(R_{\mathrm{GP}}(t)\) of GP neurons when \(x_{DA}\) is 0.9, 0.6, 0.4, and 0.1, respectively. (h) Population-averaged MFR \(\langle f_{i}^{(\mathrm{GP})}\rangle\) of GP cells versus \(x_{DA}\). (i) Plots of strengths of DP (\(\mathcal{S}_{DP}\)) and IP (\(\mathcal{S}_{IP}\)) versus \(x_{DA}\). (j) Plot of the competition degree \(\mathcal{C}_{d}\)(= \(\mathcal{S}_{DP}/\mathcal{S}_{IP}\)) versus \(x_{DA}\). Horizontal dashed line represents \(\mathcal{C}_{d}=1\). (k1)-(k4) Raster plots of spikes and IPSRs \(R_{\mathrm{SNr}}(t)\) of SNr neurons when \(x_{DA}\) is 0.9, 0.6, 0.4, and 0.1, respectively. (l) Population-averaged MFR \(\langle f_{i}^{(\mathrm{SNr})}\rangle\) of SNr neurons versus \(x_{DA}\). Horizontal dashed line represents \(\langle f_{i}^{(\mathrm{SNr})}\rangle=25.5\) Hz for the default tonic state. tively. With decreasing \(x_{DA}\) from 1, their population-averaged MFR \(\langle f_{i}^{\rm(SNr)}\rangle\) is found to monotonically increase from 5.5 Hz (corresponding to that in the healthy state). When \(x_{DA}\) passes its threshold, \(x_{DA}^{*}\) (\(\simeq 0.27\)), \(\langle f_{i}^{\rm(SNr)}\rangle\) becomes larger than 25.5 Hz [corresponding to that in the default state with \(\mathcal{C}_{d}\simeq 1\), and represented by the horizontal dashed line in Fig. 5(l)]. Due to loss of DA (\(x_{DA}<1\)), IP becomes highly over-active, while DP becomes under-active, in comparison to the healthy state with \(x_{DA}=1\). For \(1>x_{DA}>x_{DA}^{*}\) (\(\simeq~{}0.27\)), \(\mathcal{C}_{d}^{*}(=2.82)>\mathcal{C}_{d}>1\). In this case, DP is still stronger than IP, and hence the BG gate to the thalamus is opened. But, the (population-averaged) MFR of SNr neurons, \(\langle f_{i}^{\rm(SNr)}\rangle\), is larger than that (\(=5.5\) Hz) for the healthy state with \(\mathcal{C}_{d}^{*}\) (\(=2.82\)). Hence, with decreasing \(x_{DA}\) from 1, the "opening" degree (of the BG gate to the thalamus) is gradually reduced (i.e., occurrence of break-up of harmony between DP and IP), resulting in appearance of a pathological state (e.g., PD showing abnormal impaired movement)) with disharmony between DP and IP. For \(x_{DA}<x_{DA}^{*}\), \(\mathcal{C}_{d}<1\) and \(\langle f_{i}^{\rm(SNr)}\rangle>25.5\) Hz. In this case, IP is stronger than DP, and hence the BG gate to the thalamus becomes locked, leading to no movement. As \(x_{DA}\) is decreased from \(x_{DA}^{*}\) the "locking" degree of the BG gate (to the thalamus) is increased. ### Treatment of Pathological States via Recovery of Harmony between DP and IP For the pathological state, IP is over-active, while DP is under-active, in comparison to the healthy state. In this way, harmony between DP and IP is broken up in the case of the pathological state (i.e. occurrence of disharmony between DP and IP). Here, we investigate treatment of the pathological state with reduced competition degree \(\mathcal{C}_{d}\) [\(<\mathcal{C}_{d}^{*}\) (\(=2.82\) for the healthy state)] via recovery of harmony between DP and IP. In Fig. 3, activation and deactivation of the target neurons via optogenetics are studied. When the light-sensitive proteins (called the opsins) are activated by specific light stimulation, variation in the intrinsic ionic currents of the neurons in the target population \(X\), \(\Delta I_{ion}^{(X)}\), occurs. When \(\Delta I_{ion}^{(X)}\) is positive (negative), firing activity of the target neurons is increased (decreased), resulting in their activation (deactivation) [99; 100]. As discussed there, we simulate the effects of optogenetics by including \(\Delta I_{ion}^{(X)}\) in Eq. (A1) (in Appendix A), in addition to Figure 6: Treatment of pathological states. Colors: parts, associated with DP (green), while parts, related to IP (red). (1) Strengthening DP by activation of D1 SPN. Plots of (a1) \(\mathcal{S}_{DP}\) (strength of DP) and \(\mathcal{S}_{IP}\) (strength of IP), (a2) \(\mathcal{C}_{d}\) (competition degree), and (a3) \(\langle f_{i}^{\rm(SNr)}\rangle\) (MFR of SNr neurons) versus \(\Delta I_{ion}^{(\rm D1)}\) for \(x_{DA}=0.6\). (b) Plot of \(\Delta I_{ion}^{(\rm D1)*}\) (threshold) versus \(x_{DA}\). (2) Weakening IP by deactivation of D2 SPN. Plots of (c1) \(\mathcal{S}_{DP}\) and \(\mathcal{S}_{IP}\), (c2) \(\mathcal{C}_{d}\), and (c3) \(\langle f_{i}^{\rm(SNr)}\rangle\) versus \(\Delta I_{ion}^{(\rm D2)}\) for \(x_{DA}=0.6\). (d) Plot of \(\Delta I_{ion}^{(\rm D2)*}\) (threshold) versus \(x_{DA}\). (3) Weakening IP by deactivation of STN. Plots of (e1) \(\mathcal{S}_{DP}\) and \(\mathcal{S}_{IP}\), (e2) \(\mathcal{C}_{d}\), and (e3) \(\langle f_{i}^{\rm(SNr)}\rangle\) versus \(\Delta I_{ion}^{(\rm STN)*}\) for \(x_{DA}=0.6\). (f) Plot of \(\Delta I_{ion}^{(\rm STN)*}\) (threshold) versus \(x_{DA}\). (4) Weakening IP by ablation of STN neurons. Plots of (g1) \(\mathcal{S}_{DP}\) and \(\mathcal{S}_{IP}\), (g2) \(\mathcal{C}_{d}\), and (g3) \(\langle f_{i}^{\rm(SNr)}\rangle\) versus \(x_{\rm STN}\) for \(x_{DA}=0.6\). (h) Plot of \(x_{\rm STN}^{*}\) (threshold) versus \(x_{DA}\). Horizontal dashed lines in (a2), (c2), (e2), and (g2) represent \(\mathcal{C}_{d}^{*}\) (\(=2.82\)) for the healthy state when \(x_{DA}=1\). Horizontal dashed lines in (a3), (c3), (e3), and (g3) represent \(\langle f_{i}^{\rm(SNr)}\rangle\) (\(=5.5\) Hz) for the healthy state when \(x_{DA}=1\). the current, \(I_{i}^{(X)}\), into the target \(X\) population. As the intensity of light stimulation is increased, the magnitude of \(\Delta I_{ion}^{(X)}\) also increases. As an example, we consider the pathological state with \(\mathcal{C}_{d}=1.71\) for \(x_{DA}=0.6\) where harmony between DP and IP is broken up. In this pathological state, DP is under-active. Hence, we first strengthen the DP via activation of the target D1 SPNs. Figure 6(a1) shows plots of \(\mathcal{S}_{DP}\) (strength of DP) and \(\mathcal{S}_{IP}\) (strength of IP) versus \(\Delta I_{ion}^{(\mathrm{D1})}\). \(\mathcal{S}_{DP}\) (green) increases rapidly from 2200, while \(\mathcal{S}_{IP}\) (red) remains unchanged (i.e., 1288.9). Thanks to the strengthened DP, the competition degree \(\mathcal{C}_{d}\) between DP and IP is found to increase from 1.71 [Fig. 6(a2)]. Also, the population-averaged MFR of the output SNr neurons, \(\langle f_{i}^{(\mathrm{SNr})}\rangle\), is found to decrease from 13 Hz [Fig. 6(a3)]. We note that, when \(\Delta I_{ion}^{(\mathrm{D1})}\) passes a threshold \(\Delta I_{ion}^{(\mathrm{D1})*}\) (= 51 pA), \(\mathcal{C}_{d}\ =\ \mathcal{C}_{d}^{*}\) (= 2.82) and \(\langle f_{i}^{(\mathrm{SNr})}\rangle\ =\ \langle f_{i}^{(\mathrm{SNr})*}\rangle\) (= 5.5 Hz); \(\mathcal{C}_{d}^{*}\) and \(\langle f_{i}^{(\mathrm{SNr})*}\rangle\) are those for the healthy state, and they are represented by the horizontal dashed lines in Figs. 6(a2) and 6(a3). Thus, for \(x_{DA}=0.6\), the pathological state with \(\mathcal{C}_{d}=1.71\) may have \(\mathcal{C}_{d}^{*}\) (= 2.82) via activation of D1 SPNs for the threshold, \(\Delta I_{ion}^{(\mathrm{D1})*}\) (= 51 pA); DP becomes 2.82 times stronger than IP, as in the case of healthy state. In this way, balance between DP and IP is recovered for \(\Delta I_{ion}^{(\mathrm{D1})*}=51\) pA. Figure 6(b) shows the plot of \(\Delta I_{ion}^{(\mathrm{D1})*}\) versus \(x_{DA}\). As \(x_{DA}\) is decreased from 1, the threshold \(\Delta I_{ion}^{(\mathrm{D1})*}\) is increased; with decreasing \(x_{DA}\), more \(\Delta I_{ion}^{(\mathrm{D1})*}\) is necessary for recovery between DP and IP. In the pathological state for \(x_{DA}=0.6\), IP is over-active. Hence, for recovery of harmony between DP and IP, we try to weaken the IP via deactivation of D2 SPNs or STN neurons; in the case of deactivation, \(\Delta I_{ion}^{(X)}\) [\(X\) = D2 (SPN) and STN] is negative, in contrast to the case of activation with \(\Delta I_{ion}^{(\mathrm{D1})}>0\). Figures 6(c1)- 6(c3) and 6(d) show the case of deactivation of D2 SPNs. As the magnitude of \(\Delta I_{ion}^{(\mathrm{D2})}\) is increased (i.e., more negative), strength of IP, \(\mathcal{S}_{IP}\) (red), is found to decrease from 1288.9, while \(\mathcal{S}_{DP}\) (green) remains constant (= 2200). Due to the weakened IP, the competition degree \(\mathcal{C}_{d}\) between DP and IP increases from 1.71 [Fig. 6(c2)], and the population-averaged MFR of the output SNr neurons, \(\langle f_{i}^{(\mathrm{SNr})}\rangle\), decreases from 13 Hz [Fig. 6(c3)]. When passing a threshold \(\Delta I_{ion}^{(\mathrm{D2})*}\) (= -65 pA), the competition degree \(\mathcal{C}_{d}\) and the population-averaged MFR \(\langle f_{i}^{(\mathrm{SNr})}\rangle\) recover their values for the healthy state, \(\mathcal{C}_{d}^{*}\) (= 2.82) and \(\langle f_{i}^{(\mathrm{SNr})*}\rangle\) (= 5.5 Hz), as in the above case of activation of D1 SPNs. Thus, balance between DP and IP becomes recovered for \(\Delta I_{ion}^{(\mathrm{D2})*}\) = -65 pA. Figure 6(d) shows the plot of \(\Delta I_{ion}^{(\mathrm{D2})*}\) versus \(x_{DA}\). With decreasing \(x_{DA}\) from 1, the threshold \(\Delta I_{ion}^{(\mathrm{D2})*}\) is decreased (i.e., its magnitude increases). As \(x_{DA}\) is decreased from 1, more negative \(\Delta I_{ion}^{(\mathrm{D2})*}\) is required for recovery between DP and IP. We also study the case of deactivation of STN to weaken the IP. Figures 6(e1)- 6(e3) and 6(f) show the case of deactivation of STN. We note that the process of deactivation for STN is similar to that for D2 SPNs. Thus, when \(\Delta I_{ion}^{(\mathrm{STN})}\) passes a threshold, \(\Delta I_{ion}^{(\mathrm{STN})*}\) (= -42 pA), balance between DP and IP becomes recovered (i.e., \(\mathcal{C}_{d}\) and \(\langle f_{i}^{(\mathrm{SNr})}\rangle\) have their values for the healthy state) [Figs. 6(e2) and 6(e3)]. As \(x_{DA}\) is decreased from 1, the threshold value of \(\Delta I_{ion}^{(\mathrm{STN})*}\) is found to decrease, and hence more negative \(\Delta I_{ion}^{(\mathrm{STN})*}\) is necessary to get recovery between DP and IP [Fig. 6(f)]. Finally, instead of the above activation/deactivation via optogenetics, we also consider ablation of STN neurons in the pathological state for \(x_{DA}=0.6\) to reduce the over-activity of STN neurons. In the case of ablation, the number of STN neurons, \(N_{\mathrm{STN}}\), is reduced to \(N_{\mathrm{STN}}^{(n)}\)\(x_{\mathrm{STN}}\) (\(1>x_{\mathrm{STN}}\geq 0\)), where \(N_{\mathrm{STN}}^{(n)}\) (= 14) is the normal number of STN neurons and \(x_{\mathrm{STN}}\) is the fraction of number of STN neurons. We note that, the effect of decreasing \(x_{\mathrm{STN}}\) via ablation is similar to that of deactivation of STN neurons via optogenetics. Figures 6(g1)- 6(g3) and 6(h) show the case of ablation of STN neurons. With decreasing \(x_{\mathrm{STN}}\) from 1, strength of IP, \(\mathcal{S}_{IP}\) (red), is found to decrease from 1288.9 (i.e., IP becomes weakened) [Fig. 6(g1)]. When passing a threshold, \(x_{\mathrm{STN}}^{*}\) (\(\simeq 0.51\)), balance between DP and IP becomes recovered; \(\mathcal{C}_{d}\) and \(\langle f_{i}^{(\mathrm{SNr})}\rangle\) have their values for the healthy state with the balanced DP and IP [Figs. 6(g2) and 6(g3)]. Figure 6(h) shows the plot of \(x_{\mathrm{STN}}^{*}\) versus \(x_{DA}\). As \(x_{DA}\) is decreased, \(x_{\mathrm{STN}}^{*}\) decreases; more ablation (i.e., smaller \(x_{\mathrm{STN}}\)) is necessary for balance between DP and IP. ## IV Summary and Discussion The BG exhibit diverse functions for motor and cognition. They control voluntary movement and make a crucial role in cognitive processes (e.g., action selection). Dysfunction in the BG is related to movement disorder (e.g., PD) and cognitive disorder. There are two competing pathways in the BG, "Go" DP (facilitating movement) and "No-Go" IP (suppressing movement) [44; 45; 46; 47; 48; 49; 50; 51]. A variety of functions of the BG have been known to be done via "balance" between DP and IP. However, so far, no quantitative analysis for such balance was made. For quantitative analysis, we introduced the competition degree, \(\mathcal{C}_{d}\), between DP and IP, given by the ratio of strength of DP (\(\mathcal{S}_{DP}\)) to strength of IP (\(\mathcal{S}_{IP}\)) (i.e., \(\mathcal{C}_{d}=\mathcal{S}_{DP}/\mathcal{S}_{IP}\)); \(\mathcal{S}_{DP}\) (\(\mathcal{S}_{IP}\)) is just the magnitude of time-averaged DP (IP) current into the SNr (output nucleus) [i.e., \(\mathcal{S}_{DP}\) (\(\mathcal{S}_{IP}\)) = \(\overline{|I_{DP}(t)|}\) (\(\overline{|I_{IP}(t)|}\)) (the overline represents time averaging)]. By employing \(\mathcal{C}_{d}\), we quantitatively analyzed competitive harmony (i.e., competition and cooperative interplay) between DP and IP. The case of normal DA level of \(\phi^{*}=0.3\) was first con sidered. A default BG state with \(\mathcal{C}_{d}\simeq 1\) (i.e., DP and IP are balanced) was found to appear for the tonic cortical input (3 Hz) in the resting state. In this default case, the firing activities of the output SNr neurons are very active with the firing frequency \(f=25.5\) Hz, leading to the locked state of the BG gate to the thalamus. As a result, no voluntary movement occurs. In contrast, for the phasic cortical input (10 Hz) in the phasically-active state, a healthy state with \(\mathcal{C}_{d}^{*}=2.82\) was found to appear. In this healthy case, DP is 2.82 times stronger than IP, in contrast to the default case with balanced DP and IP. Due to more activeness of DP, the firing frequency of the SNr neurons becomes much reduced to 5.5 Hz, resulting in the opened state of the BG gate to the thalamus. Consequently, normal movement occurs via competitive harmony between DP and IP. However, as the DA level, \(\phi=\phi^{*}(=0.3)\)\(x_{DA}\) (\(1>x_{DA}\geq 0\)), is reduced, the competition degree \(\mathcal{C}_{d}\) between DP and IP was found to monotonically decrease from \(\mathcal{C}_{d}^{*}\), resulting in appearance of a pathological state. In the case of the pathological state, strength of IP (\(\mathcal{S}_{IP}\)) was found to be much increased than that for the normal healthy state, which leads to disharmony between DP and IP. Due to break-up of harmony between DP and IP, generating from deficiency in DA production in the neurons of the SNc [73; 74], a pathological state (e.g., PD with impaired movement) occurs. In the case of the pathological state such as PD, DP is under-active, while IP is over-active, in comparison to the healthy state. We also investigated treatment of the pathological state via recovery of harmony between DP and IP. We included the effects of optogenetics, activating/deactivating the target neurons, in the governing equations of their states by adding \(\Delta I_{ion}^{(X)}\) (variation in the intrinsic ionic current of the target caused by the optogenetics). DP was found to be strengthened via activation of D1 SPNs, while IP was found to be weakened through deactivation of D2 SPNs or STN neurons. As a result of this kind of activation/deactivation, the competition degree (\(\mathcal{C}_{d}\)) and the population-averaged MFR (\(\langle f_{i}^{(\text{SNr})}\rangle\)) of the SNr neurons were found to have their ones for the healthy state, [i.e., \(\mathcal{C}_{d}^{*}=2.82\) and \(\langle f_{i}^{(\text{SNr})*}\rangle=5.5\) Hz]. In this way, treatment was done through recovery of harmony between DP and IP. Finally, we discuss limitations of our present work and future works. In addition to motor control, the BG plays an important role in cognitive processes such as action selection [67; 5; 67; 7; 8; 9; 10]. In this case, a BG network with parallel channels, representing different action requests, arising from the cortex, is usually considered. Saliency of a channel may be given by the firing frequency of its cortical input; the higher frequency denotes the higher saliency. Resolution of competition between the channels may be given by selection of a particular channel with the highest salience. Firing activities of the SNr neurons in the highest salient channel are suppressed below the tonic firing frequency (threshold), and hence action in this channel is selected. On the other hand, in the other neighboring channels, firing activities of the SNr neurons are enhanced above the tonic frequency, and hence actions in these channels are not selected. As a future work, we could apply our present approach, based on the competition degree \(\mathcal{C}_{d}\), to the case of action selection. Saliency of each channel may be given by its \(\mathcal{C}_{d}\). Then, action in the channel with the highest \(\mathcal{C}_{d}\) could be selected. Next, in future, we would like to consider more realistic SNN for the BG. In our present SNN, we consider only the D1/D2 SPNs (95 % major population) in the striatum (primary input nucleus in BG). But, the remaining minor population of fast interneurons (FSIs) are known to exert strong effects on firing activities of the D1/D2 SPNs [101; 40]. Hence, it is worth while to include the FSIs in the SNN for the BG. Of course, the effects of DA on the FSIs and their synaptic inputs must also be considered. In this way, to take into consideration the effects of the FSIs would be suitable for more complete SNN for the BG. Moreover, it would be desirable that, our present BG SNN with cortical inputs modelled by Poisson spike trains is extended to the cortico-BG-thalamo-cortical (CBGTC) loop by including the cortical and the thalamic neurons for more complete computational work [104; 52]. We also discuss application of the optogenetic techniques to human patients for treatment of a pathological state [102; 103]. In a pathological state with movement disorder (e.g., PD), harmony between DP and IP is broken up; DP is under-active, while IP is over-active, in comparison to the healthy case. As shown in Sec. III.3, such harmony between DP and IP could be recovered by strengthening DP or weakening IP. To this end, optogenetics may be used. Activation of D1 SPNs via optogenetics leads to strengthening DP and deactivation of D2 SPNs or STN neurons through optogenetics results in weakening IP. We hope that, in near future, safe clinical applications of optogenetic techniques to human patients could be successfully available through collaboration of researchers and clinicians. Then, it would take a substantial step forward for treatment of PD. ###### Acknowledgements. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant No. 20162007688). SYK thanks Profs. Sengor and Humphries for discussions on the basal ganglia at the initial stage of this work. ## Appendix A Izhikevich Spiking Neuron Models and DA Effects The Izhikevich neuron models are chosen as elements of our BG SNN [80; 81; 82; 83]. Evolution of dynamical states of individual neurons in the \(X\) population [\(X\) = D1 (SPN), D2 (SPN), STN, GP, and SNr] is governed by the following equations: \[C_{X}\frac{dv_{i}^{(X)}}{dt} = k_{X}(v_{i}^{(X)}-v_{r}^{(X)})(v_{i}^{(X)}-v_{t}^{(X)}) \tag{10}\] \[-u_{i}^{(X)}+I_{i}^{(X)},\] \[\frac{du_{i}^{(X)}}{dt} = a_{X}\left\{b_{X}(v_{i}^{(X)}-v_{r}^{(X)})-u_{i}^{(X)}\right\};\] (11) \[\qquad i=1,...,N_{X},\] with the auxiliary after-spike resetting: \[\text{if }v_{i}^{(X)}\geq v_{peak}^{(X)},\text{ then }v_{i}^{(X)}\gets c_{X} \text{ and }u_{i}^{(X)}\gets u_{i}^{(X)}+d_{X}, \tag{12}\] where \(N_{X}\) and \(I_{i}^{(X)}(t)\) are the total number of neurons and the current into the \(i\)th neuron in the \(X\) population, respectively. In Eqs. (10) and (11), the dynamical state of the \(i\)th neuron in the \(X\) population at a time \(t\) (msec) is characterized by its membrane potential \(v_{i}^{(X)}(t)\) (mV) and the slow recovery variable \(u_{i}^{(X)}(t)\) (pA). When the membrane potential \(v_{i}^{(X)}(t)\) reaches its apex \(v_{peak}^{(X)}\) (i.e., spike cutoff value), the neuron fires, and then the membrane potential \(v_{i}^{(X)}\) and the recovery variable \(u_{i}^{(X)}\) are reset according to the rules of Eq. (12). There are 9 intrinsic parameters in each \(X\) population; \(C_{X}\) (pF): membrane capacitance, \(v_{r}^{(X)}\) (mV): resting membrane potential, \(v_{t}^{(X)}\) (mV): instantaneous threshold potential, \(k_{X}\) (nS/mV): parameter associated with the neuron's rheobase, \(a_{X}\) (msec\({}^{-1}\)): recovery time constant, \(b_{X}\) (nS): parameter associated with the input resistance, \(c_{X}\) (mV): after-spike reset value of \(v_{i}^{(X)}\), \(d_{X}\) (pA): after-spike jump value of \(u_{i}^{(X)}\), and \(v_{peak}^{(X)}\) (mV): spike cutoff value. Table 3 shows the 9 intrinsic parameter values of D1 SPN, D2 SPN, STN, GP, and SNr; in addition to the parameter values of the D1/D2 SPNs given in [41; 42], we get the parameter values of the other neurons (STN, GP, SNr), based on the work in [43]. In the case of GP and STN, we consider the major subpopulations of high frequency pause (85 %) and short rebound bursts (60 %), respectively. Also, we use the standard 2-variable Izhikevich neuron model for the STN, instead of the 3-variable Izhikevich neuron model in [43]; these two models give nearly the same results for the STN. We also consider the effects of DA modulation on the D1 and D2 SPNs [41; 42; 43]. D1 receptors activation has two opposing effects on intrinsic ion channels. It enhances the inward-rectifying potassium current (KIR), leading to hyperpolarization of the D1 SPN. In contrast, it lowers the activation threshold of the L type Ca\({}^{2+}\) current, resulting in depolarization of the D1 SPN. These two hyperpolarization and depolarization effects are modelled via changes in intrinsic parameters of the D1 SPN: \[v_{r} \leftarrow v_{r}(1+\beta_{1}^{\text{(D1)}}\phi_{1}), \tag{13}\] \[d \leftarrow d(1-\beta_{2}^{\text{(D1)}}\phi_{1}). \tag{14}\] Here, Eq. (13) models the hyperpolarizing effect of the increasing KIR by upscaling \(v_{r}\), while Eq. (14) models enhanced depolarizing effect of the L type Ca\({}^{2+}\) current by downscaling \(d\). The parameters \(\beta_{1}^{(D1)}\) and \(\beta_{2}^{(D1)}\) denote the amplitudes of their respective effects, and \(\phi_{1}\) is the DA level (i.e., fraction of active DA receptors) for the D1 SPNs. Next, D2 receptors activation has small inhibitory effect on the slow A-type potassium current, leading to decrease in the neuron's rheobase current. This depolarizing effect is well modelled by downscaling the parameter, \(k\): \[k\gets k(1-\beta_{1}^{\text{(D2)}}\phi_{2}), \tag{15}\] \begin{table} \begin{tabular}{|l|l|l|} \hline D1 SPN & \(\begin{array}{l}v_{r}\gets v_{r}(1+\beta_{1}^{\text{(D1)}}\phi_{1}) \\ \end{array}\) & \(\begin{array}{l}\beta_{1}^{\text{(D1)}}=0.0289\\ \beta_{2}^{\text{(D1)}}=0.331\\ \end{array}\) \\ \hline D2 SPN & \(k\gets k(1-\beta_{1}^{\text{(D2)}}\phi_{2})\) & \(\beta_{2}^{\text{(D2)}}=0.032\) \\ \hline \end{tabular} \end{table} Table 4: Effects of DA modulation on intrinsic parameters of the D1/D2 SPNs. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Parameters & D1/D2 SPN & STN & GP & SNr \\ \hline \(C_{X}\) & 16.1 & 23.0 & 68.0 & 172.1 \\ \hline \(v_{r}^{(X)}\) & -80.0 & -56.2 & -53.0 & -64.58 \\ \hline \(v_{r}^{(X)}\) & -29.3 & -41.4 & -44.0 & -51.8 \\ \hline \(k_{X}\) & 1 & 0.439 & 0.943 & 0.7836 \\ \hline \(a_{X}\) & 0.01 & 0.021 & 0.0045 & 0.113 \\ \hline \(b_{X}\) & -20 & 4 & 3.895 & 11.057 \\ \hline \(c_{X}\) & -55 & -47.7 & -58.36 & -62.7 \\ \hline \(d_{X}\) & 84.2 & 17.1 & 0.353 & 138.4 \\ \hline \(v_{peak}^{(X)}\) & 40 & 15.4 & 25 & 9.8 \\ \hline \end{tabular} \end{table} Table 3: Intrinsic parameter values for each BG cell in the \(X\) (= D1 (SPN), D2 (SPN), STN, GP, SNr) population. where \(\phi_{2}\) is the DA level for the D2 SPNs, and the parameter \(\beta^{(D2)}\) represents the downscaling degree in \(k\). Table 4 shows DA modulation on the intrinsic parameters of the D1/D2 SPNs where the parameter values of \(\beta_{1}^{(D1)}\), \(\beta_{2}^{(D1)}\), and \(\beta^{(D2)}\) are given [41; 42; 43]. In this paper, we consider the case of \(\phi_{1}=\phi_{2}=\phi\). Time-evolution of \(v_{i}^{(X)}(t)\) and \(u_{i}^{(X)}(t)\) in Eqs. (19) and (20) is governed by the current \(I_{i}^{(X)}(t)\) into the \(i\)th neuron in the \(X\) population, given by: \[I_{i}^{(X)}(t)=I_{ext,i}^{(X)}(t)-I_{syn,i}^{(X)}(t)+I_{stim}^{(X)}(t). \tag{21}\] Here, \(I_{ext,i}^{(X)}\), \(I_{syn,i}^{(X)}(t)\), and \(I_{stim}^{(X)}(t)\) denote the external current from the external background region (not considered in the modeling), the synaptic current, and the injected stimulation current, respectively. In our BG SNN, we consider the case of no injected stimulation DC current (i.e., \(I_{sym}=0\)). The external current \(I_{ext,i}^{(X)}(t)\) may be modeled in terms of \(I_{spon,i}^{(X)}\) [spontaneous current for spontaneous firing activity, corresponding to time average of \(I_{ext,i}^{(X)}(t)\)] and \(I_{back,i}^{(X)}(t)\) [random background input, corresponding to fluctuation from time average of \(I_{ext,i}^{(X)}(t)\)]. In the BG population, \(I_{spon}^{(X)}\) (independent of \(i\)) is just the spontaneous in-vivo current, \(I_{vivo}^{(X)}\), to get the spontaneous in-vivo firing rate \(f_{vivo}^{(X)}\) in the presence of synaptic inputs in the resting state (in-vivo recording in awake resting state with tonic cortical input). The random background current \(I_{back,i}^{(X)}(t)\) is given by: \[I_{back,i}^{(X)}(t)=D_{X}\cdot\xi_{i}^{(X)}(t). \tag{22}\] Here, \(D_{X}\) is the parameter controlling the noise intensity and \(\xi_{i}^{(X)}\) is the Gaussian white noise, satisfying the zero mean and the unit variance [84; 85; 86]: \[\langle\xi_{i}^{(X)}(t)\rangle=0\text{ and }\langle\xi_{i}^{(X)}(t)\xi_{j}^{(X) }(t^{\prime})\rangle=\delta_{ij}\delta(t-t^{\prime}). \tag{23}\] Table 5 shows in-vivo firing activities of BG neurons in awake resting state with tonic cortical input for the normal DA level of \(\phi=0.3\); spontaneous in-vivo currents \(I_{vivo}^{(X)}\), in-vivo firing rates \(f_{vivo}^{(X)}\), and random background inputs \(D_{X}^{*}\) for [7; 43; 64] are given. ## Appendix B Synaptic Currents and DA Effects We explain the synaptic current \(I_{syn,i}^{(X)}(t)\) in Eq. (21). There are two kinds of excitatory synaptic currents, \(I_{\text{AMPA},i}^{(X,Y)}(t)\) and \(I_{\text{NMDA},i}^{(X,Y)}(t)\), which is are the AMPA (\(\alpha\)-amino-3-hydroxy-5-methyl-4-isoxazole-receptor-mediated and NMDA (\(N\)-methyl-\(D\)-aspartate) receptor-mediated currents from the presynaptic source \(Y\) population to the postsynaptic \(i\)th neuron in the target \(X\) population, respectively. In addition to these excitatory synaptic currents, there exists another inhibitory synaptic current, \(I_{\text{GABA},i}^{(X,Z)}(t)\), which is the GABA\({}_{\text{A}}\) (\(\gamma\)-aminobutyric acid type A) receptor-mediated current from the presynaptic source \(Z\) population to the postsynaptic \(i\)th neuron in the target \(X\) population. Here, we follow the "canonical" formalism for the synaptic currents, as in our previous works in the cerebellum [87; 88] and the hippocampus [89; 90; 91; 92]. The synaptic current \(I_{R,i}^{(T,S)}(t)\)\(R\) (= AMPA, NMDA, or GABA) from the presynaptic source \(S\) population to the \(i\)th postsynaptic neuron in the target \(T\) population obeys the following equation: \[I_{R,i}^{(T,S)}(t)=g_{R,i}^{(T,S)}(t)\text{ }(v_{i}^{(T)}(t)-V_{R}^{(S)}). \tag{24}\] Here, \(g_{(R,i)}^{(T,S)}(t)\) and \(V_{R}^{(S)}\) are synaptic conductance and synaptic reversal potential, respectively. The synaptic conductance \(g_{R,i}^{(T,S)}(t)\) is given by: \[g_{R,i}^{(T,S)}(t)=\widetilde{g}_{max,R}^{(T,S)}\sum_{j=1}^{N_{S}}w_{ij}^{(T,S )}\text{ }s_{j}^{(T,S)}(t), \tag{25}\] \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Parameters & D1/D2 SPN & STN & GP & SNr \\ \hline \(I_{vivo}^{(X)}\) & 0 & 56.5 & 84.0 & 292.0 \\ \hline \(f_{vivo}^{(X)}\) & 1 & 9.9 & 29.9 & 25.5 \\ \hline \(D_{X}^{*}\) & 246 & 11.9 & 274 & 942 \\ \hline \end{tabular} \end{table} Table 5: Spontaneous in-vivo current \(I_{vivo}^{(X)}\), in-vivo firing rates \(f_{vivo}^{(X)}\), and random background input \(D_{X}^{*}\) for in-vivo firing activities of BG cells in awake resting state with tonic cortical input (3 Hz) for the normal DA level of \(\phi=0.3\); \(X\) = D1 (SPN), D2 (SPN), STN, GP, and SNr \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(S\to T\) & \(R\) & \(\tilde{g}_{max,R}^{(T,S)}\) & \(\tau_{R,d}^{(T,S)}\) & \(\tau_{R,l}^{(T,S)}\) & \(V_{R}^{(S)}\) \\ \hline \multirow{2}{*}{\(\text{Ctx}\rightarrow\) D1/D2 SPN} & AMPA & 0.6 & 6 & 10 & 0 \\ \cline{2-6} & NMDA & 0.3 & 160 & 10 & 0 \\ \hline \multirow{2}{*}{\(\text{Ctx}\rightarrow\) STN} & AMPA & 0.388 & 2 & 2.5 & 0 \\ \cline{2-6} & NMDA & 0.233 & 100 & 2.5 & 0 \\ \hline D1 SPN \(\rightarrow\) SNr & GABA & 4.5 & 5.2 & 4 & -80 \\ \hline D2 SPN \(\rightarrow\) GP & GABA & 3.0 & 6 & 5 & -65 \\ \hline \multirow{2}{*}{\(\text{STN}\rightarrow\) GP} & AMPA & 1.29 & 2 & 2 & 0 \\ \cline{2-6} & NMDA & 0.4644 & 100 & 2 & 0 \\ \hline GP \(\leftrightarrow\) GP & GABA & 0.765 & 5 & 1 & -65 \\ \hline GP \(\rightarrow\) STN & GABA & 0.518 & 8 & 4 & -84 \\ \hline \multirow{2}{*}{\(\text{STN}\rightarrow\) SNr} & AMPA & 12 & 2 & 1.5 & 0 \\ \cline{2-6} & NMDA & 5.04 & 100 & 1.5 & 0 \\ \hline GP \(\rightarrow\) SNr & GABA & 73 & 2.1 & 3 & -80 \\ \hline \end{tabular} \end{table} Table 6: Parameters for the synaptic currents from the source population (\(S\)) to the target population (\(T\)): Maximum synaptic conductances \(\tilde{g}_{max,R}^{(T,S)}\), synaptic decay times \(\tau_{R,d}^{(T,S)}\), synaptic delay times \(\tau_{R,d}^{(T,S)}\), and synaptic reversal potential \(V_{R}^{(S)}\). where \(\widetilde{g}^{(T,S)}_{max,R}\) and \(N_{S}\) are the maximum synaptic conductance and the number of neurons in the source population \(S\). Here, the connection weight \(w^{(T,S)}_{ij}\) is 1 when the \(j\)th presynaptic neuron is connected to the \(i\)th postsynaptic neuron; otherwise (i.e., in the absence of such synaptic connection), \(w^{(T,S)}_{ij}=0\). We note that, \(s^{(T,S)}(t)\) in Eq. (2) denote fraction of open postsynaptic ion channels which are opened through binding of neurotransmitter (emitted from the source population \(S\)). A sum of exponential-decay functions \(E^{(T,S)}_{R}(t-t^{(j)}_{f}-\tau^{(T,S)}_{R,l})\) provides time evolution of \(s^{(T,S)}_{j}(t)\) of the \(j\)th cell in the source \(S\) population: \[s^{(T,S)}_{j}(t)=\sum_{f=1}^{F^{(S)}_{j}}E^{(T,S)}_{R}(t-t^{(j)}_{f}-\tau^{(T,S )}_{R,l}), \tag{3}\] where \(F^{(S)}_{j}\), \(t^{(j)}_{f}\), and \(\tau^{(T,S)}_{R,l}\) are the total number of spikes and the \(f\)th spike time of the \(j\)th neuron, and the synaptic latency time constant, respectively. Similar to our previous works in the cerebellum [87; 88], we use the exponential-decay function \(E^{(T,S)}_{R}(t)\) (for contribution of a presynaptic spike occurring at \(t=0\) in the absence of synaptic latency): \[E^{(T,S)}_{R}(t)=e^{-t/\tau^{(T,S)}_{R,d}}\cdot\Theta(t). \tag{4}\] Here, \(\tau^{(T,S)}_{R,d}\) is the synaptic decay time constant and the Heaviside step function satisfies \(\Theta(t)=1\) for \(t\geq 0\) and 0 for \(t<0\). We also note that, in the case of NMDA-receptor, the positive magnesium ions Mg\({}^{2+}\) block some of the postsynaptic NMDA channels. In this case, fraction of non-blocked NMDA channels is given by a sigmoidal function \(f(v^{(T)})\)[41; 43; 105], \[f(v^{(T)}(t))=\frac{1}{1+0.28\cdot[\text{Mg}^{2+}]\cdot e^{-0.062v^{(T)}(t)}}, \tag{5}\] where \(v^{(T)}\) is the membrane potential of a neuron in the target population \(T\) and \([\text{Mg}^{2+}]\) is the equilibrium concentration of magnesium ions (\([\text{Mg}^{2+}]=1\) mM). Thus, the synaptic current into the \(i\)th neuron in the target \(X\) population becomes \[I^{(X)}_{syn,i}(t)=I^{(X,Y)}_{\text{AMPA},i}(t)+f(v^{(X)}_{i}(t))\cdot I^{(X, Y)}_{\text{NMDA},i}(t)+I^{(X,Z)}_{\text{GABA},i}(t). \tag{6}\] Table 6 shows the synaptic parameters of the synaptic currents from the source population \(S\) to the target population \(T\): maximum synaptic conductance \(\tilde{g}^{(T,S)}_{max,R}\), synaptic decay time \(\tau^{(T,S)}_{R,d}\), synaptic delay time \(\tau^{(T,S)}_{R,l}\), and synaptic reversal potential \(V^{(S)}_{R}\). We also consider the effect of DA modulation on the synaptic currents into D1 SPN, D2 SPN, STN, and GP neurons in Fig. 1[41; 42; 43]. In the case of synaptic currents into the D1 SPNs, DA modulation effect is modelled by upscaling the NMDA receptor-mediated current \(I_{\text{NMDA}}\) with the factor \(\beta^{(\text{D1})}\): \[I_{\text{NMDA}}\gets I_{\text{NMDA}}(1+\beta^{(\text{D1})}\phi_{1}), \tag{7}\] where \(\phi_{1}\) is the DA level for the D1 SPNs. (There is no DA effect on \(I_{\text{AMPA}}\) for the D1 SPNs.) On the other hand, in the case of synaptic currents into the D2 SPNs, DA modulation effect is modelled by downscaling the AMPA receptor-mediated current \(I_{\text{AMPA}}\) with the factor \(\beta^{(\text{D2})}\): \[I_{\text{AMPA}}\gets I_{\text{AMPA}}(1-\beta^{(\text{D2})}\phi_{2}), \tag{8}\] where \(\phi_{2}\) is the DA level for the D2 SPNs. (There is no DA effect on \(I_{\text{NMDA}}\) for the D2 SPNs.) The scaling factors \(\beta^{(\text{D1})}\) and \(\beta^{(\text{D2})}\) are given in Table 7. Also, effects of DA modulation on synaptic currents into STN neurons and GP neurons are well given in Table 7. In these cases, all excitatory and inhibitory synaptic currents, \(I_{\text{AMPA}}\), \(I_{\text{NMDA}}\), and \(I_{\text{GABA}}\), are downscaled with their scaling factors, depending on \(\phi_{2}\). Here, \(\phi_{1}=\phi_{2}=\phi\).
2309.11049
Localize, Retrieve and Fuse: A Generalized Framework for Free-Form Question Answering over Tables
Question answering on tabular data (a.k.a TableQA), which aims at generating answers to questions grounded on a provided table, has gained significant attention recently. Prior work primarily produces concise factual responses through information extraction from individual or limited table cells, lacking the ability to reason across diverse table cells. Yet, the realm of free-form TableQA, which demands intricate strategies for selecting relevant table cells and the sophisticated integration and inference of discrete data fragments, remains mostly unexplored. To this end, this paper proposes a generalized three-stage approach: Table-to- Graph conversion and cell localizing, external knowledge retrieval, and the fusion of table and text (called TAG-QA), to address the challenge of inferring long free-form answers in generative TableQA. In particular, TAG-QA (1) locates relevant table cells using a graph neural network to gather intersecting cells between relevant rows and columns, (2) leverages external knowledge from Wikipedia, and (3) generates answers by integrating both tabular data and natural linguistic information. Experiments showcase the superior capabilities of TAG-QA in generating sentences that are both faithful and coherent, particularly when compared to several state-of-the-art baselines. Notably, TAG-QA surpasses the robust pipeline-based baseline TAPAS by 17% and 14% in terms of BLEU-4 and PARENT F-score, respectively. Furthermore, TAG-QA outperforms the end-to-end model T5 by 16% and 12% on BLEU-4 and PARENT F-score, respectively.
Wenting Zhao, Ye Liu, Yao Wan, Yibo Wang, Zhongfen Deng, Philip S. Yu
2023-09-20T03:52:34Z
http://arxiv.org/abs/2309.11049v2
# Localize, Retrieve and Fuse: A Generalized Framework for Free-Form Question Answering over Tables ###### Abstract Question answering on tabular data (_a.k.a_ TableQA), which aims at generating answers to questions grounded on a provided table, has gained significant attention recently. Prior work primarily produces concise factual responses through information extraction from individual or limited table cells, lacking the ability to reason across diverse table cells. Yet, the realm of free-form TableQA, which demands intricate strategies for selecting relevant table cells and the sophisticated integration and inference of discrete data fragments, remains mostly unexplored. To this end, this paper proposes a generalized three-stage approach: _Table-to-Graph conversion and cell localizing, external knowledge retrieval, and the fusion of table and text (called_ TAG-QA), to address the challenge of inferring long free-form answers in generative TableQA. In particular, TAG-QA (1) locates relevant table cells using a graph neural network to gather intersecting cells between relevant rows and columns, (2) leverages external knowledge from Wikipedia, and (3) generates answers by integrating both tabular data and natural linguistic information. Experiments showcase the superior capabilities of TAG-QA in generating sentences that are both faithful and coherent, particularly when compared to several state-of-the-art baselines. Notably, TAG-QA surpasses the robust pipeline-based baseline TAPAS by 17% and 14% in terms of BLEU-4 and PARENT F-score, respectively. Furthermore, TAG-QA outperforms the end-to-end model T5 by 16% and 12% on BLEU-4 and PARENT F-score, respectively.1 Footnote 1: Source code will be released at [https://github.com/wentinghome/TAGQA](https://github.com/wentinghome/TAGQA). ## 1 Introduction Question answering is to generate precise answers by interacting efficiently with unstructured, structured, or heterogeneous contexts, such as paragraphs, knowledge bases, tables, images, and various combinations thereof Burke et al. (1997); Yao and Van Durme (2014); Talmor et al. (2021); Hao et al. (2017). Among these, question answering on tabular data (TableQA) is a challenging task that requires the understanding of table semantics, as well as the ability to reason and infer over relevant table cells Herzig et al. (2021); Chen et al. (2020, 2021). For the task of TableQA, from our investigation, most current studies are focusing on the factoid TableQA, in which the answer is in a few words or a phrase copied directly from relevant table cells. In particular, current works on factoid TableQA are mainly categorized into two groups: (1) pipeline-based methods consisting of two stages, i.e., cell retrieval and answer reader Zhu et al. (2021); Chen et al. (2020); and (2) end-to-end neural networks such as a paradigm of sequence-to-sequence model that takes the context of question answering (e.g., question and table cells) as input to generate natural-language answers Li et al. (2021); Pan et al. (2022); Herzig et al. (2021); Pan et al. (2021); Chen (2023). Despite much progress made on factoid TableQA, a contradiction between the factoid TableQA and TableQA exists in real scenarios. In factoid TableQA, the answers are always in a short Figure 1: A motivating example to show the insights of our proposed approach when comparing with several state-of-the-art methods. form with a few words directly copied from the relevant table cells. However, in real-world scenarios, the answers are expected to be long and informative sentences in a free form, motivating us to target the free-form TableQA in this paper. It is challenging to generate coherent and faithful free-form answers over tables. (1) _The well-preserved spatial structure of tables is critical for retrieving relevant table cells to the question._ Different from factoid TableQA, free-form TableQA with sophisticated question shares less semantic similarities to the table content, while depending more on the spatial structure of tables to infer multiple related cells such that the related cells may be located in a relatively connected area, e.g., from either a few selected rows or columns. (2) _The selected table cells, containing the key point, are insufficient for composing the entire coherent sentences_. To generate fluent natural-language sentences as answers, external information such as the relevant background knowledge about the question is necessary. (3) _It is expected to aggregate and reason from the question, retrieved table cells, and external knowledge to compose a reasonable answer._ Given the heterogeneous information, a practical model should be capable of aggregating the information efficiently and generating a coherent and fluent free-form answer. Figure 1 provides a motivating example to illustrate the insights of this paper. Given a table describing "_the 1983 Manx Grand Prix Newcomers Junior Race Results_" and a question "_Who won in the first three places of The Newcomers Manx Grand Prix race?_", the goal is to select relevant cells first and then generate a natural sentence as an answer. From this table, we can observe that the state-of-the-art model TAPAS and MATE only select the "_rider_" while missing the "_rank_" column, providing low cell selection coverage. For the overall generation quality, we can observe that both the end-to-end T5 (Raffel et al., 2020) and the pipeline-based TAPAS (Herzig et al., 2020) and MATE (Eisenschlos et al., 2021) are missing key information from the table by merely mentioning part of the three riders. In addition, the TAPAS introduces a hallucinated rider named "_Spaniard_". These observations motivate us to design a model that can select the relevant cells more accurately and generate faithful answers grounded on the table given a question. Based on the aforementioned insights, this paper designs a three-stage pipeline framework to tackle the problem of free-form TableQA. Even though the end-to-end TableQA models with high accuracy are prevalently ascribed to the suppression of error accumulated from one-stage training, the long table distracts the model from focusing on relevant table cells, resulting in irrelevant answers. On the other hand, the cell selection module provides a controllable and explainable perspective by extracting a small number of table cells as anchors for the model to generate answers. For the content selection stage, inspired by the recent success of graph models, we convert the table to a graph by Figure 2: An overview of TAG-QA. The input to TAG-QA is a combination of one table and question, while the output is an answer. The top box shows the content selection process which first converts the table to a graph and selects relevant nodes using GNN. The middle box shows the process of using the spare retrieval technique to retrieve relevant text as complementary information. The rightmost blue box is to integrate the selected cells and retrieved texts to generate the final answer. designing the node linking and applying a Graph Neural Network (GNN) to aggregate node information and classify whether the table cell is relevant or not. In addition, to generate informative free-form answers, we employ a spare retrieval technique to explore extra knowledge from Wikipedia. Consequently, both the extra knowledge and relevant cells are taken into account to calibrate the pre-trained language model bias. Lastly, we adopt a fusion layer in the decoder to generate the final answer. To summarize, the primary contributions of this paper are three-fold. (1) To the best of our knowledge, we are the first to convert a semi-structured table into a graph, and then design a graph neural network to retrieve relevant table cells. (2) External knowledge is leveraged to fill in the gap between the selected table cell and the long informative answer by providing background information. (3) Comprehensive experiments on a public dataset named FeTaQA (Nan et al., 2022) are performed to verify the effectiveness of TaG-QA. Experimental results show that TaG-QA outperforms the strong baseline TAPAS by \(17\%\) and \(14\%\), and outperforms the end-to-end T5 model by \(16\%\) and \(12\%\), in terms of BLEU-4 and PARENT F-score, respectively. ## 2 TaG-QA Approach In this section, we first formulate the problem of TableQA, and introduce the details of our proposed approach TaG-QA. ### Problem Formulation A free-form question-answering task is formulated as generating an answer \(a\) to a question \(q\) based on a semi-structured table \(T\) including table cell content and table meta information such as column, and row header. Different from the factoid table question answering task with a short answer, the free-form QA aims at generating informative and long answers. ### Overview Figure 2 illustrates the overall architecture of our proposed TaG-QA, which is composed of three stages, i.e., relevant table cell localization, relevant external knowledge retrieval, and table-text fusion. _(1) Relevant table cell localization._ We first propose a table-to-graph converter to transform a table into a graph which can preserve the table's spatial information. We think that the graph-based table representation can better assist in selecting relevant table cells. _(2) External knowledge retrieval._ We adopt the sparse retrieval technique to collect external information which can be complementary information for the final answer generation. _(3) Table-text fusion._ We employ the fusion-in-decoder model by taking both the selected table cells and the external sources into account to generate the answer. The above three steps enable our model to generate a faithful free-form answer for a question grounded on the table. ### Relevant Table Cell Localization The initial phase of TaG-QA involves table content selection, a pivotal step that serves as the foundation for subsequent stages. Notably, this stage is of utmost importance as it supplies essential input to the subsequent processes. FeTaQA presents a formidable challenge as a dataset, with a Median/Avg percentage of relevant table cells at \(10.7\%/16.2\%\). In order to enhance the precision of the content selection stage, we design a table-to-graph converter to preserve the inherent spatial structure of the tables. We employ GNN to effectively aggregate information at the cell level and subsequently perform a classification task on the table cells. Figure 3 shows an example of transforming a table into a graph. For the \(i\)-th row, we add an empty row header as \(rhi\) which reflects the entire row information. All the table cells from the same row are fully connected, and all the table cells from the same column are also fully connected. Besides, we design two types of relations for the table graph, i.e., "_of the same row_" and "_of the same column_" relations. In particular, "_of the same row_" relation captures the entity information, while "_of the same column_" relation reveals the connection of the same attribute. In addition, to incorporate the question node into the graph, we create a question node and assign a linking edge between the question and each table cell with the relation "_question to cell_". Taq-qa Content SelectionInspired by QA-GNN Yasunaga et al. (2021), we propose a content selection module (Taq-cs) that retrieves relevant table cells from the table-based graph. Taq-cs takes the converted table graph from Sec. 2.3 as input, and outputs the question-related table cells. Taq-cs reasons over the table cell level, and each graph node represents a table cell. To fully explore the table semantic and the spatial information, Taq-cs acquires the initial graph node embedding through a pre-trained LM e.g., BERT. Besides, the pre-trained LM and GNN are jointly trained to predict the selected cells. GNN ArchitectureWe use Graph Attention Network (GAT) Velickovic et al. (2017) which leverages masked self-attention layers and employs iterative message passing among neighbors is applied to predict the selected graph node. GAT follows Eq. 1 to update the \(i\)-th node feature \(h_{i}^{l}\in\mathbb{R}^{D}\) at layer \(l\) through gathering the weighted attention among its neighbors \(\mathcal{N}_{i}\). \[h_{i}^{l}=f_{g}\left(\sum_{sc\mathcal{N}_{t}\cup\{t\}}\alpha_{st}m_{st} \right)+h_{t}^{l-1} \tag{1}\] where \(\alpha_{st}\) and \(m_{st}\in\mathbb{R}^{N}\) are the self-attention weight and the message passed from source node \(s\) to target node \(t\) respectively, and \(f_{g}\) is a 2-layer Multi-Layer Perceptron (MLP) with batch normalization. The message \(m_{st}\in\mathbb{R}^{N}\) from node \(v_{s}\) to \(v_{t}\) is computed using Eq. 2. \[m_{st}=f_{m}(h_{s}^{l-1},u_{s},r_{st}) \tag{2}\] where \(u_{s}\in\mathbb{R}^{T/2}\) is the source node \(s\) feature linearly transformed from the one hot vector node type \(u_{t}\). \(r_{st}\in\mathbb{R}^{T}\) is the relation feature from source node \(s\) to target node \(t\) computed through a 2-layer MLP by taking relation type, source, and target node type into account. \(f_{m}\) is a linear transformation. The self-attention coefficient \(\alpha_{st}\) is updated in Eq. 3. Query and key vectors are linearly transformed by \(g_{q}\) and \(g_{k}\), as node, edge feature, and the previous layer hidden state provided. \[\alpha_{st}=\frac{exp(\gamma_{st})}{\sum_{t^{\prime}\epsilon N_{s}\cup\{s\}} exp(\gamma_{st^{\prime}}))},\gamma_{st}=\frac{Q_{s}^{T}K_{t}}{\sqrt{N}} \tag{3}\] \[Q_{s}=g_{q}(h_{s}^{l-1},u_{s},r_{st}) \tag{4}\] \[K_{t}=g_{k}(h_{s}^{l-1},u_{t},r_{st}) \tag{5}\] GNN Training and InferenceGiven a question \(q\) and a table \(T\), Taq-cs reasons over a graph containing both the table cell nodes and the question node by making predictions on the row and column level. We observe that relevant table cells tend to show up in a relatively connected area, thus we make predictions over row and column headers and choose the intersection area. Compared to predicting over the cell level which results in low recall, our method gains a higher chance to capture relevant table cells. For the training stage, Taq-cs maximizes the cross entropy to predict the row and column for relevant cells. ### External Knowledge Retrieval Taq-qa is the first attempt to leverage the external knowledge to address the table-based free-form QA task. Taq-qa adopts an effective and simple Spare Retrieval based on the TF/IDF approach to select a potentially relevant context from Wikipedia. Sparse RetrievalFor Taq-qa, the external knowledge is served as a complimentary background context for the next table and text fusion stage. We choose the spare retrieval method using BM25 Robertson and Zaragoza (2009) as a ranking function to retrieve the most relevant text as supplementary information. Given a query \(q\) with \(m\) keywords \(k_{1},k_{2},\ldots,k_{m}\), the BM25 ranking score \(p_{i}\) for document \(d_{i}\) is calculated by Eq. 6, \[p_{i}=\sum_{j=1}^{m}\frac{idf(q_{j})\times tf(q_{j},d_{i})\times(\alpha+1)}{tf (k_{j},d_{i})+\alpha(1-\beta+\beta\frac{|d_{i}|}{L_{D}})} \tag{6}\] where \(idf\) is the Inverse Document Frequency (IDF), \(tf(k_{j},d_{i})\) is the term frequency of the keyword \(k_{j}\) in document \(d_{i}\), and \(L_{D}\) is the average document length. ### Table-Text Fusion After obtaining the predicted highlighted table cells from the table as well as the support context from Wikipedia, TaG-QA aggregates and combines the two information sources through a sequence-to-sequence model Fusion-in-Decoder (FiD) [11]. FiD appends the question to each information source, encoding each component independently. It subsequently merges all source features and transmits them to the decoder. Fusion in DecoderFusion-in-Decoder based on T5 [13] architecture takes question, support context, and the retrieved semi-structured table cells as input. We flatten the highlighted cells as a natural sentence to fit with its pre-trained LM architecture. For the table example shown in Figure 1, the ground-truth selected cells from the first two columns "_Rank_" and "_Rider_" can be linearized as "_Rank is 1 [SEP] Rider is Northern Ireland Robert D [SEP] Rank is 2 [SEP] Rider is Scotland Steve Hislop [SEP] Rank is 3 [SEP] Rider is Wales Ian Louog._", where [SEP] is a special token to indicate the end of table slot value. ## 3 Experiments and Analysis In this section, we explore the following experimental questions: (1) Does proposed TaG-QA generate a more coherent and faithful answer compared with the baseline? (2) Is table cell selection, knowledge retrieval, and fusion necessary for the free-form TableQA? (3) Is it promising to keep enhancing the three modules of TaG-QA? ### Dataset This paper focuses on tackling the challenge of generating long free-form answers, rather than the short factoid responses. Consequently, we have opted for the utilization of the state-of-the-art dataset, _FetaQA_[22], as our testbed. The training dataset comprises 7,327 instances, while the development and test sets encompass 1,002 and 2,004 examples, respectively. ### Implementation Details TaG-CS)TaG-CS applies BERT checkpoint "bert-based-uncased" to learn the table cell representation. For the BERT model, we set the learning rate to \(1e\)-\(6\) and impose a maximum token length of 35 for each cell. Subsequently, the acquired table cell-level embeddings serve as input node features for our GNN. Within the TaG-CS framework, our GNN module comprises 3 layers, each with node features of 200 dimensions. Additionally, we apply a dropout rate of 0.2 to each layer for regularization. We train our model on the FeTaQA dataset, configuring it to run for a maximum of 50 epochs. We employ the RAdam optimizer [10] with a weight decay of 0.01, utilizing a powerful 24G memory Titan-RTX GPU. To optimize GPU memory usage, we set the maximum number of table cells as 200 and set the batch size as 1. The selection of the best checkpoint is based on the performance of the model on the development set, which is then used for decoding the test set. Additionally, to enhance efficiency, TaG-CS is employed to select intersection cells from the top 3 rows and 3 columns as the relevant cells, drawing upon our accumulated experience in this context. _Sparse Retrieval_) Our implementation relies on the PyTorch-based toolkit Pyserini, designed for reproducible information retrieval research using both sparse and dense representations. We utilize the question as the query to retrieve pertinent contextual information from Wikipedia, selecting the first sentence from the top results. We specifically employ the Lucene Indexes, denoted as "enwiki-paragraphs"2. Footnote 2: [https://github.com/castorini/pyserini](https://github.com/castorini/pyserini) _FiD_) In the context of FiD, TaG-QA employs the Adam optimizer with a learning rate of \(1e\)-\(5\). We select the best checkpoint for inference purposes. In the inference phase, we utilize beam search with a beam size of 3 and apply a length penalty of 1 when generating answers. ### Baselines To validate the effectiveness of TaG-QA, we choose two different types of methods as baselines, including end-to-end and pipeline-based models. \begin{table} \begin{tabular}{l c c} \hline \hline & **Precision** & **Recall** & **F-1** \\ \hline TAPAS [13] & **65.31** & 24.20 & 35.32 \\ MATE [14] & 56.93 & 22.21 & 31.95 \\ TAG-QA (Ours) & 47.60 & **43.06** & **45.22** \\ \hline \hline \end{tabular} \end{table} Table 1: Content selection results on FeTaQA dataset. Firstly, we compare TAG-QA with strong state-of-the-art end-to-end pre-trained generative LMs. UniLM Dong et al. (2019), BART Lewis et al. (2020), and T5 Radford et al. (2019). For the input format to the end-to-end model, we flatten the table by concatenating special token [SEP] in between different table cells, and concatenate with the question as a natural sentence, e.g. "_question [SEP] flattened table_". Furthermore, we compare the performance of our proposed model with pipeline-based methods which include two stages: content selection and answer generation. Content selection makes predictions of relevant cells. We choose two table-based pre-training models: TAPAS Herzig et al. (2020) and MATE Eisenschlos et al. (2021). Moreover, T5 is chosen as the baseline model's answer generation backbone due to the integration capacity for the table cell and retrieved knowledge. ### Automatic Evaluation Metrics We use various automatic metrics to evaluate the model performance. Due to the pipeline style of TAG-QA, we report two sets of metrics for content selection and answer generation stages respectively. Firstly, to evaluate the retrieval competency of the table semantic parser, we report Precision, Recall, and F1 scores. Besides, to evaluate the answer generation quality, we choose several automatic evaluation metrics, i.e., BLEU-4 Papineni et al. (2002), ROUGE-L Lin (2004) and METEOR Banerjee and Lavie (2005), to evaluate the n-gram match between the generated sentence and the reference answer. Considering the limitation that those metric fails to reflect the faithfulness answer to the fact from the table, we report PARENT Dhingra et al. (2019) and PARENT-T Wang et al. (2020) score. PARENT score takes the answer matching with both the reference answer and the table information into account, while PARENT-T focuses on the overlap between the generated answer with the corresponding table. \begin{table} \begin{tabular}{l c c} \hline \hline Method & Overall \\ \hline Reference & 4.94 \\ \hline UniLM [end-to-end] Dong et al. (2019) & 3.88 \\ BART [end-to-end] Lewis et al. (2020) & 3.67 \\ T5 [end-to-end] Raffel et al. (2020) & 3.81 \\ \hline Tapas [pipeline] Herzig et al. (2020) & 3.38 \\ MATE [pipeline] Eisenschlos et al. (2021) & 3.30 \\ TAG-QA [pipeline] & **3.93** \\ \hline \hline \end{tabular} \end{table} Table 3: Results of human evaluation for reference, end-to-end model and pipeline methods. TAG-QA outperforms the pipeline models by a large margin, and achieves performance on par with the strong end-to-end baseline model T5. \begin{table} \begin{tabular}{l l c c c c c} \hline \hline & **BLEU-4-0** & **METEOR** & **ROUGE-L** & **PARENT (P/R/F)** & **PARENT-T (P/R/F)** \\ \hline \multirow{4}{*}{UniLM} & Q-fullTab & 17.57 & 28.30 & 39.46 & 38.21/24.18/25.56 & 26.48/53.99/33.70 \\ & Q-Retrieve & 18.46 & 27.21 & 39.36 & 34.12/23.42/23.76 & 20.37/43.41/25.69 \\ & Q-Retrieve-fullTab & 18.89 & 26.86 & 38.86 & 35.29/23.17/23.72 & 22.07/44.83/27.44 \\ \hline \multirow{4}{*}{BART} & Q-fullTab & 7.62 & 25.70 & 25.76 & 39.64/19.68/22.62 & 25.77/39.53/28.78 \\ & Q-Retrieve & 12.20 & 25.15 & 28.27 & 35.55/20.67/22.37 & 18.13/31.07/20.94 \\ & Q-Retrieve-fullTab & 11.97 & 26.41 & 28.24 & 38.45/22.12/23.96 & 20.57/34.63/23.46 \\ \hline \multirow{4}{*}{T5} & **Q-fullTab*** & 15.66 & 21.80 & 35.48 & 38.88/14.83/18.01 & 25.11/33.62/26.17 \\ & Q-Retrieve & 25.17 & 24.87 & 39.89 & 33.54/20.32/21.68 & 17.35/31.21/20.13 \\ & Q-Retrieve-fullTab & 27.60 & 26.71 & 42.38 & 38.49/23.22/25.06 & 20.98/5.79/24.02 \\ \hline \multirow{2}{*}{**Oracle-T5**} & Q-OracleCell & 21.77 & 28.35 & 42.54 & 35.73/26.39/0.61 & 38.21/54.22/21.49 \\ & Q-Retrieve & 25.17 & 24.87 & 39.89 & 33.54/20.32/21.68 & 17.35/31.21/20.13 \\ & Q-Retrieve-OracleCell & 31.00 & 30.35 & 46.72 & 46.3/28.44/30.93 & 27.07/44.32/30.71 \\ \hline \multirow{4}{*}{TAPAS-T5} & **Pipeline** & & & & & & \\ \cline{2-2} & Q-PredCell & 14.50 & 21.18 & 35.51 & 39.14/12.34/15.67 & 25.19/29.47/24.38 \\ \cline{1-1} & Q-Retrieve-predCell & 26.81 & 26.92 & 42.59 & 39.23/21.96/24.15 & 21.43/34.54/23.61 \\ \hline \multirow{2}{*}{MATE-T5} & Q-predCell & 14.28 & 21.01 & 35.36 & 39.07/12.2/15.53 & 24.83/29.56/24.25 \\ \cline{1-1} & Q-Retrieve-predCell & 26.85 & 26.96 & 42.60 & 39.05/21.89/23.99 & 21.13/46.62/23.57 \\ \hline \multirow{2}{*}{TAGQA-T5} & Q-predCell & 17.08 & 23.22 & 38.38 & 41.84/16.63/20.1 & 27.11/37.03/28.45 \\ \cline{1-1} & Q-Retrieve-predCell & 28.01(\{\}\) & 27.91(\(\uparrow\) 1.20) & 44.16(\(\uparrow\) 1.78) & 41.35/23.87/26.2(\(\uparrow\) 1.14) & 22.89/37.25/25.64(\(\uparrow\) 1.64) \\ \hline \multirow{2}{*}{TAGQA-Field} & **Q-Retrieve-predCell*** & **31.84**(\(\uparrow\) 16.18) & **30.16**(\(\uparrow\) 8.36) & **49.39**(\(\uparrow\) 13.91) & **47.56**/**26.20**/**29.59**(\(\uparrow\) 11.58) & **25.44**/**39.11**/**28.26**(\(\uparrow\) 2.09) \\ \hline \hline \end{tabular} \end{table} Table 2: Results on FeTaQA dataset. “P/R/F” denotes the precision/recall/F score. We report end-to-end model UniLM, BART and T5, and the pipeline results. The results of various table cell selection strategies TAPAS, MATE and our proposed TAG with T5 as backbone generation model are noted as TAPAS-T5, MATE-T5 and TagQA-T5. To validate the effectiveness of proposed framework components, we test different combinations of source information to models where “Q” is question, “Retrieve” is the retrieved external knowledge, “fullTab” is full table, and “predCell” refers to the selected table cell. And the last row TAGQA-FiD is the proposed method. ### Results We first evaluate the TAG-CS content selection stage table semantic parsing results, as shown in Table 1. For the F-1 score, TAG-QA outperforms the strong baseline model TAPAS and MATE by 9.9% and 13.27%. For recall, TAG-QA achieves the best result, demonstrating that TAG-QA retrieves more relevant table cells. For precision, the baseline model outperforms TAG-QA by retrieving fewer cells which includes more relevant cells. However, the low precision and high recall are a trade-off since the relevant cells make a stronger impact on the overall answer generation quality. Thus, we can tolerate a small amount of irrelevant cells and keep the correct cells as many as possible. In addition, Table 2 shows the measurements of generated answer quality using TAG-QA compared to previous both end-to-end and pipeline-based state-of-the-art models. From overlapping-based metrics BLEU-4, METEOR, and ROUGE-L, TAG-QA outperforms all the end-to-end and pipeline-based models. Specifically, TAG-QA gains 14.27%/1.86%/9.93% more than the best end-to-end model UniLM in "Q-fullTab" while gains 14.76%/8.98%/13.88% in "Q-predCell" setting, more than the best pipeline-based model TAPAS. For faithfulness metric PARENT and PARENT-T, TAG-QA provides the best performance among the pipeline models by outperforming TAPAS on the "Q-predCell" setting by 13.92% and 3.88% on PARENT and PARENT-T. Compared with end-to-end models, TAG-QA gives the best PARENT score while UniLM shows the best result regarding PARENT-T. It's explainable because TAG-QA incorporates information outside of the table to generate answers, achieving a trade-off between being grounded on the table and synthesizing informative answers. Furthermore, to answer Question 2 "_Are three stages of the framework necessary to generate high-quality answer?_", we conduct an experiment in Table 2 by comparing the T5 model "Q-fullTab" with pipeline methods backend by T5 using "Q-predCell". The result shows proposed TAG for content selection TAGQA-T5 selecting 7% of table cell outperforms T5 with fullTab. This indicates the table cell selection is necessary since relevant cells provide an anchor to generate high answer generation. Moreover, to investigate the effect of retrieval knowledge, we show results in Table 2 by concatenating "Retrieval" to the input. The retrieval knowledge enhances model performance by providing background knowledge. The proposed model TAGQA-T5 provides the best result by integrating retrieval and informative selected cells. Lastly, our fusion module further enhanced the overall performance by aggregating tables and text efficiently. Last but not least, to answer the question "_Is there space to further enhance performance using this framework?_", we conduct an oracle experiment shown in "Oracle-T5". With the simple Retrieval technique, T5 backend generation, and oracle table cell, the BLEU-4 result is 31%, and PARENT, PARENT-T are over 30%. If a better retrieval and fusion model is used, the model performance can be further boosted. ### Analysis To further evaluate the quality of generated answer by various state-of-the-art models when compared to the ground-truth answer, we perform an additional human evaluation. Besides, we conduct an ablation study for TAG-QA to validate the three building blocks: jointly training of LM and GNN for TAG-CS, external context retrieved from Wikipedia, and FiD model. Furthermore, a case study is presented which shows different answer qualities produced by various models. Human EvaluationFollowing [14], we recruit three human annotators who pass the College English Test (CET-6)3 to judge the quality of the generated sentence. We randomly draw 100 samples from test examples in FeTaQA dataset and collect answers from TaG-QA and baseline models. Then, we present the generated answers to three human annotators without revealing the name of the model, thus reducing human variance. Footnote 3: A national English as a foreign language test in China. We provide instructions for human raters to evaluate the sentence quality from four aspects: faithfulness, fluency, correctness, and adequateness. For each aspect, an annotator is supposed to assign a score ranging from 1 (worst) to 5 (best) based on the answer quality. The "overall" column refers to \begin{table} \begin{tabular}{l|c c|c c} \hline \hline **Model** & **BLEU** & **METEOR** & **PARENT** & **PARENT-T** \\ \hline TAG-QA & **31.84** & **30.16** & **29.59** & **28.26** \\ \hline TAG-QA w/o JT & 31.35 & 29.65 & 28.93 & 27.48 \\ TAG-QA w/o SR & 18.93 & 24.95 & 21.57 & 27.95 \\ TAG-QA w/o FD & 21.51 & 24.03 & 22.46 & 25.40 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study of the proposed model. We examine the ablated mode by removing the Joint Training (JT) of TAG-CS, Sparse Retrieval (SR), and FiD. the average ranking of the model. First, for fluency, the annotator checks if an answer is natural and grammatical. Second, for correctness, we compare the answer with the ground truth by checking if the predicted answer contains the correct information. Third, adequacy reflects if an answer contains all the aspects that are asked. Finally, faithfulness evaluates faithfulness if an answer is faithful and grounded to the contents of the highlighted table region such that it covers all the relevant information from the table while not including other key information outside of the table. From Table 3, we can see TAG-QA ranked the top among all models. Ablation StudyTo figure out which building blocks are driving the improvements, we examine different ablated models to understand each component of TAG-QA, including joint training of BERT and GNN from TAG-CS, sparse retrieval, and FiD. Table 4 presents the ablation results under different evaluation metrics. We can see that the model performance drops when any component is removed. Especially, ablating the sparse retrieval module results in the most drop in BLEU-4 and PARENT scores, while removing FiD causes the most significant drop in PARENT-T. Case StudyTo inspect the effect of TAG-QA directly, we present a case study in Figure 4, where a sampled table, question, ground-truth relevant table cells (highlighted in blue), the predicted answers of models, as well as the reference are provided. First, we find that the end-to-end model generally contains more information than pipeline models due to the more abundant table information while they suffer from hallucination. For example, T5 and BART identify the ranking position of "_Leandro de Oliveira_" as "_17th_" while it should be "_73rd_" from the table. Second, for pipeline models, they tend to generate irrelevant information e.g. MATE mentions the duration and points instead of answering the ranking position and the event. Third, both the end-to-end and pipeline models (TAPAS) fail to cover all the relevant information from the table, e.g. UniLM did not capture the event 12km, and TAPAS fails to mention the position 73rd. By contrast, TAG-QA provides the highest table coverage while keeping the fluency of sentences. ## 4 Related Work In this section, we review the related work to ours from the perspectives of TableQA, GNN for natural language processing, and knowledge-grounded text generation. TableQAFeTaQA is the first TableQA dataset that addresses the significance of free-form answer generation, while most current research work including WikiTableQuestions Pasupat and Liang (2015), Spider Yu et al. (2018), HybridQA Chen et al. (2020), OTT-QA Chen et al. (2020), and TAT-QA Zhu et al. (2021) focuses on the short factoid answer generation. The early solution Zhong et al. (2017); Liang et al. (2017) of addressing the TableQA is to parse the natural question into a machine-executable meaning representations that can be used to query the table. To reduce the labor-intensive logical annotation, a semantic parser trained over weak supervision from denotations has been drawing attention. Plenty of Transformer-based table pre-traininig models demonstrate decent TableQA performance, e.g., TaPas Herzig et al. (2020), MATE Eisenschlos et al. (2021), TaBERT Yin et al. (2020), StruG Deng et al. (2021), GraPPa Yu et al. (2021), and TaPEx Liu et al. (2022). In addition, rather than explore table structure, RCI Glass et al. (2021) assumes the row and column are independent, and predicts the probability of containing the answer to a question in each row and column of a table individually. GNN for Natural Language ProcessingApart from the extensively renowned causal language Figure 4: A case study from FeTaQA for qualitative analysis. The highlighted cells are the ground-truth relevant table cells. “_RB_” refers to “_Representing Brazil_”. Hallucinated content from the predicted answer is marked in red and the correct content in blue. models that have showcased impressive results in various task [22, 23, 24, 25], a rich variety of language processing tasks gain improvements from exploiting the power of GNN [10]. Tasks such as semantic parsing [3], text classification [12], text generation [20], question answering [23, 22] can be expressed with a graph structure and handled with graph-based methods. In addition, researchers apply GNN to model the text generation from structured data tasks e.g. graph-to-sequence [19], and AMR-to-text [15]. Knowledge-Grounded Text GenerationEncoder-decoder-based models have been proposed to tackle the generation task by mapping the input to the output sequence. However, the input text is insufficient to provide knowledge to generate decent output due to the lack of commonsense, factual events, and semantic information. Knowledge-grounded text generation incorporating external knowledge such as linguistic features [12], knowledge graph [12, 13], knowledge base [14, 15, 16], and textual knowledge [12, 13] help to generate a more logical and informative answer. ## 5 Conclusion This paper presents a generalized pipeline-based framework TaG-QA for free-form long answer generation for TableQA. The core idea of TaG-QA is to divide the answer generation process into three stages: (1) transform the table into a graph and jointly reason over the question-table graph to select relevant cells; (2) retrieve contextual knowledge from Wikipedia using sparse retrieval, and (3) integrate the selected cells with the content knowledge to predict the final answer. Extensive experiments on a public dataset FeTaQA are conducted to verify the generated answer quality from both the fluency and faithfulness aspects. ## Limitations One limitation of TaG-CS, which accepts the entire table as input, arises when dealing with large tables, as training both BERT and the graph model simultaneously becomes challenging due to GPU memory constraints. Consequently, one promising avenue for future research involves the efficient modeling of large tables. Furthermore, it's worth noting that the availability of only one public dataset, FeTaQA, for free-form TableQA, has constrained our validation efforts to this single dataset. However, we are committed to expanding the scope of our research in the future by evaluating the performance of our pipeline model, TaG-QA, across multiple free-form TableQA datasets.
2309.09606
Managing rogue quantum amplitudes: a control perspective in quantum walks
We investigate the emergence of rogue quantum amplitudes in discrete-time quantum walks (DTQWs) influenced by phase disorder. Our study reveals the statistics of occupation probability amplitudes in space and time, uncovering optimal disorder regimes that favor rogue wave events. Through numerical simulations, we demonstrate that the probability of rogue waves increases with quantum coins close to the Pauli-Z choice, regardless the disorder degree. Conversely, for coins near Pauli-X rogue events are scarce, except under weak disorder. A monotonic threshold is observed between rare- and high-probability rogue wave regimes, depending on the quantum coin. We provide a comprehensive analysis of the coin-disorder interplay to rogue wave events. Our findings shed light on the possible control of extreme quantum amplitudes through quantum coins in disordered DTQWs.
A. R. C. Buarque, E. P. Raposo
2023-09-18T09:24:01Z
http://arxiv.org/abs/2309.09606v1
# Managing rogue quantum amplitudes: a control perspective in quantum walks ###### Abstract We investigate the emergence of rogue quantum amplitudes in discrete-time quantum walks (DTQWs) influenced by phase disorder. Our study reveals the statistics of occupation probability amplitudes in space and time, uncovering optimal disorder regimes that favor rogue wave events. Through numerical simulations, we demonstrate that the probability of rogue waves increases with quantum coins close to the Pauli-Z choice, regardless the disorder degree. Conversely, for coins near Pauli-X rogue events are scarce, except under weak disorder. A monotonic threshold is observed between rare- and high-probability rogue wave regimes, depending on the quantum coin. We provide a comprehensive analysis of the coin-disorder interplay to rogue wave events. Our findings shed light on the possible control of extreme quantum amplitudes through quantum coins in disordered DTQWs. ## I Introduction Understanding the emergence of unlikely extreme events in nature has long been a topic of great interest [1; 2; 3], with applications in fields as diverse as epidemics [4], photonics [5], and neurobiology [6], to name a few. One of such phenomena is known as rogue waves (RWs) [7; 8]. In the maritime community, RWs have been described as waves with amplitudes far exceeding what is expected for the prevailing sea state. In this context, RWs represent unpredictable oceanic waves with large amplitudes that seemingly materialize out of nowhere and vanish without a trace. The intriguing and often unpredictable occurrence of RWs has captivated the attention of researchers across various scientific domains, from oceanography [8] to optics [9] and many others (for comprehensive reviews, see, e.g, Refs. [10; 11]). In particular, the connection between the oceanic RW phenomenon and light propagation within optical fibers has gained prominence, specially in the framework of the nonlinear Schrodinger equation [9]. This link has spurred a surge of interest in wave phenomena exhibiting long-tailed statistical distributions, whose associated outlier events greatly surpass the predictions from Gaussian statistics. RWs have also been extensively explored in diverse other contexts, including linear and nonlinear optics [12; 13], plasmas [14], Bose-Einstein condensates [15], and even finance [16]. From the perspective of nonlinear dynamics related to the emergence of RWs, factors contributing to their occurrence include delayed feedback systems [17; 18], chaotic dynamics in low-dimensional systems [19], soliton collisions [20], spacetime chaos [21], vortex turbulence [22], and integrable turbulence [23]. One of the central issues in the study of RWs lies in the understanding of how these events arise, an objective closely related to the will of predicting and controlling extreme events. This challenge stems from the multifaceted processes involved in the RW formation. Extensive debates persist regarding whether RWs originate from linear [24] or nonlinear [25; 26; 27] processes, and the role of disorder in this narrative [28; 29]. In this context, nonlinear phenomena may further amplify the effects of extreme events that naturally arise from purely linear processes, in a way possibly related to modulational instability [11; 30]. RWs have also been explored in the quantum mechanical context. The emergence of RWs in quantum chains has introduced novel aspects to the understanding of a number of quantum dynamical regimes, particularly in disordered media [28; 29]. In a previous work [31], we investigated the role of randomness in the formation of anomalous amplitudes in the quantum wave function of a one-dimensional system described by a tight-binding Hamiltonian with correlated on-site disorder. We found that a specific effective degree of correlation is responsible for inducing the occurrence of much larger extreme amplitude events, particularly when compared to the case of uncorrelated disorder. We remark that similar phenomena involving correlation have been also reported in optical systems [12], with the identification of super RWs. Discrete-time quantum walks (DTQWs) have emerged as a powerful framework for modeling various physical systems and phenomena [32]. DTQWs have also been recently employed to investigate RW events in the form of sudden, highly-localized extreme wave amplitudes. In Ref. [33] the authors introduced the first model utilizing the approach of DTQWs to study RWs. By employing the Hadamard quantum walk with phase disorder, they observed the emergence of RWs in a purely linear system. This work also demonstrated that the competition between mobility and localization properties in an intermediate disordered regime is more conducive to the occurrence of extreme events [33]. However, some fundamental questions arise within the context of DTQWs that remain not addressed so far: what is the influence of applying different quantum coins (apart from the Hadamard one [33]) to the emergence of RWs? Also, can the occurrence of rogue quantum amplitudes be controlled? Previous works have shown how various quantum coins can alter fundamental properties of DTQWs, such as transport features in nonlinear [34] and aperiodic media [35], instability and self-focusing characteristics [36], diffusivity [37] and entanglement[38; 39] properties, to name a few. Thus, investigating the dynamics of the quantum walker under different quantum coins in chains with phase disorder becomes important to understand the emergence of RWs in DTQWs. This work aims at bridging between RWs and DTQWs by examining how the inherent wave-like characteristics of quantum walks can shed light on the emergence and dynamics of RWs. We demonstrate how the application of various quantum coins influences the occurrence of RWs within the context of one-dimensional DTQWs driven by random phase fluctuations. This investigation unveils the characteristic long-tailed statistical behavior of occupation probability, analogous to light intensity observed in optics [40], across the space-time domain. Our findings reveal multiple optimal regimes of disorder that maximize the occurrence of these extreme events. The RW phenomenon in DTQWs emerges due to a subtle balance between mobility and localization, in which the localization length significantly impacts the walk dynamics. We identify a monotonic threshold between quantum walks characterized by rare occurrence of RWs and those with high occurrence probability, displaying a direct dependence on the quantum coin employed in the system. Finally, we comprehensively map the relationship between the quantum coins and degree of disorder through a diagram featuring the regions that maximize the emergence of RW events. This article is organized as follows. In Section II we introduce the model and describe the general formalism. Results and discussions are presented in Section III. Lastly, final remarks and conclusions are left to Section IV. ## II Model and formalism We consider a quantum random walker propagating in a one-dimensional phase-disordered chain of \(N\) sites, with discrete positions indexed by integers \(n\,(=1,2,\ldots,N)\). The quantum walker is defined in a two-level space constituted by the coin space \(\mathcal{H}^{C}=\{[|\uparrow\rangle=(1,0)^{T}]\), \([|\downarrow\rangle=(0,1)^{T}]\}\), in which the superscript denotes the transpose, and the position space \(\mathcal{H}^{\mathcal{P}}=\{[n\rangle\}\). The Hilbert space is the tensor product \(\mathcal{H}=\mathcal{H}^{\mathcal{P}}\otimes\mathcal{H}^{\mathcal{C}}\). The initial state (\(t=0\)) of the quantum walker is a superposition of the coin and position states in the form \[|\Psi(t)\rangle=\sum_{n}[a_{n}(t)|\uparrow\rangle+b_{n}(t)|\downarrow\rangle ]\otimes|n\rangle, \tag{1}\] where \(a_{n}(t)\) and \(b_{n}(t)\) are the probability amplitudes for the up and down coin states at position \(n\), respectively. The normalization condition is given by \(\sum_{n}P_{n}(t)=\sum_{n}[|a_{n}(t)|^{2}+|b_{n}(t)|^{2}]=1\). The system evolution is obtained through \(|\psi(t)\rangle=\hat{U}^{t}|\Psi(0)\rangle\), where the time evolution operator \(\hat{U}=\hat{S}\hat{C}\hat{D}\) depends on both internal and spatial degrees of freedom of the walker and describes the simultaneous action of the quantum coin \(\hat{C}\), conditional displacement \(\hat{S}\), and phase-gain \(\hat{D}\) operators. Indeed, to account for the internal degrees of freedom a unitary operator \(\hat{C}\), known as quantum coin, is applied, which can be expressed as a SU(2) unitary matrix [41; 42], \[\hat{C}(\theta) = \cos\theta|\uparrow\rangle\langle\uparrow|+\sin\theta|\uparrow \rangle\langle\downarrow| \tag{2}\] \[+\sin\theta|\downarrow\rangle\langle\uparrow|-\cos\theta| \downarrow\rangle\langle\downarrow|,\] where the angle \(0\leq\theta\leq\pi/2\) drives the spatial bias of the quantum coin. For example, in the case of a fair coin, which selects both up and down states with equal probability, the choice \(\theta=\pi/4\) is adopted (Hadamard coin). On the other hand, in order to describe the \(N\)-cycle architecture, we add periodic boundary conditions to the conditional displacement operator that moves the walker by one lattice spacing at each unit time, \[\hat{S}=\sum_{n=1}^{N-1} |\uparrow\rangle \langle\uparrow|\otimes|n+1\rangle\langle n|+\sum_{n=2}^{N}| \downarrow\rangle\langle\downarrow|\otimes|n-1\rangle\langle n| \tag{3}\] \[+ |\uparrow\rangle\langle\downarrow|\otimes|1\rangle\langle N|+| \downarrow\rangle\langle\uparrow|\otimes|N\rangle\langle 1|.\] In addition, the phase-gain operator, defined as \[\hat{D}=\sum_{c}\sum_{n}e^{iF(c,n,t)}|c\rangle\langle c|\otimes|n\rangle \langle n|, \tag{4}\] also plays a relevant role, with \(F(c,n,t)\) representing an arbitrary real-valued function and \(c=\{\uparrow,\downarrow\}\). Actually, the versatility for the choice of \(F(c,n,t)\) allows the generation of different dynamic regimes, such as in the investigation of nonlinear and electric field effects in DTQWs [43; 36; 44]. For \(F=0\) and \(\theta=\pi/4\) the system exhibits the standard Hadamard quantum walk behavior, with the walker spreading out ballistically. However, Anderson localization can also emerge by introducing a static random phase modulation characterized by \(F(c,n,t)=F(c,n)=2\pi\nu\), where \(\nu\) is a number randomly distributed in the range \([-W,W]\) and \(W\) represents the width of the disorder. So it becomes important to investigate whether RWs can arise in DTQWs under suitable initial conditions and for proper choices of the noise level embedded in \(F(c,n)\) and disorder strength. ## III Results Our results were obtained by following the time evolution of a qubit with initial wave function evenly distributed across all sites of the chain, \[|\Psi_{0}\rangle=\frac{1}{\sqrt{2N}}\sum_{n=1}^{N}(|\uparrow\rangle+i|\downarrow \rangle)\otimes|n\rangle. \tag{5}\] We note that the choice of a completely delocalized initial state avoids ambiguity between RWs and the Anderson localization phenomenon, which would likely arise if the walker started with a initially localized wave function, so allowing just a few modes to act in the evolution of the wave packet and leading to narrow periodic beats over time. We begin our discussions by examining the time evolution of the probability density \(P_{n}\) of the quantum walker as a function of the position \(n\) on a chain with \(N=100\) sites, over a time period of \(t=100N\). Here we define RWs associated with quantum state configurations exhibiting probability amplitudes at some site \(n\) greater than twice the average probability of one-third of the largest amplitudes [12; 29; 31], i.e., for \(P_{n}\) above the threshold probability amplitude \(P_{\rm th}=2\overline{P}_{1/3}\), represented by the red horizontal line in Fig. 1. To understand the influence of different quantum coins on the emergence of RWs in DTQWs, we present in Fig. 1 snapshots of \(P_{n}\) at times when RWs occur. We consider three representative configurations of quantum coins: \(\theta=\pi/18\) in Figs. 1(a)-(b); \(\theta=\pi/4\) (Hadamard) in Figs. 1(c)-(d); and \(\theta=4\pi/9\) in Figs. 1(e)-(f), under two distinct disorder situations. The right panel in Fig. 1 represents quantum walks with weak disorder for \(W=0.1\), while the left one displays results for the strong disorder regime, \(W=0.5\). We notice that when disorder is weak RWs arise for all configurations of quantum coins. However, for strong disorder RWs are not present for quantum coins close to the \(\theta=\pi/2\) Pauli-X choice, such as \(\theta=4\pi/9\), as indicated in Fig. 1(f). For a statistical viewpoint, we display in Fig. 2 the probability density function (PDF) of values of \(P_{n}\) for the same configurations shown in Fig. 1. In fact, looking into these PDFs is relevant because they may exhibit another important signature of the occurrence of RWs, which is a non-Gaussian L-shape-like statistics [40; 12]. The threshold amplitude value \(P_{\rm th}\) is shown in red vertical line in Fig. 2. We notice that, consistent with the results in Fig. 1, all cases exhibit RW events with \(P_{n}\) surpassing the threshold limit, except for the quantum walk with coin parameter \(\theta=4\pi/9\) near Pauli-X in the strongly disordered regime, which shows a Gaussian-like profile. In Fig. 2(c) the Hadamard quantum walk in the Figure 2: Probability density functions (PDF) of \(P_{n}\) values for the same parameters of Fig. 1. The vertical red line marks the threshold \(P_{th}=2\overline{P}_{1/3}\) for the RW occurrence. More pronounced non-Gaussian PDFs occur close to the \(\theta=\pi/4\) Hadamard choice, while results for \(\theta=4\pi/9\) near the Pauli-X choice display Gaussian-like shape. Figure 1: Snapshots of the probability density \(P_{n}\) of the quantum walker in a chain with \(N=100\) sites after \(t=100N\) time steps, for three representative quantum coins: (a)-(b) \(\theta=\pi/18\), (c)-(d) \(\theta=\pi/4\) (Hadamard), and (e)-(f) \(\theta=4\pi/9\). Two degrees of disorder are considered: weak disorder in the left column (\(W=0.1\)) and strong disorder (\(W=0.5\)) in the right column. The red horizontal line depicts the probability threshold value \(P_{th}=2\overline{P}_{1/3}\) for the occurrence of RW events. weakly disordered regime (\(W=0.1\)) displays the PDF with largest number of RW events among all cases shown, thus suggesting that this regime favors the emergence of RWs. The results shown in Figs. 1 and 2 evidence that the emergence of RWs in DTQWs depends strongly on the interplay between quantum coins and the degree of disorder. Indeed, the amplitude of these rare and unpredictable extreme events varies according to the specific quantum coin applied to the dynamics of the quantum walker. In Fig. 3 we present the maximum probability amplitude \(\overline{P}_{\text{max}}\) averaged over \(10^{4}\) independent walk realizations, in a chain with \(N=100\) sites after \(t=100N\) time steps, for quantum coins in the whole range \(\theta\in[0,\pi/2]\) and considering five disorder strengths in the interval \(W\in[0.1,0.5]\). We identify in Fig. 3 three regions based on the combination of quantum coins and disorder degree, as follows. (I) For coins from the \(\theta=0\) Pauli-Z choice and up to \(\theta\approx 0.3\), the average amplitude of RWs tends to be higher for stronger degrees of disorder (\(W=0.5\) in Fig. 3). In this case the interaction with coins near Pauli-Z amplifies the effect of disorder, leading to more pronounced RW events. (II) In this region, quantum walks with intermediate disorder (\(W=0.2\)) exhibit larger probability amplitudes on average. This suggests that the occurrence of RWs in this regime is more likely for quantum coins away from both Pauli-Z and Pauli-X choices, \(0.3\lesssim\theta\lesssim 0.6\), and for moderate disorder strengths. (III) Finally, in the regime of weakly disordered quantum walks (\(W=0.1\)), the average maximum probability amplitude of RWs remains consistently higher when compared to other disorder degrees. This result indicates that for coins with \(\theta\gtrsim 0.6\), even for relatively low disorder, there is a higher probability of observing RW events. We now turn to the analysis of the relative fraction of Figure 3: Maximum probability amplitudes of RWs for quantum coins in the range \(\theta\in[0,\pi/2]\) and five disorder strengths, \(W=0.1,0.2,0.3,0.4,0.5\), averaged over \(10^{4}\) walk realizations, in a chain with \(N=100\) sites after \(t=100N\) time steps. Three regions are identified. (I) For coins close to \(\theta=0\) Pauli-Z, wavefunction amplitudes are higher for stronger degrees of disorder. (II) For coins with intermediate \(\theta\) above Pauli-Z and below \(\theta=\pi/4\) Hadamard coins, a moderate disorder strength (\(W=0.2\)) yields larger amplitudes. (III) Larger \(\theta\)-values towards the \(\theta=\pi/2\) Pauli-X choice lead weakly disordered systems to consistently exhibit higher RW amplitudes. Figure 4: Relative fraction of RW events as a function of the disorder strength \(W\) for \(N=50,100,200,400,800\) chain sizes, with three representative choices of quantum coins: (a) \(\theta=\pi/18\), (b) \(\theta=\pi/4\) (Hadamard), and (c) \(\theta=4\pi/9\). Averages were taken over \(10^{4}\) quantum walk realizations after \(t=100N\) time steps. RW events occurring for disorder strengths in the range \(W\in[0,0.5]\). Figure 4 portraits this quantity averaged over \(10^{4}\) walks after \(t=100N\) time steps, for chains with \(N=50,100,200,400\), and \(800\) sites, showing how the system size influences the statistics and localization behavior of RWs. We consider in Fig. 4(a) the quantum coin with \(\theta=\pi/18\). In this case, we observe that the minimum value of \(W\) in the weak disorder regime for the emergence of RWs changes with the system size, despite the high mobility of the quantum walker. The number of RW events always saturates on average at relative values no large than \(0.3\) for all chain sizes and sufficiently large disorder, a result directly related to the localization wavelength of the walker's wavefunction for quantum coins close to Pauli-Z. So, before reaching saturation the evolution of the wavepacket is generally characterized by sparse low-amplitude waves that hardly add up to produce rogue events. On the other hand, for the Hadamard quantum walk with \(\theta=\pi/4\), Fig. 4(b) shows that the minimum degree of disorder required for the emergence of RWs is reduced. In the strongly disordered regime the relative fraction of RW events is much smaller than the corresponding one for \(\theta=\pi/18\), Fig. 4(a). In addition, for quantum walks with coins tending to Pauli-X, \(\theta=4\pi/9\) in Fig. 4(c), no RWs arise for disorder strengths \(W\gtrsim 0.2\). In this regime, localization effects on the wave packet become more pronounced and the walker presents low degree of mobility. In order to deepen the understanding of these findings, we plot in Fig. 5(a) the minumum disorder strength \(W_{c}\) above which RWs can emerge as a function of the chain size \(N\), for the three previous values of \(\theta\). Regardless the coin choice, we notice that \(W_{c}\propto N^{-1/2}\), so that an increase in the chain size renders the quantum system to be more susceptible to RW events. This scaling behavior relates to the fact that the emergence of RWs in this context is induced by disorder, and so these events take place when the associated Anderson localization length \(\lambda\) decreases to a value smaller than the system size, \(\lambda<N\). In fact, in the regime of weak disorder the typical localization length of the eigenstates in quantum walks subjected to random phase shifts exhibits [45] a quadratic dependence on the inverse of the square disorder width, namely, \(\lambda=k/W^{2}\), where \(k\) is a constant. Hence the Figure 5: (a) Disorder strength \(W_{c}\) above which RWs emerge for three choices of quantum coins: \(\theta=\pi/18\) (black circles), \(\theta=\pi/4\) (red squares, Hadamard), and \(\theta=4\pi/9\) (blue triangles). The scaling behavior \(W_{c}\propto N^{-1/2}\) unveils that at \(W=W_{c}\) the localization length \(\lambda\propto 1/W^{2}\) is of the order of the chain size \(N\), independently of the quantum coin. (b) Colors blue and gray depict regions (I) and (II) respectively associated with the presence or not of RW events. Figure 6: Heatmap of the fraction of RW events in the parameter space \(W\in[0,0.5]\) and \(\theta\in[0,\pi/2]\), for a chain with \(N=100\) sites and \(t=100N\). One identifies regions in which the occurrence of RW events is very rare, such as near the points \((\theta,W)=(0,0)\) and \((\theta,W)=(\pi/2,0.5)\), as well as configurations of quantum coins and disorder strengths that promote the likely emergence of these events, as in the region \(\pi/4\lesssim\theta\lesssim\pi/2\) and \(0\lesssim W\lesssim 0.2\). condition for the emergence of RWs becomes \(k/W^{2}<N\), or \(W>(k/N)^{1/2}\), in agreement with Fig. 5(a). Figure 5(b) displays the dependence of \(W_{c}\) on the quantum coin value \(\theta\), for a chain with \(N=100\) sites and \(t=100N\). Colors blue and gray depict regions (I) and (II) respectively associated with the presence or not of RW events. We note that quantum coins near the \(\theta=0\) Pauli-Z choice are more effective in mitigating the occurrence of RWs. On the other hand, as one considers coins closer to \(\theta=\pi/2\) Pauli-X, a monotonic decay of \(W_{c}\) becomes apparent, underscoring the substantial influence of quantum coins on the occurrence of RW events in DTQWs. Finally, all the above findings can be summarized in the comprehensive mapping shown in Fig. 6 of the relative fraction of RW events in the parameter space defined by the quantum coin parameter (\(\theta\in[0,\pi/2]\)) and disorder strength (\(W\in[0,0.5]\)), for a chain with \(N=100\) sites and \(t=10N\). We first notice the absence of RWs in the weakly disordered regime, \(0\lesssim W\lesssim 0.2\), for quantum coins close to \(\theta=0\) Pauli-Z. Indeed, regardless the disorder degree the emergence of RWs is very rare in this regime. On the other hand, for quantum coins in the range \(\pi/4\lesssim\theta\lesssim\pi/2\), disorder strengths \(0\lesssim W\lesssim 0.2\) lead to the most likely occurrence of RW events, a trend that fades away as the \(\theta=\pi/2\) Pauli-X coin is approached. At last, in the intermediate and high disorder regimes, \(0.2\lesssim W\lesssim 0.5\), the relative fraction of RW events remains nearly constant for quantum coins \(0\lesssim\theta\lesssim 2\pi/9\), and then starts to decline until entering a less likely region, consistent with our previous results. ## IV Final remarks and conclusions In this work, we have delved into the intriguing phenomenon of rogue waves (RWs) within the discrete-time quantum walks (DTQW) protocol. Through a comprehensive analysis of different quantum coin configurations and degrees of disorder, we have shed light on the general properties of these rare and unpredictable events in DTQW chains. In this context, our investigation revealed a rich interplay between quantum dynamics, disorder, and coin parameters to the emergence of RW events. Notably, we have identified distinct regimes where RWs are more likely to manifest, influenced by factors such as quantum phase flucutations, disorder-induced localization, and spatial scaling. The transition from weak to strong disorder highlighted the evolution of the RW behavior, with certain coin configurations amplifying or dampening their occurrence. The identification of specific parameter ranges of disorder strength and quantum coins in which RWs are exceptionally rare or abundant contributes to the understanding of their controllability, with possible applications in various fields. In general lines, our study provides a comprehensive framework for exploring and manipulating RWs in DTQWs. The insights gained here not only enrich the overall knowledge of RWs but also offer potential avenues for harnessing their unique properties in future advances. ## V Acknowledgments This work was partially supported by the Brazilian agencies CNPq (Conselho Nacional de Desenvolvimento Cientifico e Tecnologico) and FACEPE (Fundacao de Amparo a Ciencia e Tecnologia do Estado de Pernambuco).
2309.06535
Automatic quantification of abdominal subcutaneous and visceral adipose tissue in children, through MRI study, using total intensity maps and Convolutional Neural Networks
Childhood overweight and obesity is one of the main health problems in the world since it is related to the early appearance of different diseases, in addition to being a risk factor for later developing obesity in adulthood with its health and economic consequences. Visceral abdominal tissue (VAT) is strongly related to the development of metabolic and cardiovascular diseases compared to abdominal subcutaneous adipose tissue (ASAT). Therefore, precise and automatic VAT and ASAT quantification methods would allow better diagnosis, monitoring and prevention of diseases caused by obesity at any stage of life. Currently, magnetic resonance imaging is the standard for fat quantification, with Dixon sequences being the most useful. Different semiautomatic and automatic ASAT and VAT quantification methodologies have been proposed. In particular, the semi-automated quantification methodology used commercially through the cloud-based service AMRA R Researcher stands out due to its extensive validation in different studies. In the present work, a database made up of Dixon MRI sequences, obtained from children between 7 and 9 years of age, was studied. Applying a preprocessing to obtain what we call total intensity maps, a convolutional neural network (CNN) was proposed for the automatic quantification of ASAT and VAT. The quantifications obtained from the proposed methodology were compared with quantifications previously made through AMRA R Researcher. For the comparison, correlation analysis, Bland-Altman graphs and non-parametric statistical tests were used. The results indicated a high correlation and similar precisions between the quantifications of this work and those of AMRA R Researcher. The final objective is that the proposed methodology can serve as an accessible and free tool for the diagnosis, monitoring and prevention of diseases related to childhood obesity.
José Gerardo Suárez-García, Po-Wah So, Javier Miguel Hernández-López, Silvia S. Hidalgo-Tobón, Pilar Dies-Suárez, Benito de Celis-Alonso
2023-09-12T19:19:47Z
http://arxiv.org/abs/2309.06535v1
###### Abstract ###### Abstract Childhood overweight and obesity is one of the main health problems in the world since it is related to the early appearance of different diseases, in addition to being a risk factor for later developing obesity in adulthood with its health and economic consequences. Visceral abdominal tissue (VAT) is strongly related to the development of metabolic and cardiovascular diseases compared to abdominal subcutaneous adipose tissue (ASAT). Therefore, precise and automatic VAT and ASAT quantification methods would allow better diagnosis, monitoring and prevention of diseases caused by obesity at any stage of life. Currently, magnetic resonance imaging (MRI) is the standard for fat quantification, with Dixon sequences being the most useful. Different semiautomatic and automatic ASAT and VAT quantification methodologies have been proposed. In particular, the semi-automated quantification methodology used commercially through the cloud-based service AMRA(r) Researcher (AMRA Medical AB, Linkoping, Sweden) stands out due to its extensive validation in different studies. In the present work, a database made up of Dixon MRI sequences, obtained from children between 7 and 9 years of age, was studied. Applying a preprocessing to obtain what we call total intensity maps, a convolutional neural network (CNN) was proposed for the automatic quantification of ASAT and VAT. The quantifications obtained from the proposed methodology were compared with quantifications previously made through AMRA(r) Researcher. For the comparison, correlation analysis, Bland-Altman graphs and non-parametric statistical tests were used. The results indicated a high correlation and similar precisions between the quantifications of this work and those of AMRA(r) Researcher. The final objective is that the proposed methodology can serve as an accessible and free tool for the diagnosis, monitoring and prevention of diseases related to childhood obesity. **Automatic quantification of abdominal subcutaneous and visceral adipose tissue in children, through MRI study, using total intensity maps and Convolutional Neural Networks** Jose Gerardo Suarez-Garcia\({}^{1}\)*, Po-Wah So\({}^{2}\), Javier Miguel Hernandez-Lopez\({}^{1}\), Silvia S. Hidalgo-Tobon\({}^{3,4}\), Pilar Dies-Suarez\({}^{3}\) and Benito de Celis-Alonso\({}^{1}\) Footnote *: The author JGSG was supported by the National Council of Sciences, Technologies and Humanities (CONAH-CYT) to carry out this work, through a posdoctoral scholarship. \({}^{1}\)Facultad de Ciencias Fisico-Matematicas, Benemerita Universidad Autonoma de Puebla, Puebla, Mexico \({}^{2}\)Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King's College London, United Kingdom \({}^{3}\)Departamento de Imagenologia, Hospital Infantil de Mexico Federico Gomez, Mexico City, Mexico \({}^{4}\)Departamento de Fisica, Universidad Autonoma de Mexico Iztapalapa, Mexico City, Mexico \({}^{a}\)[email protected] ## 1 Introduction Overweight and obesity in childhood is a global health problem. Between 2000 and 2016, the proportion of overweight children between the ages of 5 and 19 increased from 10% to almost 20%. Childhood overweight can lead to early onset of type 2 diabetes mellitus, as well as stigma and depression. In addition, childhood obesity is associated with an increased risk of obesity in adulthood, which has serious health and economic implications [1]. Mexico is one of the main countries in the world with the highest pravelencia values. Using the body mass index (BMI) as reference, the prevalence of overweight in children between 5 and 9 years old (BMI \(>\) 17.4) is equal to 18.8%, while for obesity (BMI \(>\) 19.8) it is equal to 18.6% [2]. However, for any BMI, each individual varies substantially in the distribution of body fat. This variation has important implications for the risk of developing different diseases [3]. It is well known that higher amounts of visceral adipose tissue (VAT) compared to the amount of abdominal subcutaneous adipose tissue (ASAT) increase cardiovascular risk, development of type 2 diabetes mellitus, liver disease, cancer, and contracting infections (such as COVID -19) [4]. Quantitative, precise and reproducible measurements of total body fat and its distribution are therefore important for the prevention, diagnosis and monitoring of diseases related to overweight and obesity both in childhood and in adulthood [5]. Dual-energy X-ray absorptiometry is a useful tool to accomplish this task. However, it makes modeling assumptions to differentiate VAT from ASAT, which produces errors in the quantifications. Also, it uses ionizing radiation and can only analyze 2D projections of the body [6]. On the other hand, Magnetic Resonance Imaging (MRI) uses non-ionizing radiation and directly measuring total body fat content and distribution, as well as skeletal tissue mass accurately and reliably [7]. Therefore, MRI is currently the gold standard for measuring body composition. In particular, the so-called Dixon technique is a rapid method that allows obtaining high-contrast images for soft tissue [8]. This type of image uses the slight differences that exist between the magnetic resonance frequencies of the protons bound to the fat and water molecules, in order to distinguish the signals coming from each one. The set of images obtained from the Dixon sequences include in-phase, out-of-phase, fat-only, and water-only images from a single acquisition. However, quantifying VAT and ASAT separately, both in children and adults, is still an interesting task to date, although the literature is still limited in studies of children [8]. Regarding semiautomatic analysis protocols, they have the disadvantage that they require the intervention of an operator or specialized personnel, resulting in a high cost, in addition to introducing variability depending on the analyst [5]. Different automatic VAT and SAT quantification methodologies have been proposed. Among them, those that apply Convolutional Neural Networks (CNNs) stand out both for the segmentation of the regions of interest, as well as for the quantification of fat deposits [9, 10, 11]. CNNs are created specifically for image analysis. Its design aims to mimic the mechanism of the visual cortex of mammals, which is assumed to be formed by groups of ordered and specialized neurons for object recognition, starting from the simplest features to the most complex patterns [12] One of the advantages of CNNs is that they automatically learn the necessary image features, without the need to be entered by the user. CNNs have been applied to solve different problems such as the classification of brain tumors [13], detection of skin lesions [14], detection of diabetes through images of heart rhythm frequencies [15], breast cancer detection [16], COVID-19 detection through X-ray images [17], among many others. Recently, for example, Schneider et al. [10] proposed a software for automatic VAT and ASAT quantification and segmentation by studying MRI of adults applying UNet-based FCN architectures and data augmentation techniques, reaching high correlation values. In another work, Devi et al. [11] developed a hybrid convolutional neural network, combining a conventional CNN and a texture layer, for VAT and ASAT segmentation of abdominal MRI images of adults, obtaining a performance that, according to the authors, exceeds the state-of-the-art methods. Regarding studies in children, Armstrong et al. [18] presented a paper in which they recognize that many conventional techniques applied in children to quantify body composition and liver fat have limitations, due to sensitivity to movement, mainly in the abdomen region due to breathing. Therefore, they developed a technique based on free-breathing radial and Cartesian MRI sequences to quantify body composition and hepatic proton-density fat fraction (PDFF) in children from 2 to 7 months of age, evaluating the feasibility for hepatic PDFF quantification using a scoring system made by a radiologist. Also Armstrong et al. [19], in another study they compared non-sedated free breathing multi echo 3D stack of radial MRI versus standard breath holding and spectroscopy techniques for fat quantification. They studied healthy and overweight children between 7 and 13 years of age with nonalcoholic fatty liver disease, evaluating the quantifications using image quality scores, linear regression and Bland Altman analysis, obtaining accurate and repeatable measurements. Kway et al. [20] developed and evaluated an automatic segmentation method for the identification of abdominal adipose tissue (AAT), deep subcutaneous adipose tissue (DSAT) and visceral adipose tissue (VAT) deposits in neonates (less than two weeks old) and children. (ages between 4.5 and 6 years). Their method was based on a CNN based on the architecture known as U-net, which was compared with manual segmentations made by an expert through the calculation of Dice scores and Bland-Altman plots. Among the semiautomatic quantification works, Peterli et al. [21], evaluated the distribution of visceral, subcutaneous, and liver fat in morbidly obese patients before and after bariatric surgery. In their work, they studied Dixon MRI sequences by applying automatic segmentation based on a statistical model (SSM), to later quantify ASAT, VAT and liver volumes through manual voxel counting. On the other hand, an outstanding semiautomatic methodology for quantifying fat and muscle compartments by studying DLXON sequences is the one used commercially through the cloud-based service AMRA(r) Researcher (AMRA Medical AB, Linkoping, Sweden). Its methodology has been described in detail and evaluated in terms of accuracy [22, 23, 24, 25, 26]. This basically consists of the following. Image calibration to fat referenced images. Atlases with ground truth labels for fat and muscle compartments are recorded to an acquired MRI data set. Quality control is performed by trained operators, who can interactively adjust and improve the final segmentation. Finally, the volumes of fat and muscle are quantified within the segmented regions [27]. Therefore, this methodology requires the intervention of an operator to perform quality control, and before performing the quantification, it is necessary to accurately segment the regions of interest. In addition, it is a commercial method, so an economic investment is necessary, making it not easily accessible to anyone. In the present work, a simple, economical and low computational methodology for the automatic quantification of VAT and ASAT was proposed. This was based on the study of Dixon sequences in phase, of male children between 7 and 9 years old, applying pre-processing techniques for the generation of what we call Total Intensity Maps. These maps included sufficient information on the regions of interest, and then, without the need to perform a precise segmentation, Convolutional Neural Networks (CNNs) proposed in two dimensions were applied to perform the quantifications. The reference standard were quantifications made previously through AMRA(r) Researcher, comparing these with those obtained in this work, using Bland-Altmann plots, regression analysis and non-parametric statistical tests. ## 2 Methodology ### Subjects In the present work, a proprietary database obtained from a collaborative project between researchers from institutions in Mexico and the United Kingdom was studied. This contained different MRI modalities of 78 mexican male children between 7 and 9 years of age, obtained at the Hospital Infantil de Mexico in 2018. Among the children studied, 3 were underweight (BMI percentile \(<\) 5), 42 normal weight (BMI percentile 5 - 85), 17 overweight (BMI percentile 85-95 ) and 16 obese (BMI percentile \(>\)95 ). ### MRI protocol All subjects were scanned using a Siemens 3T Skyra scanner (Syngo MR E11) (Siemens, Erlangen, Germany) with the dual-echo Dixon Vibe protocol, covering neck to knees. Subjects were scanned with five overlapping slabs of axial 3D spoiled gradient dual-echo images, in supine position with the arms along the sides and without localizer. Reconstruction of water-fat Dixon images was performed using the integrated scanner software. Common parameters for slabs one to three were: TR = 3.78 ms, TE = 1.23 ms, flip angle 10, bandwidth 123 Hz, 44 slices, voxel size 1.95\(\times\)1.95\(\times\)5 mm\({}^{3}\) and 256\(\times\)192 matrix, acquired during 17 seconds expiration breath-holds. Slabs four and five were acquired during free breathing with TR = 3.94 ms, TE = 2.49 ms, flip angle 10, bandwidth 123 Hz, 72 slices, voxel size 1.95\(\times\)1.95\(\times\)4 mm\({}^{3}\) and 256\(\times\)192 matrix. Viewed from the axial plane, each volume had dimensions of 192\(\times\)256\(\times\)44 voxeles. ### AMRA(r) Researcher: semiautomatic quantification methodology For the 78 study subjects, body composition semiautomated quantification technique were performed from the reconstructed water and fat images using Dixon sequences in phase and out of phase, to later analyze these through the commercially available service AMRA(r) Researcher. Briefly and as commented before, the analysis used in AMRA(r) Researcher consisted of the following steps [26]: (1) Intensity inhomogeneity correction and calibration of fat and water images [24]. (2) Ground truth labels for fat compartments were registered to the acquired volumes using non-rigid atlas based registration. (3) All datasets were visually inspected and quality controlled by an trained analysis engineer at Advanced MR Analytics (Linkoping, Sweden), detecting and correcting common artifacts such as water-fat swaps (exchange of the signal-channel for fat and water due to ambiguities), anatomy outside field of view, breathing/motion artefacts, and issues with the MR-protocol. (4) Quantification of fat, measured in liters (L), based on the calibrated images by integrating over the quality controlled labels. Finally, a report was generated. The included fat compartments were visceral adipose tissue (VAT) and abdominal subcutaneous adipose tissue (ASAT). VAT was defined as adipose tissue within the abdominal cavity, excluding adipose tissue outside the abdominal skeletal muscles and adipose tissue and lipids within and posterior of the spine and posterior of the back muscles. ASAT was defined as subcutaneous adipose tissue in the abdomen from the top of the femoral head to the top of the thoracic vertebrae T9. In each of the reports generated by AMRA\({}^{\circledR}\) Researcher, a precision (calculated from the coefficients of repeatability, i.e. the smallest detectable difference between two measurements at a 95% confidence level) was declared equal to 0.17 L for VAT and equal to 0.33 L for ASAT. ### Proposed automatic quantification methodology In order to completely automate the quantification algorithm and avoid human intervention to correct the artifact known as water-fat swap, only in-phase Dixon sequences were studied. Remembering that each subject's scan was made up of five overlapping syllables, from top to bottom, only those numbered 2 and 3 were analyzed, since they contained the region of interest. Hereafter, these were called \(V_{1}\) and \(V_{2}\) respectively. Due to the overlap, the two volumes had to be joined by choosing the appropriate slice of each. Although the volumes were obtained in a single acquisition and with the indication of holding the breath, the joining process was not a trivial task. This was mainly due to artifacts caused by breathing or movement, causing differences between the range of intensities of both volumes, misalignment and mismatch in the anatomical regions. In order to correct this situation, a set of processes was proposed to correctly join the pair of volumes of each subject. All the algorithms presented in this work were developed with the MATLAB R2022b software, on a conventional computing system (Intel Core i7 12700H CPU, 16GB RAM, RTX 3070Ti GPU). #### 2.4.1 Processes to join \(V_{1}\) and \(V_{2}\) The intensities of the voxels of \(V_{1}\) and \(V_{2}\) were normalized, varying from 0 to 1 using the method known as min-max normalization. This was done assuming that between both volumes the voxels of lower intensity corresponded to the same type of tissue, as well as the voxels of higher intensity corresponded to another tissue. Considering the axial plane, the contrast of each volume was improved by histogram equalization. First, \(V_{2}\) was completely equalized using as reference the histogram of the last 15 slices of \(V_{1}\). Subsequently, \(V_{1}\) was completely equalized using as reference the histogram of the first 15 slices of \(V_{2}\) already equalized. Intensities less than 0.05 were set to 0, which corresponded to the empty bottom of each volume. In order to find the pair of slices (one from \(V_{1}\) and another from \(V_{2}\)) that will serve to join the two volumes, only the last 8 slices of \(V_{1}\) and the first 8 slices of \(V_{2}\) were compared. These sets of slices were called \(V^{\prime}_{1}\) and \(V^{\prime}_{2}\), which had dimensions of 192\(\times\)256\(\times\)8 voxels. In each volume different regions of the body that were not of interest could be visible, such as arms, shoulders and hands. So, before comparing volumes \(V^{\prime}_{1}\) and \(V^{\prime}_{2}\), it was necessary to apply an algorithm to exclude the mentioned regions. Considering that these regions appeared separated from the region of interest, the algorithm simply started from a voxel located approximately in the central area contained in the region of interest and only all neighboring voxels that were connected to each other were retained. Once this was done, the comparison between \(V^{\prime}_{1}\) and \(V^{\prime}_{2}\) continued. A box with the smallest dimensions was sought such that it completely contained either of the two volumes \(V^{\prime}_{1}\) and \(V^{\prime}_{2}\). Voxels with intensities greater than 0 were labeled using a threshold equal to 0.5, such that voxels with intensities less than or equal to 0.5 were labeled as 1, and voxels with intensities greater than 0.5 were labeled as 2. Next, the 8 slices of \(V^{\prime}_{1}\) and the 8 slices of \(V^{\prime}_{2}\), already labeled, were compared in pairs by calculating the so-called Dice coefficient. From the 64 comparisons made, the pair of slices that obtained the highest value of the Dice coefficient were used as a reference to join the two complete volumes \(V_{1}\) and \(V_{2}\) (Fig. 1). In addition to joining the volumes, they were also centered. To do this, using the chosen slices, the pair of voxels located in the center of them were used as reference points to center and finally join the volumes \(V_{1}\) and \(V_{2}\) (Fig. 2). After centering and joining \(V_{1}\) and \(V_{2}\), both volumes ended up displaced relative to each other. However, the joined volume had to be contained in a single volume with uniform dimensions. To do this, the two slices that served to join \(V_{1}\) and \(V_{2}\) were centered within two slices of dimensions 200x200 voxels respectively. Then, these slices were joined, and subsequently the rest of the volumes were contained in a single volume with dimensions in the axial plane of 200x200 voxels and height equal to the sum of the heights of the two joined volumes. Afterwards, only 30 total slices of the joined volume were retained, with 10 from \(V_{1}\) starting from its chosen slice upward, and 20 from \(V_{2}\) starting from its chosen slice downward. The joined volume was called \(V\), which had dimensions of 200\(\times\)200\(\times\)30 voxels. As mentioned before, regions that were not of interest were excluded from volumes \(V_{1}^{\prime}\) and \(V_{2}^{\prime}\) (with 8 slices each). Then, with the joined volume \(V\) (having 30 slices in total), this task was repeated in a slightly different way. Instead of choosing a voxel located in the center of the entire volume, voxels located in the center of each of the 30 slices were searched. In each slice separately, starting from the central voxel, only the voxels that were connected to it and to each other were added. Because the volume was already centered, performing this task for each slice was more efficient than performing it at once for the entire volume. Fig. 3 shows a diagram with all the processes followed to join the volumes. #### 2.4.2 Creation of total intensity maps \(I_{asat}\) and \(I_{vat}\) for training the proposed CNNs From each joined volume \(V\), two-dimensional maps \(I_{asat}\) and \(I_{vat}\) were created. These two together formed a new volume \(V_{I}\) with dimensions 200\(\times\)200\(\times\)2 voxels. The volumes \(V_{I}\) were used as inputs to two proposed two-dimensional CNNs whose tasks were the quantification of ASAT and VAT respectively. The volumes \(V_{I}\) were considered by the CNNs as 2D images with two different channels. The image \(I_{asat}\) was created from a volume \(V_{asat}\), which contained an approximate segmentation of the region where the ASAT should be located. On the other hand, the image Figure 1: **Pair of slices chosen to join \(V_{1}\) and \(V_{2}\). In (a) and (c) examples of slices of the volumes \(V_{1}^{\prime}\) and \(V_{2}^{\prime}\) (normalized and equalized) are shown respectively. In (b) and (d) the previous slices are shown with the voxels labeled within a box that completely contained them in volumes \(V_{1}^{\prime}\) and \(V_{2}^{\prime}\). These two slices were used to join the volumes since they obtained the highest value of the Dice coefficient. Furthermore, the slices served to center the volumes taking as reference the center of the chosen slices (red crosses).** Figure 3: **Volume joining.** Processes carried out to find the pair of slices from \(V_{1}^{\prime}\) and \(V_{2}^{\prime}\) that served as a reference to join the two volumes \(V_{1}\) and \(V_{2}\) and thus obtain, for each subject, a single joined volume \(V\) normalized, equalized and centered. Figure 2: **Joining and centering of \(V_{1}\) and \(V_{2}\).** View from the coronal (a) and sagittal (b) planes of the join of \(V_{1}\) and \(V_{2}\) without applying any process to them. View from the coronal (c) and sagittal (d) planes of the normalized, equalized and centered \(V_{1}\) and \(V_{2}\), using as reference the pair of slices chosen from \(V_{1}^{\prime}\) and \(V_{2}^{\prime}\) respectively. was created from a volume \(V_{vat}\), which contained an approximate segmentation of the region where the VAT should have been located. The images \(I_{asat}\) and \(I_{vat}\) were called total intensity maps. To obtain these two images, the following was done for each subject. The volume \(V\) was smoothed with a median filter of size 3\(\times\)3\(\times\)3 voxels and was subsequently normalized from 0 to 1. Then, a resized volume proportional to 85% of the original volume was obtained. This volume was smoothed with a median filter of size 7\(\times\)7\(\times\)7 voxels. All voxels with intensities greater than 0 were set equal to 1. A filling process was carried out to eliminate possible holes in the resized volume. This volume was used as a mask over the original volume \(V\), so that all voxels within the mask with intensities greater than a threshold equal to 0.75 were set to 0. The remaining voxels were set to 1, and again a filling process to eliminate possible holes was applied. This volume served as a new mask over the original volume \(V\) used in the following way. By eliminating the voxels that were inside the mask, the volume \(V_{asat}\) was obtained, which contained an approximate segmentation of the region that must have contained the ASAT. On the other hand, by conserving only the voxels that were within the last mask, the volume \(V_{vat}\) was obtained, which contained an approximate segmentation of the region that should have contained the VAT. An example of the process described above is shown in Fig. 4. Although the figure shows a slice as an example, the process was performed with the entire volume \(V\) at the same time. To obtain the total intensity maps \(I_{asat}\) and \(I_{vat}\) the following was done. All voxels of the volume \(V_{asat}\) whose intensities were different from 0 were set equal to 1 and the following equation was applied: \[I_{asat}(x,y)=\sum_{z=1}^{30}V_{asat}(x,y,z) \tag{1}\] so that all the intensities of the voxels located in the same position of each slice with dimensions equal to 200\(\times\)200 voxels were added. Thus, the total intensity map \(I_{asat}\) with dimensions equal to 200\(\times\)200\(\times\)1 voxels was obtained (Fig. 5(a)). To obtain the second map, the following was done. Figure 4: **Approximate segmentation of the regions that should have contained the VAT and ASAT.** (a) Slice of a volume \(V\). (b) Smoothing. (c) Volume resized and smoothed again. (d) Voxels set to 1, application of hole filling process and mask creation. (e) Volume obtained after applying the mask to the volume shown in (b), eliminating voxels located outside it. (f) Elimination of voxels with intensities greater than 0.75. (g) Voxels equal to 1. (h) Application of hole filling processes and creation of new mask. Applying this last mask to the volume shown in (a), the volume (i) \(V_{asat}\) was obtained by eliminating the voxels inside it, and the volume (j) \(V_{vat}\) by eliminating the voxels outside it. On the other hand, all voxels of the volume \(V_{vat}\) whose intensities were less than 0.7 were set to 0, while voxels with intensities greater than or equal to 0.7 retained their value. Then, the following equation was applied: \[I_{vat}(x,y)=\sum_{z=1}^{30}V_{vat}(x,y,z) \tag{2}\] so that all the intensities of the voxels located in the same position of each slice with dimensions equal to 200\(\times\)200 voxels were also added. Thus, the total intensity map \(I_{vat}\) was obtained with dimensions equal to 200\(\times\)200\(\times\)1 voxels. Finally, a volume \(V_{I}\) was created from the two aforementioned maps. This volume had \(I_{asat}\) as its first slice and \(I_{vat}\) as its second, thus forming a volume with dimensions 200\(\times\)200\(\times\)2 voxels (Fig. 5(b )). The volumes \(V_{I}\) obtained from each subject were used as inputs for the proposed CNNs that will be described in the following section. #### 2.4.3 Proposed CNNs Two CNN architectures were proposed to quantify ASAT and VAT respectively. Both CNNs had a similar structure and studied the same volumes \(V_{I}\) of each subject. From 78 subjects, 42 were randomly chosen for training, 18 for validation and 18 for testing. Table 1 shows their distribution according to their weight classification. Fig. 6 shows the architecture of the CNN to quantify the ASAT. There were four blocks with the same layers. The first was a convolution layer with 128 filters of size 3\(\times\)3 with stride equal to 1\(\times\)1; the second was an average pooling layer of size 2\(\times\)2 with stride equal to 2\(\times\)2 and same padding; the third was a Leaky ReLu activation layer with scale equal to 0.15; and the fourth was a dropout layer with probability equal to 0.5. After the mentioned blocks, there was a fully connected layer of 10 nodes, followed by a batch normalization layer and a Leaky ReLu activation layer with scale equal to 0.15. Then there was a BiLSTM layer with 10 hidden units and 20\(\times\)1 hidden states and a dropout layer with probability 0.2. Finally there was a regression layer with a single output node. To avoid overfitting, data augmentation was used through random rotations varying from -30 to 30 degrees. A minibatch equal to 42 was used (this being the total number of training samples), applying the SGDM optimizer with a constant learning range equal to 0.005. The CNN was trained for 30,000 epochs with validations every 10 epochs. \begin{table} \begin{tabular}{c c c c c c} \hline **Subset** & **Low weight** & **Normal weight** & **Overweight** & **Obesity** & **Total** \\ \hline Training & 3 & 20 & 8 & 11 & 42 \\ Validation & 0 & 11 & 5 & 2 & 18 \\ Testing & 0 & 11 & 4 & 3 & 18 \\ Total & 3 & 42 & 17 & 16 & 78 \\ \hline \end{tabular} \end{table} Table 1: Distribution of subjects according to their weight classification. Figure 5: Total intensity maps. (a) Total intensity map \(I_{asat}\), obtained from the volume \(V_{asat}\). (b) Total intensity map \(I_{vat}\), obtained from the volume \(V_{vat}\). Both maps formed the slices of a volume \(V_{I}\) of dimensions 200\(\times\)200\(\times\)2 voxels that was used as input to the proposed CNNs. Fig. 7 shows the CNN architecture to quantify the VAT. Its architecture was similar to that for quantifying the ASAT. Their differences were the following. For the second CNN, Leaky ReLu activation layers with scale equal to 0.5 were used; the probability of dropout layers were equal to 0.3; rotation angles for augmentation ranged from -10 to 10 degrees; and the constant learning range was equal to 0.001. The expected outputs of each CNN were the quantizations reported by AMRA(r) Researcher. After training the CNNs, they were applied to the training, validation and testing subjects. In order to compare the quantifications between AMRA(r) Researcher and those made by the CNNs, Bland-Altman plots were created, correlation analysis was performed and the non-parametric statistical test called Wilcoxon signed rank test was applied. ## 3 Results Fig. 8 shows the correlation and Bland-Altman plots obtained by comparing the VAT quantifications made by AMRA(r) Researcher and those obtained in the present work, this for the training, validation and testing subjects respectively. Something similar is shown in Fig. 9 when comparing the ASAT quantifications. The correlation graphs indicated the value of \(R^{2}\), the intercept and slope of the fit line. Table 2 shows the p-values that indicated whether there was no significant correlation between the compared quantifications (null hypothesis). All results indicated a high correlation which was statistically significant. Bland-Altman plots indicated the average difference of the quantifications, the 95% limits of agreement and the reproducibility coefficient (RCP). For the quantification of VAT, coefficients of variation (CV) equal to 7.3%, 16% and 17%, and RCP equal to 0.08 L, 0.13 L and 0.17 L, were obtained for the training, validation and testing subjects. Figure 6: CNN architecture to quantify the ASAT. The different blocks and layers that formed the proposed CNN are shown. Figure 7: CNN architecture to quantify the VAT. The different blocks and layers that formed the proposed CNN are shown. respectively. For the quantification of ASAT, CV equal to 1.8%, 10% and 8.6%, and RCP equal to 0.08 L, 0.32 L and 0.32 L, were obtained for the training, validation and testing subjects respectively. Table 3 shows the results from the Wilcoxon signed rank test between the AMRA(r) Researcher quantifications and those made in the present work, for the VAT and ASAT, with the training, validation and test subjects respectively. All tests indicated that there were no significant statistical differences (p-value \(>\) 0.05). ## 4 Discussion and conclusions The present work used as a reference standard the quantifications made by the widely validated commercial measurement system called AMRA(r) Researcher. Within the AMRA(r) Researcher \begin{table} \begin{tabular}{c c c} \hline \hline **Subset** & **p-value (VAT)** & **p-value (ASAT)** \\ \hline Training & 0 & 0 \\ Validation & 9.503\(\times 10^{-11}\) & 1.511\(\times 10^{-14}\) \\ Testing & 3.586\(\times 10^{-12}\) & 1.039\(\times 10^{-15}\) \\ \hline \hline \end{tabular} \end{table} Table 2: **Correlation p-values. p-values obtained to know if the correlations between AMRA® Researcher quantifications and those made in the present work were significant.** Figure 8: **Bland-Altman and correlation plots for VAT. (a), (b) and (c) show the correlation plots, while (d), (e) and (f) show the Blan-Altman plots, for the training, validation and testing subjects respectively for the quantification of VAT.** \begin{table} \begin{tabular}{c c c} \hline \hline **Subset** & **p-value (VAT)** & **p-value (ASAT)** \\ \hline Training & 0.2998 & 0.3093 \\ Validation & 0.2656 & 0.9478 \\ Testing & 0.7757 & 0.1765 \\ \hline \hline \end{tabular} \end{table} Table 3: **Results from the Wilcoxon signed rank test. The p-values obtained after applying the Wilcoxon signed rank test between the AMRA® Researcher quantifications and those made in the present work are shown.** reports (obtained during 2018, the year on which the measurements were carried out), precisions equal to 0.17 L and 0.33 L were indicated for the quantifications of VAT and ASAT respectively. This precision was defined as a repeatability coefficient, that is, the smallest detectable difference between two measurements with a confidence level of 95%, made under the same conditions, with the same imaging protocol and quantification methodology. In this work, the same Dixon sequences used by AMRA\({}^{\circledR}\) Researcher were studied (so the same imaging protocol was used). However, a different quantification methodology was proposed. Therefore, to make a comparison between the AMRA\({}^{\circledR}\) Researcher quantifications and those made in this work, the reproducibility coefficient (RPC) was used, which is defined as the value under which the absolute differences between two measurements would fall within 95 % probability, considering that these were calculated under different conditions or using different measurement systems [28]. As shown in the results, for the quantification of VAT a RCP \(\leq\) 0.17 L was obtained, while for the quantification of ASAT a RCP \(\leq\) 0.32 L was obtained. Therefore, it can be concluded that the Measurements made in this work were within the precision reported by AMRA\({}^{\circledR}\) Researcher. Although today the precision of AMRA\({}^{\circledR}\) Researcher may possibly be greater, it would be necessary to obtain new quantifications carried out through its updated methodology, compare them with those made in the present work and then compare the precision of both methodologies. Although the work depended on DIXON sequences generated using the AMRA\({}^{\circledR}\) Researcher imaging protocol, the proposed methodology could be verified on other databases. This is because these types of sequences are commonly obtained by different resonators. However, there may be an exception regarding the way slabs are obtained, since this procedure is specific to AMRA\({}^{\circledR}\) Researcher. Even considering the above, in general it would be more convenient to study a single volume obtained at once containing the entire region of interest, omitting the process of joining slabs, and reducing errors in quantifications due to errors made during the joining process. In this case, the proposed methodology could be applied, excluding the union process and appropriately choosing the 30 slices from which the total intensity maps would be obtained. The proposed two-dimensional CNNs analyzed a set of volumes \(V_{I}\) formed by what we call Figure 9: **Bland-Altman and correlation plots for ASAT.** (a), (b) and (c) show the correlation plots, while (d), (e) and (f) show the Blan-Altman plots, for the training, validation and testing subjects respectively for the quantification of ASAT. \(I_{vat}\) and \(I_{asat}\). These maps were obtained by adding the intensities of the voxels that resulted from approximately segmenting the regions that should have contained the VAT and ASAT respectively. Then, the proposed CNNs considered the volumes \(V_{I}\) as a 2D image with two different channels. Training CNNs in two dimensions required a much smaller amount of computational resources compared to studying 3D volumes. Therefore, this was an advantage of the proposed two-dimensional methodology. Furthermore, the proposed CNNs had much simpler architectures than many others used by different works, obtaining excellent results in this case. On the other hand, after studying the training, validation and testing subjects, it was observed that there was consistency in the results, thus demonstrating their reproducibility. When the Dixon-in-phase sequences were studied, the voxels with high intensities did not correspond solely to the fat signal, since it could have been a combination of this with the water signal. Furthermore, volumes \(V_{I}\) were made up of a small number of slices (30 in total), so they did not necessarily cover the entire region that contained the VAT and ASAT. Also, no anatomical reference was used, except for the choice of slabs numbered 2 and 3. Methodologies from other works (including AMRA(r) Researcher) needed to first accurately segment the fat deposits, then decided which ones were part of VAT and ASAT, and finally quantified them. For the quantifications of this work, it was hypothesized that the approximate segmentation made was sufficient to implicitly relate it to the amount of fat to be studied. The proposed methodology had only the objective of quantifying fat without having to locate or segment it precisely. It was known that the main strength of CNNs was the automatic search for abstract patterns in images, with the aim of successfully performing various tasks such as segmentation, classification or detection. Therefore, through the adequate training of the proposed CNNs using the total intensity maps \(I_{asat}\)\(I_{vat}\) as inputs, without requiring precise segmentations and without further anatomical considerations, it was possible to successfully quantify the VAT and ASAT with similar accuracy to AMRA(r) Researcher. Among the limitations of this work, it is found that the database studied was made up of a small number of samples. Additionally, the study subjects had different BMIs, and there was an imbalance between the total number of samples for each weight classification. Although accurate results were obtained, it can be deduced that the methodology had a bias towards subjects with a normal weight, since those were the ones who had the greatest number of samples during training. On the other hand, this work analyzed the DIXON sequence in phase, thus avoiding the necessary correction of the artifact known as water-fat swap, but wasting the use of fat-only images which contained more useful and explicit information to perform the quantifications. Also, the total intensity maps \(I_{asat}\) and \(I_{vat}\) lost spatial information when reducing a 3D volumes to a 2D images. Therefore, these maps were mostly affected by artifacts generated by movement. As this was a study conducted in children between 7 and 9 years old, the appearance of these artifacts was more likely since the breath-hold condition may not always have been met, nor the subjects were completly at rest during the complete acquisition of the MRI sequences. Future work should apply the proposed methodology in databases with a greater number of samples with balanced classes and performing cross-validations. Since the study was restricted to male children aged between 7 and 9 years, the proposed method could be applied to subjects of different ages, both children and adults, as well as men and women. Furthermore, fat-only DIXON sequences should be studied, proposing an automatic method for correcting the water-fat-swap artifact and thus taking advantage of the information that this type of sequence offers for the required quantification tasks. Also, an algorithm could be implemented which would automatically choose the best anatomical region of the volumes to perform the quantification, so that the total intensity maps \(I_{asat}\) and \(I_{vat}\) would contain a greater amount of useful information for train the CNNs. Finally, other CNN architectures could be proposed. In conclusion, an automatic, simple, reproducible and economical methodology for quantifying ASAT and VAT in children was proposed, with low demand for computational resources, based on the analysis of what we called total intensity maps and CNNs in two dimensions with a simple architecture, achieving the precision of the commercial AMRA(r) Researcher quantification method. In this work, Dixon sequences commonly obtained in different scanners were studied, making the proposed methodology accessible and reproducible by independent studies, in order to corroborate the results and implement improvements. In the end, all of the above had the final objective that the proposed methodology can serve as an accessible and free tool for the diagnosis, monitoring and prevention of diseases related to overweight and obesity in children.
2305.19918
Fully Dynamic Submodular Maximization over Matroids
Maximizing monotone submodular functions under a matroid constraint is a classic algorithmic problem with multiple applications in data mining and machine learning. We study this classic problem in the fully dynamic setting, where elements can be both inserted and deleted in real-time. Our main result is a randomized algorithm that maintains an efficient data structure with an $\tilde{O}(k^2)$ amortized update time (in the number of additions and deletions) and yields a $4$-approximate solution, where $k$ is the rank of the matroid.
Paul Dütting, Federico Fusco, Silvio Lattanzi, Ashkan Norouzi-Fard, Morteza Zadimoghaddam
2023-05-31T14:55:47Z
http://arxiv.org/abs/2305.19918v1
# Fully Dynamic Submodular Maximization over Matroids ###### Abstract Maximizing monotone submodular functions under a matroid constraint is a classic algorithmic problem with multiple applications in data mining and machine learning. We study this classic problem in the fully dynamic setting, where elements can be both inserted and deleted in real-time. Our main result is a randomized algorithm that maintains an efficient data structure with an \(\tilde{O}(k^{2})\) amortized update time (in the number of additions and deletions) and yields a 4-approximate solution, where \(k\) is the rank of the matroid. ## 1 Introduction Thanks to the ubiquitous nature of "diminishing returns" functions, submodular maximization is a central problem in unsupervised learning with multiple applications in different fields, including video analysis (Zheng et al., 2014), data summarization (Lin and Bilmes, 2011; Bairi et al., 2015), sparse reconstruction (Bach, 2010; Das and Kempe, 2011), and active learning (Golovin and Krause, 2011; Amanatidis et al., 2022). Given a submodular function \(f\), a universe of elements \(V\), and a family \(\mathcal{F}\subseteq 2^{V}\) of subsets of \(V\) the submodular maximization problem consists in finding a set \(S\in\mathcal{F}\) that maximizes \(f(S)\). A classic choice for \(\mathcal{F}\) are the capacity constraints (a.k.a. \(k\)-uniform matroid constraints) where every subset \(S\) of cardinality at most \(k\) is feasible. Another common restriction that generalizes capacity constraints and comes up in many real-world scenarios are matroid constraints. Submodular maximization under matroid constraints is NP-hard, although efficient approximation algorithms exist for this task in both the centralized and streaming setting (Fisher et al., 1978; Calinescu et al., 2011; Chakrabarti and Kale, 2015; Ene and Nguyen, 2019). One fundamental limitation of these algorithms is that they are not well-suited to handle highly dynamic datasets, where elements are added and deleted continuously. Many real-world applications exhibit such dynamic behaviour; for example, Dey et al. (2012) crawled two snapshots of 1.4 million New York City Facebook users several months apart and reported that 52% of the users changed their profile privacy settings during this period. Similarly, TikTok processes millions of video uploads and deletions each day, while also Snapchat processes millions of message uploads and deletions daily. In such settings, it is essential to quickly perform basic machine learning tasks, such as active learning or data summarization, so it is crucial to design _fully dynamic_ algorithms that can _efficiently_ process streams containing not only insertions but also an arbitrary number of deletions, with small processing time per update. For these reasons, many problems have been studied in the dynamic setting, even if it is notoriously difficult to obtain efficient algorithms in this model. For monotone submodular maximization with a cardinality constraint, a \((2+\varepsilon)\)-approximation algorithm with poly-logarithmic amortized update time (with respect to the length of the stream) was designed by Lattanzi et al. (2020); subsequently, this result has been proved to be tight by Chen and Peng (2022). In the case of submodular maximization with matroid constraints, algorithms have been proposed only for specialized dynamic settings, namely sliding windows (Chen et al., 2016; Epasto et al., 2017) and deletion robustness (Dutting et al., 2022; Mirzasoleiman et al., 2017; Zhang et al., 2022b). Our contribution.In this paper we propose the first fully dynamic algorithm for submodular maximization under a matroid constraint with amortized running time that is sublinear in the length of the stream. Our randomized algorithm processes a stream of arbitrarily interleaved insertions and deletions with an (expected) amortized time per update that is \(\tilde{O}(k^{2})\)*. Crucially, it also continuously maintains a solution whose value is (deterministically), after each update, at least \(\frac{1}{4}\) of the optimum on the available elements. Footnote *: In this work, \(\tilde{O}\) hides factors poly-logarithmic in \(n\) (the number of elements in the stream) and \(k\) (the rank of the matroid). Technical challenges.While many known algorithms handle insertions-only streams, it is challenging to efficiently handle deletions: removing one element from a candidate solution may make necessary to recompute a new solution from scratch using _all_ the elements arrived in previous insertions. This is the reason why well-known techniques for the centralized or streaming framework cannot be applied directly in the dynamic setting without suffering a linear amortized update time \(\Omega(n)\) (see Appendix C for further discussion). The fully-dynamic algorithm for cardinality constraint (Lattanzi et al., 2020) addresses this phenomenon via a two-dimensional bucketing data structure that allows to efficiently recover elements with large enough contribution to the current solution (and can be used to quickly recompose a good solution after a deletion). Unfortunately, that approach crucially depends on the nature of the constraint and does not extend to more structured constraints as matroids. The key difficulty is that when an element of an independent set in a matroid gets deleted, only a subset of the elements can replace it, according to the matroid constraint. This is a crucial difference with cardinality constraints, where all elements are interchangeable. Our techniques.In this paper, we also design and analyze a data structure that is organized in levels, each one providing robustness at different scales. In addition, we carefully design an update rule that simulates in real-time the behavior of the classic Swapping algorithm for submodular maximization under matroid constraint (Chakrabarti and Kale, 2015). A key insight of our approach is that one can reorder and delay the addition or swapping of the elements with lower robustness without losing the simplicity and effectiveness of the Swapping algorithm. Interestingly, our construction simplifies substantially that of Lattanzi et al. (2020) as it removes one of the two dimensions of the dynamic data structure. Finally, we highlight a speed-up to the swapping algorithm that reduces the number of matroid independence queries by a factor \((k/\log k)\). This result may be of independent interest. Additional related works.Independently and in parallel from the work of Lattanzi et al. (2020); Monemizadeh (2020) achieved the same approximation guarantee with \(\tilde{O}(k^{2})\) amortized update time were \(k\) is the cardinality constraint. An area of research that is very close to the fully dynamic setting is robust submodular optimization (Orlin et al., 2018; Bogunovic et al., 2017; Mirzasoleiman et al., 2017; Mitrovic et al., 2017; Kazemi et al., 2018; Avdiukhin et al., 2019; Zhang et al., 2022a). In this setting, the goal is to select a summary of the whole dataset that is robust to \(d\) adversarial deletions; crucially the number \(d\) of deletions is known to the algorithm and typically all the deletions happen after the insertion in the stream. The results in this line of research do not apply to our dynamic setting where the number of deletions is arbitrary and deletions are interleaved with insertions. ## 2 Preliminaries We consider a set function \(f:2^{V}\rightarrow\mathbb{R}_{\geq 0}\) on a (potentially large) ground set \(V\). Given two sets \(X,Y\subseteq V\), the _marginal gain_ of \(X\) with respect to \(Y\), \(f\left(X\mid Y\right)\), quantifies the change in value of adding \(X\) to \(Y\) and is defined as \(f\left(X\mid Y\right)=f(X\cup Y)-f(Y)\). When \(X\) consists of a singleton \(x\), we use the shorthand \(f(x\mid Y)\) instead of \(f(\{x\}\mid Y)\). Function \(f\) is called _monotone_ if \(f\left(e\mid X\right)\geq 0\) for each set \(X\subseteq V\) and element \(e\in V\), and _submodular_ if for any two sets \(X\subseteq Y\subseteq V\) and any element \(e\in V\setminus Y\) we have \[f\left(e\mid X\right)\geq f\left(e\mid Y\right).\] Throughout the paper, we assume that \(f\) is monotone and that it is _normalized_, i.e., \(f(\emptyset)=0\). We model access to the submodular function \(f\) via a value oracle that computes \(f(S)\) for given \(S\subseteq V\). Submodularity under a matroid constraint.A non-empty family of sets \(\mathcal{M}\subseteq 2^{V}\) is called a _matroid_ if it satisfies the following properties: * _Downward-closure_ if \(A\subseteq B\) and \(B\in\mathcal{M}\), then \(A\in\mathcal{M}\) * _Augmentation_ if \(A,B\in\mathcal{M}\) with \(|A|<|B|\), then there exists \(e\in B\) such that \(A+e\in\mathcal{M}\). For the sake of brevity, in this paper we slightly abuse the notation and for a set \(X\) and an element \(e\), use \(X+e\) to denote \(X\cup\{e\}\) and \(X-e\) for \(X\setminus\{e\}\). We call a set \(A\subseteq 2^{V}\)_independent_, if \(A\in\mathcal{M}\), and _dependent_ otherwise. An independent set that is maximal with respect to inclusion is called a _base_; all the bases of a matroid share the same cardinality \(k\), which is referred to as the _rank_ of the matroid. The problem of maximizing a function \(f\) under a _matroid constraint_\(\mathcal{M}\) is defined as selecting a set \(S\subseteq V\) with \(S\in\mathcal{M}\) that maximizes \(f(S)\). Similarly to what is done for the submodular function, we assume access to an independence oracle that takes in input \(S\subseteq V\) and outputs whether \(S\) is independent with respect to the matroid or not. Fully dynamic model.Consider a stream of exactly \(n\) insertion and \(n\) deletion operations chosen by an oblivious adversary. Denote by \(V_{i}\) the set of all elements inserted and not deleted up to the \(i\)-th operation. Let \(O_{i}\) be an optimum solution for \(V_{i}\) and denote \(\mathrm{OPT}_{i}=f(O_{i})\). Our goal is to design a dynamic data structure with two key properties. On the one hand, we want the data structure to maintain, at the end of each operation \(i\), a good feasible solution \(S^{i}\subseteq V_{i}\). In particular, we say that an algorithm is an \(\alpha\)-approximation of the best (dynamic) solution if \(\mathrm{OPT}_{i}\leq\alpha f(S^{i})\), for all \(i=1,\ldots,2n\). On the other hand, we are interested in updating our data structure efficiently. We measure efficiency in terms of the amortized running time, i.e., the average per-operation computation: we say that an algorithm has amortized running time \(t\) if its expected total running time to process any stream of \(2n\) insertions and deletions is at most \(2nt\).++ Throughout this paper, we refer to running time as the total number of submodular function evaluations (value oracle) and independent set evaluations with respect to the matroid (independence oracle). This is a standard practice in submodular optimization as these two oracles typically dominates the running time of optimization algorithms. Insertion-only streams.The fully dynamic model can be considered -- to some extent -- a generalization of the insertion-only streaming model. There, an arbitrary sequence of sole insertions is passed to the algorithm that is tasked with retaining a good solution (with respect to the offline optimum), while using only little "online" memory. A key ingredient in our analysis is the Swapping algorithm by Chakrabarti and Kale (2015) that is a simple yet powerful routine for submodular maximization with matroid constraint in the streaming setting. Swapping maintains a feasible solution and, for each new arriving element, it adds it to the solution if either it does not violate the matroid constraint or it is possible to swap it with some low-value element++. We use a slightly modified version of the original algorithm (see pseudocode for details); namely, the weight of a new element is computed as its marginal value with respect to the set \(S^{\prime}\) of all the elements that _at some point_ were in the solution. We refer to Appendix A for a formal proof of the fact that our modified version of Swapping still retains the approximation guarantees we want: Footnote ‡: With ties in line 8 solved in any consistent way. **Theorem 2.1**.: _For any (possibly adaptive) stream of elements in \(V\), Swapping outputs a deterministic \(4\)-approximation to the best (offline) independent set in \(V.\)_ ## 3 The Algorithm In the main body of the paper we present and analyze a simplified version of our data structure whose amortized running time depends poly-logarithmically on a parameter \(\Delta\) of the function \(f\): \[\Delta=\frac{\max_{x\in V}f(x)}{\min_{T\subseteq V,x\notin T_{0}}f(x\mid T)},\] where with \(T_{0}\) we denote the set of all the elements with \(0\) marginal contribution with respect to \(T\). In Appendix B, we show how to replace this dependence in \(\Delta\) with a \(O(k/\varepsilon)\) term for any chosen precision parameter \(\varepsilon\) that influences the approximation factor in an additive way. To further simplify the presentation, we also assume without loss of generality that our algorithm knows the number \(n\) of insertions and deletions in advance, and that \(n\) is a power of 2. We show in Section 6 a simple way to avoid this assumption without affecting the approximation guarantee. We are ready to introduce our algorithm. At a high level, it carefully maintains the stream of elements in a data structure characterized by a small amortized update time and that mimics the behavior of Swapping, at each insertion or deletion operation. Our data structure contains \(L+1\) levels, with \(L=\log n\). Each one of these levels is characterized by four sets of elements: a partial solution \(S_{\ell}\), an auxiliary set \(S^{\prime}_{\ell}\) that contains \(S_{\ell}\) and some elements that used to belong to \(S_{\ell}\) but were later swapped out from it, a set \(A_{\ell}\) of candidate elements, that meet certain criteria and are considered _good_ addition to the solution, and a buffer \(B_{\ell}\) of still not processed elements. Moreover, the invariants that \(|A_{\ell}|\) and \(|B_{\ell}|\) are smaller than \(n/2^{\ell}\) are enforced. We claim that the solution of the last level, i.e., \(S_{L}\) (that plays the role of \(S^{i}\)), is consistently a constant factor approximation of \(\mathrm{OPT}_{i}\), at the end of each operation \(i\). We describe how the data structure is maintained; this clarifies the details of our approach. Initialization.At the beginning of the stream, the routine Initialization is called. It takes in input \(n\) and initializes \(\Theta(\log n)\) empty sets; the ones described above: \(S_{\ell},S^{\prime}_{\ell},A_{\ell}\) and \(B_{\ell}\) for all \(\ell=0,1,\ldots,L\). ``` 1:Input:\(n\) 2:\(L\leftarrow\log n\) 3:Initialize empty sets \(A_{\ell},S_{\ell},S^{\prime}_{\ell},B_{\ell}\) \(\forall\,0\leq\ell\leq L\) ``` **Algorithm 2** Initialization Handling insertions.When a new element \(e\) is inserted, it gets immediately added to all the buffers (line 1 of Insertion). This addition induces the call of another routine, Level-Construct, on the level \(\ell\) with smallest index such that the buffer \(B_{\ell}\) exceeds a certain cardinality (namely when \(|B_{\ell}|\geq n/2^{\ell}\), see line 2 of Insertion). Such level always exists by our choice of \(L\). ``` 1:\(B_{\ell}\gets B_{\ell}+e\)\(\forall\,0\leq\ell\leq L\) 2:if there exists an index \(\ell\) such that \(|B_{\ell}|\geq\frac{n}{2^{\ell}}\)then 3: Let \(\ell^{\star}\) be such \(\ell\) with lowest value 4: Call Level-Construct(\(\ell^{\star}\)) ``` **Algorithm 3** Insertion(\(e\)) Handling deletions.When an element \(e\) is deleted from the stream, then the data structure is updated according to Deletion. Element \(e\) is removed from all the candidate elements sets \(A_{\ell}\) and buffers \(B_{\ell}\) (lines 1 and 2) and causes a call of Level-Construct on the smallest-index level such that \(e\in S_{\ell}\) (line 5). While Insertion always induces a call of Level-Construct, Deletion only causes it if the deleted element belongs to some partial solution \(S_{\ell}\). ``` 1:\(A_{\ell}\gets A_{\ell}-e\)\(\forall\,0\leq\ell\leq L\) 2:\(B_{\ell}\gets B_{\ell}-e\)\(\forall\,0\leq\ell\leq L\) 3:if\(e\in S_{\ell}\) for some \(\ell\)then 4: Let \(\ell\) be the smallest index such that \(e\in S_{\ell}\) 5: Call Level-Construct(\(\ell\)) ``` **Algorithm 4** Deletion(\(e\)) Level-Construct.We now describe the main routine of our data structure: Level-Construct. A call to this routine at level \(\ell\) triggers some operations relevant to sets at level \(\ell\), and it then recursively runs Level-Construct at level \(\ell+1\). Therefore Level-Construct(\(\ell\)) is essentially responsible for reprocessing the whole data structure at all levels \(\ell,\ell+1,\cdots,L\). When it is called on some level \(\ell\), all the sets associated to that level \((S_{\ell},S^{\prime}_{\ell},A_{\ell}\) and \(B_{\ell})\) are reinitialized: the candidate elements set \(A_{\ell}\) is initialized with the elements in \(A_{\ell-1}\) and \(B_{\ell-1}\) (line 1), the buffer \(B_{\ell}\) is erased (line 2), while \(S_{\ell}\) and \(S^{\prime}_{\ell}\) are copied from the previous level (lines 3 and 4). Then, the following iterative procedure is repeated, until the cardinality of \(A_{\ell}\) becomes smaller or equal to \(n/2^{\ell}\): first, all the elements in \(A_{\ell}\) that would not be added to \(S_{\ell}\) by Swapping are filtered out (lines 9 to 11), then, if the cardinality of \(A_{\ell}\) is still large enough (i.e., \(|A_{\ell}|\geq n/2^{\ell}\), see line 12) an element \(e\) from it is drawn uniformly at random and is added to the solution and to \(S^{\prime}_{\ell}\) (lines 14 and 15); note that if \(S_{\ell}+e\notin\mathcal{M}\), then \(e\) needs to be swapped with some element \(s_{e}\) in the solution (see line 8). Two important implementation details are worth mentioning here: \((i)\) every time an element \(e\) is added to a partial solution \(S_{\ell}\), also the information about the weight \(w(e)\) it has at the moment is stored in \(S_{\ell}\); \((ii)\) the partial solutions \(S_{\ell}\) are maintained sorted in increasing order of weight. Note that these two points do not entail any call of the value or independence oracles. ``` 1:\(A_{\ell}\gets A_{\ell-1}\cup B_{\ell-1}\) 2:\(B_{\ell}\leftarrow\emptyset\) 3:\(S_{\ell}\gets S_{\ell-1}\) 4:\(S^{\prime}_{\ell}\gets S^{\prime}_{\ell-1}\) 5:repeat 6:for any element \(e\in A_{\ell}\)do 7:\(w(e)\gets f(e\mid S^{\prime}_{\ell})\) 8:\(s_{e}\leftarrow\operatorname*{argmin}\{w(y)\mid y\in S_{\ell}\wedge S_{\ell}-y +e\in\mathcal{M}\}\) 9:\(E_{\ell}\leftarrow\{e\in A_{\ell}\mid S_{\ell}+e\in\mathcal{M}\}\) 10:\(F_{\ell}\leftarrow\{e\in A_{\ell}\setminus E_{\ell}\mid w(e)>2\,w(s_{e})\}\) 11:\(A_{\ell}\gets E_{\ell}\cup F_{\ell}\) 12:if\(|A_{\ell}|\geq\frac{n}{2^{\ell}}\)then 13: Pop \(e\) from \(A_{\ell}\) uniformly at random 14:\(S_{\ell}\gets S_{\ell}+e-s_{e}\) 15:\(S^{\prime}_{\ell}\gets S^{\prime}_{\ell}+e\) 16:until\(|A_{\ell}|<\frac{n}{2^{\ell}}\) 17:if\(\ell<L\), call Level-Construct(\(\ell+1\)). ``` **Algorithm 5**Level-Construct(\(\ell\)) ## 4 Approximation Guarantee Fix any operation \(i\), we want to show that the solution \(S_{L}\) maintained in the data structure at the end of the computation relative to operation \(i\) is a good approximation of the best independent set \(O_{i}\) (of value \(\operatorname{OPT}_{i}\)) in \(V_{i}\). To not overload the notation, we omit the index \(i\) when it is clear from the context, so that \(V\) stands for the elements that were inserted but not deleted from the stream up to operation \(i\) (included) and \(D\) stands for the set of elements deleted from the stream up to operation \(i\) (included). Clearly the set of all the elements arrived up to operation \(i\) is exactly \(V\cup D\). We want to show that \(f(S_{L})\) is a (deterministic) 4-approximation of the independent set in \(V\) with largest value. In the following, we actually we prove that \(f(S_{L})\) is a 4-approximation of something that is at least OPT. Up to operation \(i\) the content of the data structure has changed multiple times, but for the sake of the analysis it is enough to consider a subset of all these modifications. Formally, for each level \(\ell\), denote with \(i_{\ell}\) the last operation that triggered a call of Level-Construct (\(\ell\)) before or at operation \(i\). This may have happened either directly via an insertion or a deletion at level \(\ell\) or indirectly because something was inserted or removed at some level with smaller index. Denote with \(X_{\ell}\) the elements in \(V_{i_{\ell}}\) that were added to \(S_{\ell}\) during the computation and with \(Y_{\ell}\) the elements that were filtered out due failure in the "swapping" test. Formally, \(X_{\ell}=S_{\ell}\setminus S_{\ell-1}\) at the end of the computation relative to operation \(i_{\ell}\), while \(Y_{\ell}\) is made up of all the elements that were in \(A_{\ell}=B_{\ell-1}\cup A_{\ell-1}\) at the beginning of that call of Level-Construct (\(\ell\)), but that were neither passed to the following level nor added to the solution during the repeat loop of that call of Level-Construct. Note that, by definition of \(i_{\ell}\), nothing happens in level \(\ell\) between operations \(i_{\ell}\) and \(i\) besides possibly some basic addition induced by Insertion (line 1) and deletions induced by Deletion (lines 1 and 2), thus sets \(X_{\ell}\), \(Y_{\ell}\) and \(S_{\ell}\) do not change. We start proving that all the \(X_{\ell}\) and \(Y_{\ell}\) are pair-wise disjoint. **Lemma 4.1**.: _All the \(2L+2\) sets \(X_{\ell}\) and \(Y_{\ell}\) are pair-wise disjoint._ Proof.: By definitions, it is immediate to see that \(X_{\ell}\cap Y_{\ell}=\emptyset\) for each level \(0\leq\ell\leq L\). Now, consider any two levels \(\ell\) and \(r\), with \(\ell<r\). Since level \(\ell\) has a smaller index, the last operation \(i_{\ell}\) during whose computation Level-Construct(\(\ell\)) was called is precedent (or equal) to \(i_{r}\), that is the equivalent operation for level \(r\). This means that any element \(e\) in \(X_{\ell}\cup Y_{\ell}\) do not belong to \(X_{r}\cup Y_{r}\): \(e\) was not passed to level \(\ell+1\) (and thus to any larger level, \(r\) included) during the computation at operation \(i_{\ell}\) and, by definition of \(i_{\ell}\) it was not considered (i.e., it did not appear in any call of line 1 of Level-Construct) in any level with larger index, for all the computations between operation \(i_{\ell}\) and operation \(i\) (\(i_{r}\) included!). Denote with \(X\) the (disjoint) union of all the \(X_{\ell}\) and the \(Y_{\ell}\). We prove that \(X\) is a superset of \(V\). **Lemma 4.2**.: \(V\) _is contained in \(X\): for any \(e\in V\) there exists (exactly) one level \(\ell\) such that \(e\in X_{\ell}\) or \(e\in Y_{\ell}\)._ Proof.: When a new element \(e\in V\) is inserted in the stream, it is added to all buffers (line 1 of Insertion) and triggers a call of Level-Construct at some level \(\ell^{*}\) (line 4 of Insertion). Thus it is either added to the solution or filtered out at some level. However, this is not enough to prove the Lemma, as it is possible that that call of Level-Construct is not the last time that element \(e\) is considered. To formally prove the Lemma we introduce two (moving) auxiliary levels \(u_{e}\) and \(d_{e}\) such that the following two invariants hold from the operation in which \(e\) is added onwards (up to operation \(i\), included): * \(e\) belongs to the buffer \(B_{\ell}\) for all \(\ell<d_{e}\) * \(e\) belongs to \(A_{\ell}\) for all \(d_{e}\leq\ell<u_{e}\) * \(e\) belongs to either \(X_{u_{e}}\) or \(Y_{u_{e}}\), for some \(u_{e}\geq d_{e}\). Stated differently, we show that at the end of each operation \(j\) that follows the insertion of \(e\) (included) up to operation \(i\), included, there exist two levels (possibly different for each operation) such that \(a\)), \(b\)) and \(c\)) hold. For the sake of clarity, we omit the dependence of \(u_{e}\) and \(d_{e}\) from \(j\). When element \(e\) is inserted, it triggers Level-Construct at some level (we call \(d_{e}\) such level and note that \(a\)) is respected) and it is either added or filtered out at some other level (we call \(u_{e}\) such level so note that also \(b\)) and \(c\)) are respected). By construction of Level-Construct, it holds that \(u_{e}\geq d_{e}\). Note that \(e\in V\), thus \(e\) is not deleted from the stream before operation \(i\). So we only need to show that any further call of Level-Construct happening between the insertion of \(e\) and operation \(i\) (included) does not affect the invariants. We have three cases. For any Level-Construct that is called for some \(\ell>u_{e}\), nothing changes, as levels \(\ell\leq u_{e}\) are not touched. If Level-Construct is called for some \(\ell<d_{e}\), then element \(e\) belongs to the buffer \(B_{\ell-1}\) (by the invariant \(a\)) and it is then added to either \(X_{u_{e}}\) or \(Y_{u_{e}}\) for some (new) \(u_{e}\geq\ell\). We rename \(d_{e}\) the above \(\ell\), and it is easy to verify that all the invariants still hold. Finally, if Level-Construct is called for some \(d_{e}\leq\ell<u_{e}\), then by invariant \(b\), it holds that \(e\) belongs to \(A_{\ell-1}\), thus it will end up filtered out or added to the solution in some new \(u_{e}\geq\ell\). In this case we do not change \(d_{e}\), and it is easy to see that the invariants still hold. So the three invariants hold during the execution of the entire stream. To conclude the proof we just note that the invariants imply that \(e\) is only contained in either one \(X_{\ell}\) or one \(Y_{\ell}\) for some \(\ell\). There is a natural notion of ordering on the elements of \(X\), induced by the order in which they were considered by the algorithm, i.e. in which they were either added to the solution \(S_{\ell}\) (line 3 of Level-Construct) or filtered out by the recomputation of \(E_{\ell}\) and \(F_{\ell}\) (line 11 of Level-Construct), with ties broken arbitrarily. Call \(\pi\) this ordering. To have a better intuition of \(\pi\), note that it can be split into contiguous intervals, the first corresponding to the elements considered in the first level \(X_{0}\cup Y_{0}\), then in the second \(X_{1}\cup Y_{1}\), and so on. In interval \(\ell\), elements of \(X_{\ell}\cup Y_{\ell}\) are ordered using the same order in which they have been added or filtered out in the last call of Level-Construct (\(\ell\)). The crucial observation is that the solution \(S\) at the end of operation \(i\) is _exactly_ the output of Swapping on \(\pi\). To see why this is the case, consider the story of each element \(e\) arriving in \(\pi\). There are two cases to consider. If \(e\) is in some \(X_{\ell}\), then our algorithm has added it to the candidate solution \(S_{\ell}\) during the operation \(i_{\ell}\) because \(e\) was in either \(E_{\ell}\) or \(F_{\ell}\). Similarly, also Swapping would have added \(e\) to its solution, with the exact same swap. If \(e\) is in \(Y_{\ell}\), then it means that the algorithm has filtered it out during operation \(i_{\ell}\) because it failed to meet the swapping condition: thus it would have been discarded also by Swapping. This implies, by Theorem 2.1, that \(f(S)\) is a \(4\)-approximation to the best independent set in \(X\), which is an upper bound on \(\mathrm{OPT}\) (as it is the best independent set on a larger set). **Theorem 4.3**.: _For any operation \(i\) it holds that the solution \(S^{i}\) output by the algorithm at the end of the computation relative to iteration \(i\) is a deterministic \(4\)-approximation of \(\mathrm{OPT}_{i}\)._ ## 5 Running Time Analysis In this section, we analyze the amortized running time of our algorithm. Recall that, throughout this paper, we refer to running time as the total number of submodular function evaluation plus the number of independent set evaluation of the matroid. We start by showing some of the basic properties of the \(A\) and \(B\) sets. _Observation 5.1_.: For any level \(0\leq\ell\leq L\), at the end of Level-Construct(\(\ell\)), \(|A_{\ell}|<\frac{n}{2^{\ell}}\) and \(|B_{\ell}|=0\). Proof.: It follows directly from Line 16 in Level-Construct that \(A_{\ell}<\frac{n}{2^{\ell}}\), otherwise the loop does not stop. Moreover, Line 2 in Level-Construct is the only place that \(B_{\ell}\) is changed and it is set to empty set. _Observation 5.2_.: For any level \(0\leq\ell\leq L\), during the execution of the algorithm \(|B_{\ell}|\leq\frac{n}{2^{\ell}}\). Proof.: The only place where the size of \(B_{\ell}\) increases is in Line 1 of Insertion, where it increases by at most one. When \(|B_{\ell}|=\frac{n}{2^{\ell}}\), then Level-Construct(\(\ell\)) is called directly from Line 4 of Insertion or indirectly in Line 17 of Level-Construct. In both cases \(|B_{\ell}|=0\) due to Observation 5.1. _Observation 5.3_.: For any level \(0\leq\ell\leq L\), during the execution of the algorithm \(|A_{\ell}|\leq\frac{n}{2^{\ell-2}}\). Proof.: For any level \(\ell\), the cardinality of \(A_{\ell}\) only varies in two cases: when an element in \(A_{\ell}\) is removed from the stream (line 1 of Deletion) or during a call of Level-Construct on level \(\ell\). Since the latter decreases the cardinality of \(A_{\ell}\), we only study the former. When Level-Construct(\(\ell\)) is called, \(A_{\ell}\) is initialized in line 1 and then its cardinality only decreases (Lines 11 and 13). To conclude the proof is then sufficient to prove that, every time \(A_{\ell}\) is initialized in a new call of Level-Construct(\(\ell\)), its cardinality is at most \(\frac{n}{2^{\ell-1}}\). The set \(A_{\ell}\) is initialized with the elements in \(B_{\ell-1}\) and in \(A_{\ell-1}\). We know that \(|B_{\ell-1}|\leq\frac{n}{2^{\ell-1}}\) at any time of the algorithm (Observation 5.2), while the cardinality of \(|A_{\ell-1}|\) did not increase since the end of last call of Level-Construct(\(\ell-1\)). All in all, using the bound in Observation 5.1 we get the desired bound: \[|A_{\ell}|\leq\frac{n}{2^{\ell-1}}+\frac{n}{2^{\ell-1}}=\frac{4n}{2^{\ell}}. \tag{1}\] The size of \(B_{\ell}\) only increases in Line 1 of Insertion, where it increases by at most one. When \(|B_{\ell}|=\frac{n}{2^{\ell}}\), then Level-Construct(\(\ell\)) is called directly from Line 4 of Insertion or indirectly in Line 17 of Level-Construct. In both cases \(|B_{\ell}|=0\) due to Observation 5.1. Before moving to Level-Construct, we show that it is possible to compute the candidate swaps \(s_{e}\) in line 8 in \(O(\log k)\) calls of the independence oracle of the matroid. **Lemma 5.4**.: _For any element \(e\in A_{\ell}\) it is possible to find the candidate swap \(s_{e}\) in line 8 of Level-Construct in \(O(\log k)\) calls of the independence oracle of the matroid._ Proof.: Consider any iteration of the repeat loop in Level-Construct, let \(S\) be the solution, \(A\) the candidate set (we omit the dependence on \(\ell\) for simplicity) and \(e\) any element in it. If \(S+e\in\mathcal{M}\) then \(s_{e}\) is set to the empty set and the claim holds. Otherwise, call \(C\) the set of all elements in \(S\) that can be swapped with \(e\) to obtain an independent set: \[C=\{y\in S\mid S-y+e\in\mathcal{M}\}.\] It is immediate to see that \(s_{e}\in\operatorname*{argmin}_{y\in C}w(y)\). We know that the solution \(S=\{x_{1},x_{2},\ldots,x_{j}\}\) is maintained in decreasing order of weights (resolving ties arbitrarily) and, by the downward closure property of matroids, we can use binary search to find \[i^{*}=\max\{i\mid\{x_{1},\ldots,x_{i-1}\}+e\in\mathcal{M}\}.\] We claim that \(x_{i^{*}}\) is a good choice of \(s_{e}\), i.e., that \(x_{i^{*}}\in\operatorname*{argmin}_{y\in C}w(y)\). First, note that \(x_{i^{*}}\) belongs to \(C\). To see this, consider the set \(R=\{x_{1},\ldots x_{i^{*}-1}\}+e\in\mathcal{M}\), and recursively add element from \(S\) to it while keeping \(R\) independent. By the augmentation property, we know that this is possible until \(|R|=|S|\). A single element remains in \(S\setminus R\) and it has to be \(x_{i^{*}}\), as we know that \(\{x_{1},\ldots,x_{i^{*}}\}+e\) is dependent, thus \(S-x_{i^{*}}+e=R\in\mathcal{M}\). Now, we show that no element in \(C\) can have smaller weight (i.e. larger index) than \(x_{i^{*}}\). Assume toward contradiction that this is the case, i.e. that there is an \(x_{j}\) such that \(S-x_{j}+e\in\mathcal{M}\) and \(j<i^{*}.\) This implies that \(\{x_{1},\ldots x_{j-1}\}+e\) is independent, which contradicts the minimality of \(i^{*}\). **Lemma 5.5**.: _For any level \(0\leq\ell\leq L\), the running time of \(\textsc{Level-Construct}(\ell)\) is \(O(\frac{nk\log\Delta\log k}{2^{\ell}})\)._ Proof.: We prove this Lemma in two steps. First, we control the running time of the non-recursive part of \(\textsc{Level-Construct}\) (\(\ell\)) (i.e., all the algorithm but the recursive call in line 17), then we study, by induction, how these bounds combine recursively. We start proving that, for every level \(\ell\) from \(0\) to \(L\), the running time of the non-recursive part of \(\textsc{Level-Construct}\) (\(\ell\)) is dominated by \(c\frac{nk\log k\log\Delta}{2^{\ell}}\), for some positive constant \(c\). Focus on any level \(\ell\) and consider any call of \(\textsc{Level-Construct}\) (\(\ell\)). The main computation is performed in the repeat loop (Lines 5-16), which is repeated at most \(k\log\Delta\) times (due to the exponential increase of the weight assigned to the elements, the fact we can at most add \(k\) elements without swapping and the definition of \(\Delta\)). In each iteration of the repeat loop, the running time of finding the swap elements in Lines 6-11 is at most order of \(|A_{\ell}|\log k\), which is in \(O(\frac{nk\log k}{2^{\ell}})\) (recall, the cardinality of \(A_{\ell}\) is bounded in Observation 5.3). This concludes the analysis of the non recursive part of \(\textsc{Level-Construct}\): there exists a constant \(c>0\) such that its running time is at most \(c\frac{nk\log k\log\Delta}{2^{\ell}}\), for every level \(\ell\) from \(0\) to \(L\). We now conclude the proof of the Lemma by induction on \(\ell\) from \(L\) to \(0\). More precisely, we show that the (overall) running time of \(\textsc{Level-Construct}(\ell)\) is bounded by \(2c\frac{nk\log k\log\Delta}{2^{\ell}}\) for any level \(0\leq\ell\leq L\). We start considering the base case \(\ell=L\). As there is no recursive call, the inequality immediately descends on the bound on the non-recursive running time of \(\textsc{Level-Construct}\) (\(L\)). For the induction step, assume that the running time of \(\textsc{Level-Construct}(\ell+1)\) is upper bounded by \(2c\frac{nk\log k\log\Delta}{2^{\ell+1}}\) and we show that the same holds for level \(\ell\). This is pretty easy to see: the interval running time of \(\textsc{Level-Construct}\) (\(\ell\)) is at most \(c\frac{nk\log k\log\Delta}{2^{\ell}}\), while the recursive call to \(\textsc{Level-Construct}\) (\(\ell+1\)) has running time \(2c\frac{nk\log k\log\Delta}{2^{\ell+1}}\) by the inductive hypothesis. Summing up these two terms yields the desired result. We have just assessed a deterministic upper bound on the computational complexity of each call of \(\textsc{Level-Construct}\). We now bound the number of times that \(\textsc{Level-Construct}\) is called during the stream of insertions and deletions. To do so, we bound separately the number of times \(\textsc{Level-Construct}\) is directly induced by Deletion or Insertion. **Lemma 5.6**.: _For any level \(0\leq\ell\leq L\), the expected number of times that the \(\textsc{Level-Construct}\) (\(\ell\)) function is called from \(\textsc{Deletion}\) is at most \(2^{\ell+3}k\log\Delta\)._ Proof.: Fix any level \(\ell\). As a first step in this proof we show that the probability that a set of \(\frac{n}{k2^{\ell+1}\log\Delta}\) deletions hit at least one element sampled in Line 13 of \(\textsc{Level-Construct}(\ell)\) is at most \(1/2\) (note, some of the elements sampled may get swapped out in the same \(\textsc{Level-Construct}(\ell)\) execution). The reason is that there are at most \(k\log\Delta\) elements sampled from \(A_{\ell}\). Each of these elements are sampled from a candidate pool of at least \(\frac{n}{2^{\ell}}\) elements. Therefore the probability that any particular deleted element is from the sampled elements is at most \(\frac{k2^{\ell}\log\Delta}{n}\). The claim follows by union bound over all the \(\frac{n}{k2^{\ell+1}\log\Delta}\) deletions. We call a period between two invocation of \(\textsc{Level-Construct}(\ell)\) from Deletion an epoch, and denote with \(N_{\ell}\) the (random) number of such epochs. We call an epoch short if its length is less than \(\frac{n}{k2^{\ell+1}\log\Delta}\) and long otherwise. We denote with \(N_{\ell}^{-}\) and \(N_{\ell}^{+}\) the number of short, respectively long, epochs. Every time we recompute \(\textsc{Level-Construct}(\ell)\), the probability of the next epoch being short is at most \(1/2\). So we have: \[\mathbb{E}\left[N_{\ell}^{-}\right]\leq\frac{1}{2}\mathbb{E}\left[N_{\ell} \right]=\frac{1}{2}\mathbb{E}\left[N_{\ell}^{-}\right]+\frac{1}{2}\mathbb{E} \left[N_{\ell}^{+}\right], \tag{2}\] which implies that \(\mathbb{E}\left[N_{\ell}^{-}\right]\leq\mathbb{E}\left[N_{\ell}^{+}\right].\) We know that the number \(N_{\ell}^{+}\) of long epochs is at most the total number of operations \(2n\), divided by the lower bound on the cardinality of each long epoch, \(\frac{n}{k2^{\ell+1}\log\Delta}\). All in all, \(N_{\ell}^{+}\leq 2^{\ell+2}k\log\Delta\). We are ready to conclude: \[\mathbb{E}\left[N_{\ell}\right]=\mathbb{E}\left[N_{\ell}^{-}\right]+\mathbb{E} \left[N_{\ell}^{+}\right]\leq 2\mathbb{E}\left[N_{\ell}^{+}\right]\leq 2^{\ell+3}k \log\Delta.\qed\] **Lemma 5.7**.: _For any level \(0\leq\ell\leq L\), the number of times that the Level-Construct\((\ell)\) function is called from Insertion is at most \(2^{\ell}\)._ Proof.: The only place where the size of set \(B_{\ell}\) increases is in Line 1 of Insertion, and it increases by at most one per insertion. Moreover, \(B_{\ell}\) is set to the empty set in Line 2 of Level-Construct\((\ell)\). Also there are at most \(n\) insertions and Level-Construct\((\ell)\) is called when the size of \(B_{\ell}\) is equal to \(\frac{n}{2^{\ell}}\). Therefore there are at most \(n\frac{2^{\ell}}{n}=2^{\ell}\) calls to Level-Construct\((\ell)\) from Insertion. **Lemma 5.8**.: _The average running time per operation is \(O(k^{2}\log k\log^{2}\Delta\log n)\), in expectation over the randomness of the algorithm._ Proof.: The running time when an element is inserted or deleted is \(O(L)=O(\log n)\) beside the calls made to Level-Construct. In what follows we focus on the total running time spent in Level-Construct. There are two places that Level-Construct\((\ell)\) can be called from (beside the recursion in Line 17 of Level-Construct): Insertion and Deletion. By comparing Lemma 5.7 and Lemma 5.6 it results that the number of calls induced by Deletion dominates those induced by Insertion. So we only focus on the former term. Let \(c\) be the constant as in the analysis of Lemma 5.5, we bound the total expected running time by \[2\sum_{0\leq\ell\leq L}k2^{\ell+3}\log\Delta\cdot\left(2c\cdot \frac{nk\log k\log\Delta}{2^{\ell}}\right) =32c\sum_{0\leq\ell\leq L}nk^{2}\log k\log^{2}\Delta\] \[=32c\cdot\left(nk^{2}\log k\log^{2}\Delta\log n\right).\qed\] ## 6 Putting it Together The results in the previous sections hold under the assumption that the algorithm designer knows the number of insertion and deletions (denoted by \(n\)) in advance. In this section we present a well-known tool that enables our algorithm to run without this assumption. We simply start by \(n=1\), and whenever the number of insertions and deletions reaches to \(n\), we restart algorithm and double the value of \(n\). Therefore, if the total operations is \(m\), then largest value \(n\) that we use is the smallest power of \(2\) bigger than \(m\), which is at most \(2m\). Combining with Lemma 5.8 we get that the total running time per operation is (up to multiplicative constants) \[\sum_{1\leq n_{0}\leq\log n}k^{2}\log k\log^{2}\Delta n_{0}=k^{2}\log k\log^{2 }\Delta\log n^{2}=k^{2}\log k\log^{2}\Delta\log^{2}m\,. \tag{3}\] Combining Equation (3) and Theorem 4.3, we have the main result of the paper. **Theorem 6.1**.: _Our algorithm yields a \(4\)-approximation to the fully dynamic monotone submodular maximization problem with matroid constraint and exhibit an \(O(k^{2}\log k\log^{2}\Delta\log^{2}n)\) expected amortized running time._ In Appendix B we also explain how one can avoid the dependency on \(\Delta\) which requires: (i) designing and analyze a new algorithm that combines swapping and thresholding (presented in Appendix A) and (ii) apply standard techniques to guess the value of OPT and run multiple copies of the algorithm at the same time. **Corollary 6.2**.: _For any constant \(\varepsilon>0\), there exists a \((4+O(\varepsilon))\)-approximation to the fully dynamic monotone submodular maximization problem with matroid constraint that exhibits an \(O\left(\frac{k^{2}}{\varepsilon}\log k\log^{2}n\log^{3}\frac{k}{\varepsilon}\right)\) expected amortized running time._ ## 7 Conclusions and Future Directions In this paper we design the first efficient algorithm for fully-dynamic submodular maximization with matroid constraint. An interesting open question stems immediately from our result: is it possible to reduce the amortized running to depend only poly-logarithmically in \(k\) (currently it is \(\tilde{O}(k^{2})\))? In this paper we focus on the crucial worst-case paradigm, constructing an algorithm whose guarantees are robust to any (oblivious) adversary that generates the stream of insertions and deletions. An interesting open problem of research is to study beyond-worst case analysis, when it is natural to assume some "non-adversarial" structure on the stream, similarly to what has been done, for instance, in the random-order arrival model for insertion-only streams (Norouzi-Fard et al., 2018; Liu et al., 2021; Feldman et al., 2022). ## Acknowledgements The work of Federico Fusco is partially supported by ERC Advanced Grant 788893 AMDROMA "Algorithmic and Mechanism Design Research in Online Markets", PNRR MUR project PE0000013-FAIR", and PNRR MUR project IR0000013-SoBigData.it. Part of this work was done while Federico was an intern at Google Research, hosted by Paul Dutting.
2309.00144
Multi Agent DeepRL based Joint Power and Subchannel Allocation in IAB networks
Integrated Access and Backhauling (IAB) is a viable approach for meeting the unprecedented need for higher data rates of future generations, acting as a cost-effective alternative to dense fiber-wired links. The design of such networks with constraints usually results in an optimization problem of non-convex and combinatorial nature. Under those situations, it is challenging to obtain an optimal strategy for the joint Subchannel Allocation and Power Allocation (SAPA) problem. In this paper, we develop a multi-agent Deep Reinforcement Learning (DeepRL) based framework for joint optimization of power and subchannel allocation in an IAB network to maximize the downlink data rate. SAPA using DDQN (Double Deep Q-Learning Network) can handle computationally expensive problems with huge action spaces associated with multiple users and nodes. Unlike the conventional methods such as game theory, fractional programming, and convex optimization, which in practice demand more and more accurate network information, the multi-agent DeepRL approach requires less environment network information. Simulation results show the proposed scheme's promising performance when compared with baseline (Deep Q-Learning Network and Random) schemes.
Lakshya Jagadish, Banashree Sarma, R. Manivasakan
2023-08-31T21:30:25Z
http://arxiv.org/abs/2309.00144v1
# Multi Agent DeepRL based Joint Power and Subchannel Allocation in IAB networks ###### Abstract Integrated Access and Backhauling (IAB) is a viable approach for meeting the unprecedented need for higher data rates of future generations, acting as a cost-effective alternative to dense fiber-wired links. The design of such networks with constraints usually results in an optimization problem of non-convex and combinatorial nature. Under those situations, it is challenging to obtain an optimal strategy for the joint Subchannel Allocation and Power Allocation (SAPA) problem. In this paper, we develop a multi-agent Deep Reinforcement Learning (DeepRL) based framework for joint optimization of power and subchannel allocation in an IAB network to maximize the downlink data rate. SAPA using DDQN (Double Deep Q-Learning Network) can handle computationally expensive problems with huge action spaces associated with multiple users and nodes. Unlike the conventional methods such as game theory, fractional programming, and convex optimization, which in practice demand more and more accurate network information, the multi-agent DeepRL approach requires less environment network information. Simulation results show the proposed scheme's promising performance when compared with baseline (Deep Q-Learning Network and Random) schemes. Integrated Access and Backhaul, Deep Reinforcement Learning, Power Allocation, Subchannel Allocation ## I Introduction With data traffic growing exponentially and the demand for more capacity, mm-Wave seems to be a viable solution. But owing to the higher path loss and attenuation of mm-Wave, cell size must be reduced [11]. While network densification may be a viable option for mm-Wave which also meets the goal of an ultra-dense network, setting up wired backhaul links is not economically feasible. In the mm-wave networks using wireless backhauling IAB nodes can be installed fairly easily compared to optical fiber networks. Furthermore, mm-wave wireless backhaul can boost network capacity and spectrum efficiency. Since the same spectrum is used for access and backhaul links, an efficient resource allocation is warranted for such a system. Although the resource allocation problem for an IAB system has been studied extensively, most previous work has used complex conventional ways to solve it where the full or partial CSI (Channel State Information) is required. Much of the previous work deals with solving optimization problems with high computational complexity. Because of the unpredictability of future wireless environments, rule-based decision-making that selects decisions directly from training may not be ideal. As a result, it may not be effective to design a priori cost functions and then solve optimal control problems in real time. DeepRL is being prominently used for solving such problems in 5G [12]. The approaches can be centralized or decentralized. Since the information of all the IAB nodes should be reported to the central controller for solving the resource allocation optimization problem, the transmission overhead is large. It grows dramatically with the size of the network, which prevents these methods from scaling to large networks. Therefore, in this paper, we focus on decentralized resource allocation approaches. Advancements in the field of Reinforcement Learning (RL) have provided the opportunity of finding effective solutions to such resource allocation and optimization problems using Q-Learning, deep Q-learning (DQN), and double deep Q-learning (DDQN). Q-learning is the most basic architecture. However, it requires the maintenance of huge tables and leads to slower convergence in large state-action spaces. DQN uses a deep neural network (DNN) as a functional approximation for the Q-Table of the state-action pairs instead of storing Q-values for each state-action pair. This can lead to faster convergence to the optimal solution but can lead to overfitting, which can depreciate the performance of the trained model. DDQN decomposes the max of objective function operation (downlink data rate in our case) for the given state in the target network into action selection and evaluation. Therefore the greedy policy is evaluated according to the online network (evaluation), using the target network to estimate its value (selection). The weights of the target network are periodically updated according to the weights of the online network. To overcome the drawbacks of the traditional totally centralized and distributed deep reinforcement learning-based resource allocation approaches, we propose a multi-agent deep Q-Learning algorithm with decentralized learning and centralized execution to solve the formulated optimization problem. This problem is formulated as a mixed integer non-linear optimization problem, intending to maximize the downlink sum-rate throughput. ### _Related Work_ Many studies have been done on resource allocation in wireless backhaul networks. [13] dynamically allocates the spectrum using an auction-based design of the system. The main limitations of these conventional approaches are computational complexities apart from the need for complete channel state information, which might not be feasible for an ultra-dense network. The downlink sum rate or network
2309.08518
A generalization of the dual immaculate quasisymmetric functions in partially commutative variables
We define a new pair of dual bases that generalize the immaculate and dual immaculate bases to the colored algebras $QSym_A$ and $NSym_A$. The colored dual immaculate functions are defined combinatorially via tableaux, and we present results on their Hopf algebra structure, expansions to and from other bases, and skew functions. For the colored immaculate functions, defined using creation operators, we study expansions to and from other bases and provide a right Pieri rule. This includes a combinatorial method for expanding colored immaculate functions into the colored ribbon basis that specializes to a new analogous result in the uncolored case. We use the same methods to define colored generalizations of the row-strict immaculate and row-strict dual immaculate functions with similar results.
Spencer Daugherty
2023-09-15T16:26:47Z
http://arxiv.org/abs/2309.08518v2
# A generalization of the dual immaculate quasisymmetric functions in partially commutative variables ###### Abstract. We define a new pair of dual bases that generalize the immaculate and dual immaculate bases to the colored algebras \(QSym_{A}\) and \(NSym_{A}\). The colored dual immaculate functions are defined combinatorially via tableaux, and we present results on their Hopf algebra structure, expansions to and from other bases, and skew functions. This includes a combinatorial method for expanding the colored fundamental quasisymmetric functions into the colored dual immaculate basis that specializes to a new analogous result in the uncolored case. For the colored immaculate functions, defined using creation operators, we study expansions to and from other bases and provide a right Pieri rule. We use the same methods to define colored generalizations of the row-strict immaculate and row-strict dual immaculate functions with similar results. ## 1. Introduction The quasisymmetric functions, introduced by Gessel [13], and the non-commutative symmetric functions, introduced by Gelfand, Krob, Lascoux, Leclerc, Retakh, and Thibon [12], are generalizations of the symmetric functions with rich theory and importance in algebraic combinatorics. Their algebras, QSym and NSym, are dual Hopf algebras that also appear in representation theory, algebraic geometry, and category theory. A significant amount of work has been done to find quasisymmetric or non-commutative analogues of symmetric functions objects, specifically of the Schur basis. This includes the development of the quasisymmetric Schur basis, row-strict quasisymmetric Schur basis, dual immaculate basis, and row-strict dual immaculate basis in QSym and the dual quasisymmetric Schur basis, the row-strict dual quasisymmetric Schur basis, the shin basis, the immaculate basis, and the row-strict immaculate basis in NSym [3, 9, 17, 23]. The immaculate functions, for example, are Schur-like in that they map to the Schur functions under the forgetful map from \(NSym\) to \(Sym\), and they have a Jacobi-Trudi rule, a right Pieri rule, and a creation operator construction. The dual immaculate functions on the other hand resemble the combinatorial definition of the Schur functions using tableaux. The primary goal of this paper is to define and study generalizations of the immaculate and dual immaculate functions in the colored algebras \(QSym_{A}\) and \(NSym_{A}\) introduced by Doliwa in [11]. The isomorphism between \(NSym\) and a subalgebra of rooted trees led to a generalization colored of \(NSym\), called \(NSym_{A}\), that is isomorphic to a larger subalgebra of colored rooted trees. The colored algebra \(QSym_{A}\) is defined dually using partially commutative colored variables. The initial goal of these generalizations was to extend the study of the relationship between symmetric functions and integrable systems to a non-commutative setting which is of growing interest in mathematical physics [10, 19]. Additionally, the Hopf algebra of rooted trees has various applications in the field of symbolic computation [16]. Doliwa defines the algebraic structure of \(QSym_{A}\) and \(NSym_{A}\) and analogues to some classic bases. We bring the study of Schur-like bases to this space to continue developing its theory. Studying the lift of the dual immaculate and immaculate functions to the colored quasisymmetric functions and colored non-commutative symmetric functions also allows us to obtain results on the original bases. The introduction of colored variables reduces a significant amount of cancellation and allows for the study of various patterns in more detail. Any results on the colored dual immaculate functions or the colored immaculate functions have immediate implications for their original counterparts. Section 2 of this paper provides background on the symmetric functions, Hopf algebras, the quasisymmetric functions, and the non-commutative symmetric functions. We then review the immaculate and dual immaculate functions, the skew dual immaculate functions, and the immaculate poset. Section 3 introduces Adam Doliwa's colored generalizations of \(QSym\) and \(NSym\). We review their Hopf algebra structure as well as the previously defined bases. In Section 4, we define the colored dual immaculate functions by introducing a colored generalization of immaculate tableaux. We then give expansions of the colored dual immaculate functions into the colored monomial and colored fundamental bases using the combinatorics of colored immaculate tableaux. Further, we provide an expansion of the colored fundamental functions into the colored dual immaculate basis defined combinatorially by counting paths in a graph related to standard colored immaculate tableaux. This result specializes to a new analogous result on the fundamental and dual immaculate bases in \(QSym\). Section 5 defines the colored immaculate functions using a colored generalization of Bernstein creation operators. We prove a right Pieri rule for the colored immaculate basis and give expansions of the colored complete homogeneous and colored ribbon bases into the colored immaculate basis. In Section 6, we introduce a partially ordered set on sentences in the style of the immaculate poset and skew colored immaculate tableaux. We use this poset to define skew colored dual immaculate functions and find results related to the structure constants of the colored immaculate basis. In section 7, we first review the row-strict immaculate and row-strict dual immaculate functions, then we define the colored row-strict immaculate and colored row-strict dual immaculate functions. These two bases are related to the immaculate and dual immaculate bases by an involution on sentences, which we use to translate our results from previous sections to the row-strict case. ### Acknowledgements The author is grateful to Laura Colmenarejo and Sarah Mason for their support and generative feedback, and to Nantel Bergeron, Sheila Sundaram, and Aaron Lauve for helpful conversations. ## 2. Background A _partition_ of a positive integer \(n\), written \(\lambda\vdash n\), is a sequence of positive integers \(\lambda=(\lambda_{1},\ldots,\lambda_{k})\) such that \(\lambda_{1}\geq\ldots\geq\lambda_{k}\) and \(\sum_{i}\lambda_{i}=n\). The _length_ of a partition \(\lambda=(\lambda_{1},\ldots,\lambda_{k})\) is the number of parts, \(\ell(\lambda)=k\), and the _size_ of a partition is the sum of its parts, \(|\lambda|=\sum_{i}\lambda_{i}\). A _composition_ of a positive integer \(n\), written \(\alpha\vDash n\), is a sequence of positive integers \(\alpha=(\alpha_{1},\ldots,\alpha_{k})\) such that \(\sum\alpha_{i}=n\). The _length_ of a composition \(\alpha=(\alpha_{1},\ldots,\alpha_{k})\) is the number of parts, \(\ell(\alpha)=k\), and the _size_ of a composition is the sum of its parts, \(|\alpha|=\sum_{i}\alpha_{i}\). A _weak composition_ is a composition that allows zeroes as entries. If \(\beta\) is a weak composition then \(\tilde{\beta}\), called the _flattening_[2] of \(\beta\), is the composition that results from removing all \(0\)'s from \(\beta\). The length of a weak composition is also its number of parts, although it is often implicitly assumed that there are infinitely many zeroes at the end of any weak composition. **Example 2.1**.: The partition \(\lambda=(3,2,1,1)\) has size \(|\lambda|=7\) and length \(\ell(\lambda)=4\). The composition \(\alpha=(2,1,3)\) has size \(|\alpha|=6\) and length \(\ell(\alpha)=3\). The flattening of the weak composition \(\beta=(0,1,1,0,2)\) is \(\tilde{\beta}=(1,1,2)\). The _Young diagram_ of a partition \(\lambda=(\lambda_{1},\ldots,\lambda_{k})\) is a left-justified array of boxes such that row \(i\) has \(\lambda_{i}\) boxes. Following the English convention, the top row is considered to be row \(1\). The _composition diagram_ of a composition \(\alpha=(\alpha_{1},\ldots,\alpha_{k})\) is a left-justified array of boxes such that row \(i\) has \(\alpha_{i}\) boxes. This only differs from a Young diagram in that the number of boxes in each row of a Young diagram must weakly decrease from top to bottom, but there is no such restriction for composition diagrams. Let \(\alpha=(\alpha_{1},\ldots,\alpha_{k})\) and \(\beta=(\beta_{1},\ldots,\beta_{j})\) be compositions such that \(j<k\) and \(\beta_{i}<\alpha_{i}\) for \(1\leq i\leq j\). The _skew shape_\(\alpha/\beta\) is a composition diagram of shape \(\alpha\) where the first \(\beta_{i}\) boxes in the \(i^{\text{th}}\) row are removed for \(1\leq i\leq j\). We represent this removal by shading in the removed boxes. **Example 2.2**.: Let \(\lambda=(3,2,1,1)\), \(\alpha=(2,1,3)\), and \(\beta=(1,1,2)\). Then the Young diagram of \(\lambda\), the composition diagram of \(\alpha\), and the skew shape \(\alpha/\beta\) respectively are: Let \(\alpha=(\alpha_{1},\ldots,\alpha_{k})\) and \(\beta=(\beta_{1},\ldots,\beta_{j})\) be two compositions. Under the _refinement order_\(\preceq\) on compositions of size \(n\), we say \(\alpha\preceq\beta\) if and only if \(\{\beta_{1},\beta_{1}+\beta_{2},\ldots,\beta_{1}+\cdots+\beta_{j}=n\}\subseteq \{\alpha_{1},\alpha_{1}+\alpha_{2},\ldots,\alpha_{1}+\cdots+\alpha_{k}=n\}\). Under the _lexicographic order_\(\leq_{\ell}\) on compositions, \(\alpha\leq_{\ell}\beta\) if and only if \(\alpha_{i}<\beta_{i}\) where \(i\) is the first positive integer such that \(\alpha_{i}\neq\beta_{i}\) where \(\alpha_{i}=0\) if \(i>k\) and \(\beta_{i}=0\) if \(i>j\). Under the _reverse lexicographic order_\(\leq_{r\ell}\) on compositions, \(\alpha\leq_{r\ell}\beta\) if and only if \(\alpha_{i}>\beta_{i}\) where \(i\) is the smallest positive integer such that \(\alpha_{i}\neq\beta_{i}\). Note that, in the last two orders, if such an \(i\) does not exist then \(\alpha=\beta\). Under _dominance order_\(\subseteq\) on compositions, we say \(\alpha\subseteq\beta\) if and only if \(k\leq j\) and \(\alpha_{i}\leq\beta_{i}\) for \(1\leq i\leq k\). **Example 2.3**.: We have the following chains in the corresponding orders: 1. Under the refinement order, \((1,1,1,1)\preceq(1,2,1)\preceq(1,3)\preceq(4)\). 2. Under the lexicographic order, \((1,2,3)\leq_{\ell}(1,3,2)\leq_{\ell}(2,1,3)\leq_{\ell}(2,3,1)\leq_{\ell}(3,1,2)\leq_{\ell}(3,2,1)\). 3. Under the reverse lexicographic order, \((3,2,1)\leq_{r\ell}(3,1,2)\leq_{r\ell}(2,3,1)\leq_{r\ell}(2,1,3)\leq_{r\ell}( 1,3,2)\leq_{r\ell}(1,2,3)\). 4. Under the dominance order, \((1,1,1)\subseteq(2,1,1,1)\subseteq(2,3,1,2)\). There is a natural bijection between ordered subsets of \([n-1]=\{1,2,\ldots,n-1\}\) and compositions of \(n\). For an ordered set \(S=\{s_{1},\ldots,s_{k}\}\subseteq\{[n-1]\}\), \(comp(S)=(s_{1},s_{2}-s_{1},\ldots,s_{k}-s_{k-1},n-s_{k})\) and for a composition \(\alpha=(\alpha_{1},\ldots,\alpha_{j})\), \(set(\alpha)=\{\alpha_{1},\alpha_{1}+\alpha_{2},\ldots,\alpha_{1}+\alpha_{2}+ \ldots+\alpha_{j-1}\}\). **Example 2.4**.: For \(n=8\), let \(S=\{2,3,6,7\}\) and \(\alpha=(1,2,1,4).\) Then \(comp(S)=(2,1,3,1,1)\) and \(set(\alpha)=\{1,3,4\}\). For a positive integer \(s\) and compositions \(\alpha\models n\) and \(\beta\models n+s\), we write \(\alpha\subset_{s}\beta\) if \(\alpha_{j}\leq\beta_{j}\) for all \(1\leq j\leq\ell(\alpha)\), and \(\ell(\beta)\leq\ell(\alpha)+1\). This notation comes from [3] and \(\subset_{1}\) constitutes a partial order on compositions. **Example 2.5**.: The compositions \(\beta\) for which \((1,2)\subset_{2}\beta\) are \[(1,2)\subset_{2}(2,3)\text{ and }(1,2)\subset_{2}(2,2,1)\text{ and }(1,2) \subset_{2}(1,3,1).\] A _permutation_\(\omega\) of a set is a bijection from the set to itself. The permutation \(\omega\) of \([n]\) is written in one-line notation as \(\omega(1)\omega(2)\cdots\omega(n)\). **Example 2.6**.: The permutation \(312\) maps \(1\to 3\), \(2\to 1\), and \(3\to 2\). For any set \(I\), the _Kroenecker delta_ is the function defined for \(i,j\in I\) as \[\delta_{i,j}=\begin{cases}1&\text{ if }i=j\\ 0&\text{ if }i\neq j\end{cases}\] ### The symmetric functions Let \(x=(x_{1},x_{2},\ldots)\) and \(c_{\alpha}\in\mathbb{Q}\). A _symmetric function_\(f(x)\) with rational coefficients is a formal power series \(f(x)=\sum_{\alpha}c_{\alpha}x^{\alpha}\) where \(\alpha\) is a weak composition of a positive integer, \(x^{\alpha}=x_{1}^{\alpha_{1}}\ldots x_{k}^{\alpha_{k}}\), and \(f(x_{\omega(1)},x_{\omega(2),\ldots})=f(x_{1},x_{2},\ldots)\) for all permutations \(\omega\) of \(\mathbb{Z}_{>0}\). **Example 2.7**.: The following function \(f(x)\) is a symmetric function: \[f(x)=x_{1}^{2}x_{2}^{3}x_{3}+x_{1}^{2}x_{3}^{3}x_{2}+x_{2}^{2}x_{1}^{3}x_{3}+x _{2}^{2}x_{3}^{3}x_{1}+x_{3}^{2}x_{1}^{3}x_{2}+x_{3}^{2}x_{2}^{3}x_{1}+\ldots +x_{4}^{2}x_{5}^{3}x_{7}+\ldots.\] The algebra of symmetric functions is denoted \(Sym\), and we take \(\mathbb{Q}\) as our base field unless otherwise specified. \(Sym\) has many bases with various applications and combinatorial importance, but we limit ourselves to defining the Schur basis here. See [25] for more background on symmetric functions. For a partition \(\lambda\vdash n\), a _semistandard Young tableau_ (SSYT) of shape \(\lambda\) is a filling of the Young diagram of \(\lambda\) with positive integers such that the numbers are weakly increasing from left to right in the rows and strictly increasing from top to bottom in the columns. The _size_ of an SSYT is its number of boxes, \(n=\sum_{i}\lambda_{i}\), and its _type_ is a weak composition encoding the number of boxes filled with each integer. We write \(type(T)=\beta=(\beta_{1},\ldots,\beta_{j})\) if \(T\) has \(\beta_{i}\) boxes containing an \(i\) for all \(i\in[j]\). Note that "type" is also referred to as "content" in the literature. A _standard Young tableau_ (SYT) of size \(n\) is a Young tableau in which the numbers in \([n]\) each appear exactly once. A SSYT \(T\) of type \(\beta=(\beta_{1},\ldots,\beta_{k})\) is associated with the monomial \(x^{T}=x_{1}^{\beta_{1}}\cdots x_{k}^{\beta_{k}}\). **Example 2.8**.: The semistandard Young tableaux of shape \((2,2)\) with entries in \(\{1,2,3\}\) and their associated monomials are: \[\begin{array}{c}\framebox{$1$}\\ \framebox{$2$}\\ x_{1}^{2}x_{2}^{2}\\ \end{array}\qquad\begin{array}{c}\framebox{$1$}\\ \framebox{$2$}\\ \framebox{$2$}\\ x_{1}^{2}x_{2}x_{3}\\ \end{array}\qquad\begin{array}{c}\framebox{$1$}\\ \framebox{$3$}\\ \framebox{$2$}\\ x_{1}x_{2}^{2}x_{3}\\ \end{array}\qquad\begin{array}{c}\framebox{$1$}\\ \framebox{$2$}\\ \framebox{$3$}\\ \framebox{$2$}\\ x_{1}x_{2}^{2}x_{3}\\ \end{array}\qquad\begin{array}{c}\framebox{$1$}\\ \framebox{$2$}\\ \framebox{$3$}\\ \framebox{$2$}\\ x_{1}x_{2}^{2}x_{3}\\ \end{array}\qquad\begin{array}{c}\framebox{$2$}\\ \framebox{$3$}\\ \framebox{$2$}\\ \framebox{$2$}\\ x_{1}x_{2}^{2}x_{3}\\ \end{array}\] The standard Young tableaux of shape \((2,2)\), both of type \((1,1,1,1)\), are: \[\begin{array}{|c|c|}\hline 1&2\\ \hline 3&4\\ \hline\end{array}\qquad\begin{array}{|c|c|}\hline 1&3\\ \hline 2&4\\ \hline\end{array}\] **Definition 2.9**.: For a partition \(\lambda\), the Schur symmetric function is defined as \[s_{\lambda}=\sum_{T}x^{T},\] where the sum runs over all semistandard Young tableaux \(T\) of shape \(\lambda\) with entries in \(\mathbb{Z}_{>0}\). These functions form a basis of \(Sym\)[25]. **Remark 2.10**.: The _Schur polynomials_ are defined over finitely many variables \(s_{\lambda}(x_{1},\ldots,x_{n})\) and correspond to tableaux filled only with integers in \([n]\). This paper deals only with functions in infinitely many variables, but many results restrict to polynomials. The Schur functions can be defined in numerous other ways including via a Jacobi Trudi formula or with Bernstein creation operators. One of their most important properties is that they are the characters of irreducible representations of the general linear group [25]. ### Hopf algebras Hopf algebras are widespread in combinatorics and other fields with notable examples including \(Sym\), \(QSym\), and \(NSym\). We provide a brief overview of the structures needed for our purposes. See [11, 15] for more details. **Definition 2.11**.: For a field \(\Bbbk\) of characteristic zero, a _Hopf algebra_\((\mathcal{H},\mu,\Delta,\eta,\epsilon)\), often simply denoted \(\mathcal{H}\), is a bialgebra consisting of: 1. an associative algebra \((\mathcal{H},\mu,\eta)\), where \(\mu:\mathcal{H}\otimes\mathcal{H}\rightarrow\mathcal{H}\) is \(\Bbbk\)-linear multiplication and \(\eta:\Bbbk\rightarrow\mathcal{H}\) is a \(\Bbbk\)-linear unital algebra morphism, satisfying the following commutative diagrams: \[\begin{array}{ Two bases \(\{a_{i}\}_{i\in I}\) and \(\{b_{i}\}_{i\in I}\) of \(\mathcal{A}\) and \(\mathcal{B}\) respectively are _dual bases_ if and only if \(\langle a_{i},b_{j}\rangle=\delta_{i,j}\). **Example 2.13**.: \(Sym\) is a self-dual Hopf algebra with the inner product \(\langle s_{\lambda},s_{\mu}\rangle=\delta_{\lambda,\mu}\), meaning that the Schur basis is dual to itself. The following result gives a relation for the change of bases between dual bases. **Proposition 2.14**.: _[_18_]_ _Let \(\mathcal{A}\) and \(\mathcal{B}\) be dually paired algebras and let \(\{a_{i}\}\) be a basis of \(\mathcal{A}\). A basis \(\{b_{i}\}\) of \(\mathcal{B}\) is the unique basis that is dual to \(\{a_{i}\}\) if and only if the following relationship holds for any pair of dual bases \(\{c_{i}\}\) in \(\mathcal{A}\) and \(\{d_{i}\}\) in \(\mathcal{B}\):_ \[a_{i}=\sum_{j}k_{i,j}c_{j}\quad\text{and}\quad\ d_{j}=\sum_{i}k_{i,j}b_{i}.\] Next, we have a relation for the coefficient of the product and the coproduct of dual bases. **Proposition 2.15**.: _[_15_]_ _The coproduct of the basis \(\{b_{i}\}_{i\in I}\) in \(\mathcal{B}\) is uniquely defined by the product of its dual basis \(\{a_{i}\}_{i\in I}\) in \(\mathcal{A}\) in the following way:_ \[a_{j}a_{k}=\sum_{i\in I}c^{i}_{j,k}a_{i}\quad\Longleftrightarrow\quad\Delta( b_{i})=\sum_{(j,k)\in I\times I}c^{i}_{j,k}b_{j}\otimes b_{k}.\] _Further, \(\Delta:\mathcal{B}\to\mathcal{B}\otimes\mathcal{B}\) is an algebra homomorphism._ **Example 2.16**.: For partitions \(\lambda\) and \(\mu\), the product of Schur functions in \(Sym\) is given by \(s_{\lambda}s_{\mu}=\sum_{\nu}c^{\nu}_{\lambda,\mu}s_{\nu}\) where \(c^{\nu}_{\lambda,\mu}\) are the Littlewood Richardson coefficients. For a partition \(\nu\), the coproduct on Schur functions is given by \(\Delta(s_{\nu})=\sum_{\lambda,\mu}c^{\nu}_{\lambda,\mu}s_{\lambda}\otimes s_{\mu}\) by Proposition 2.15. ### Quasisymmetric functions Let \(x=(x_{1},x_{2},\ldots)\) and \(b_{\alpha}\in\mathbb{Q}\). A _quasisymmetric function_\(f(x)\) with rational coefficients is a formal power series of the form \[f(x)=\sum_{\alpha}b_{\alpha}x^{\alpha},\] where 1. \(\alpha\) is a weak composition of a positive integer, 2. \(x^{\alpha}=x_{1}^{\alpha_{1}}\ldots x_{k}^{\alpha_{k}}\), and 3. the coefficients of monomials \(x_{i_{1}}^{a_{1}}\ldots x_{i_{k}}^{a_{k}}\) and \(x_{j_{1}}^{a_{1}}\ldots x_{j_{k}}^{a_{k}}\) are equal if \(i_{1}<\ldots<i_{k}\) and \(j_{1}<\ldots<j_{k}\). We define the two most common bases of \(Qsym\). Given a composition \(\alpha\), the _monomial quasisymmetric function_\(M_{\alpha}\) is defined as \[M_{\alpha}=\sum_{i_{1}<\ldots<i_{k}}x_{i_{1}}^{\alpha_{1}}\ldots x_{i_{k}}^{ \alpha_{k}},\] where the sum runs over strictly increasing sequences of \(k\) positive integers \(i_{1},\ldots,i_{k}\in\mathbb{Z}_{>0}\). The _fundamental quasisymmetric function_\(F_{\alpha}\) is defined as \[F_{\alpha}=\sum_{\beta\preceq\alpha}M_{\beta},\] where the sum runs over weakly increasing sequences of \(k\) positive integers \(i_{1},\ldots,i_{k}\in\mathbb{Z}_{>0}\). The fundamental functions are also denoted \(L_{\alpha}\) in the literature [25]. **Example 2.17**.: The monomial quasisymmetric function indexed by \((2,1)\) is \[M_{(2,1)}=\sum_{i<j}x_{i}^{2}x_{j}=x_{1}^{2}x_{2}+x_{1}^{2}x_{3}+\ldots+x_{2}^ {2}x_{3}+x_{2}^{2}x_{4}+\ldots+x_{3}^{2}x_{4}+x_{3}^{2}x_{5}+\ldots.\] The expansion of \(F_{(3)}\) into the monomial basis is \[F_{(3)}=M_{(3)}+M_{(2,1)}+M_{(1,2)}+M_{(1,1,1)}.\] The algebra of quasisymmetric functions, denoted \(QSym\), admits a Hopf algebra structure. The monomial basis inherits its product and coproduct from the quasisuffle and concatenation operations on compositions. The _quasisuffle_\(\mathop{\hbox{\vbox{\hrule width 100 \hbox{\vrule width 1px \kern 5.0pt\vrule width 0.4pt height 0.4pt width 0.4pt depth 0.0pt\kern-0.4pt\vrule width 0.4pt hei ght 0. same composition may appear multiple times in the quasishuffle. Multiplication of monomial functions is given by \[M_{\alpha}M_{\beta}=\sum_{\gamma}M_{\gamma},\] where \(\gamma\) is a summand in \(\alpha\)\(\begin{array}{c}Q\\ \hline\end{array}\)\(\beta\) with multiplicity. Comultiplication of the monomial functions is given by \[\Delta(M_{\alpha})=\sum_{\beta\cdot\gamma=\alpha}M_{\beta}\otimes M_{\gamma},\] where the sum runs over all compositions \(\beta,\gamma\) such that \(\beta\cdot\gamma=\alpha\). **Example 2.18**.: The following equations show the product and coproduct on monomial quasisymmetric functions expanded in terms of the monomial basis: \[M_{(2,1)}M_{(1)}=2M_{(2,1,1)}+M_{(1,2,1)}+M_{(2,2)}+M_{(3,1)},\] \[\Delta(M_{(1,2,1)})=1\otimes M_{(1,2,1)}+M_{(1)}\otimes M_{(2,1)} +M_{(1,2)}\otimes M_{(1)}+M_{(1,2,1)}\otimes 1.\] For more details on quasisymmetric functions see [22]. ### Non-commutative symmetric functions The algebra of _non-commutative symmetric functions_, written \(NSym\), is the Hopf algebra dual to \(Qsym\). \(NSym\) can be defined as the algebra with generators \(\{H_{1},H_{2},\ldots\}\) and no relations, that is \(NSym=\mathbb{Q}\left\langle H_{1},H_{2},\ldots\right\rangle.\) Given a composition \(\alpha=(\alpha_{1},\ldots,\alpha_{k})\), we define \(H_{\alpha}=H_{\alpha_{1}}H_{\alpha_{2}}\ldots H_{\alpha_{k}}\). Then, the set \(\{H_{\alpha}\}_{\alpha}\) forms a basis of \(NSym\) called the _complete homogeneous basis_. \(NSym\) and \(QSym\) are dually paired by the inner product defined by \(\left\langle H_{\alpha},M_{\beta}\right\rangle=\delta_{\alpha,\beta}\) for all compositions \(\alpha,\beta\). Multiplication and comultiplication in \(NSym\) are defined for the complete homogeneous functions as: \[H_{\alpha}H_{\beta}=H_{\alpha\cdot\beta}\qquad\text{and}\qquad\Delta(H_{ \alpha})=\sum_{(\beta,\gamma)}H_{\beta}\otimes H_{\gamma},\] where \(\beta\) and \(\gamma\) are compositions such that \(\alpha\) can be obtained from \(\beta\)\(\begin{array}{c}Q\\ \hline\end{array}\)\(\gamma\). For a composition \(\alpha\), the _ribbon non-commutative symmetric function_ is defined as \[R_{\alpha}=\sum_{\beta\succeq\alpha}(-1)^{\ell(a)-\ell(\beta)}H_{\beta}.\] The ribbon functions are a basis of \(NSym\) dual to the fundamental basis of \(QSym\), meaning \(\left\langle R_{\alpha},F_{\beta}\right\rangle=\delta_{\alpha,\beta}\). For a composition \(\alpha\), the _elementary non-commutative symmetric function_ is defined as \[E_{\alpha}=\sum_{\beta\preceq\alpha}(-1)^{|\alpha|-\ell(\beta)}H_{\beta}.\] For more details on the non-commutative symmetric functions see [12]. ### The dual immaculate quasisymmetric functions The dual immaculate basis of \(QSym\) was introduced by Berg, Bergeron, Saliola, Serrano, and Zabrocki in [3]. Like the Schur functions, the dual immaculate functions are defined combinatorially as the sum of monomials associated to certain tableaux. **Definition 2.19**.: Let \(\alpha\) and \(\beta\) be a composition and weak composition respectively. An _immaculate tableau_ of shape \(\alpha\) and type \(\beta\) is a labelling of the boxes of the diagram of \(\alpha\) by positive integers in such a way that: 1. the number of boxes labelled by \(i\) is \(\beta_{i}\), 2. the sequence of entries in each row, from left to right, is weakly increasing, and 3. the sequence of entries in the first column, from top to bottom, is strictly increasing. An immaculate tableau \(T\) of type \(\beta=(\beta_{1},\ldots,\beta_{h})\) is associated with the monomial \(x^{T}=x_{1}^{\beta_{1}}x_{2}^{\beta}\cdots x_{h}^{\beta_{h}}\). **Example 2.20**.: The immaculate tableaux of shape \(\alpha=(2,3)\) and type \(\beta=(1,2,2)\) are: [MISSING_PAGE_POST] \[\begin{array}{c **Definition 2.21**.: For a composition \(\alpha\), the _dual immaculate function_ is defined by \[\mathfrak{S}_{\alpha}^{*}=\sum_{T}x^{T},\] where the sum runs over all immaculate tableaux \(T\) of shape \(\alpha\). **Example 2.22**.: The dual immaculate function \(\mathfrak{S}_{(2,2)}^{*}\) corresponds to immaculate tableaux of shape \((2,2)\): \[\begin{array}{|c| **Definition 2.29**.: _[_3, 25_]_ _A standard immaculate tableau \(U\) has a descent in position \(i\) if \((i+1)\) is in a row strictly lower than \(i\) in \(U\). We denote the set of all descents in \(U\) as \(Des(U)\), called the descent set of \(U\). If \(Des(U)=\{d_{1},\ldots,d_{k-1}\}\) then the descent composition of \(U\) is defined as \(co(U)=comp(Des(U))=(d_{1},d_{2}-d_{1},d_{3}-d_{2},\ldots,n-d_{k-1})\)._ **Proposition 2.30**.: _[_3_]_ _The dual immaculate functions \(\mathfrak{S}_{\alpha}^{*}\) are fundamental positive. They expand as_ \[\mathfrak{S}_{\alpha}^{*}=\sum_{\beta\leq\iota\alpha}L_{\alpha,\beta}F_{\beta},\] _where \(L_{\alpha,\beta}\) is the number of standard immaculate tableaux with shape \(\alpha\) and descent composition \(\beta\)._ **Example 2.31**.: Let \(\alpha=(2,2)\). The standard immaculate tableaux of shape \((2,2)\), listed with their descent sets and descent compositions, are \[\begin{array}{c}S_{1}=\framebox{1}\framebox{2}\\ \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\parpar\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par\par \par\par\par\par\par\par\par\par\par\par\par\par **Example 2.35**.: If \(\alpha=(\alpha_{1},\alpha_{2})\), then \(\mathfrak{S}_{(\alpha_{1},\alpha_{2})}=\mathbb{B}_{\alpha_{1}}(H_{\alpha_{2}})=H _{\alpha_{1}}H_{\alpha_{2}}-H_{\alpha_{1}+1}H_{\alpha_{2}-1}\). Properties of these Bernstein operators lead to a right Pieri rule for the immaculate functions. **Theorem 2.36**.: _[_3_]_ _For a composition \(\alpha\) and an integer \(s\),_ \[\mathfrak{S}_{\alpha}H_{s}=\sum_{\alpha\subset_{s}\beta}\mathfrak{S}_{\beta},\] _where the sum runs over all compositions \(\beta\) such that \(\alpha\subset_{s}\beta\)._ **Example 2.37**.: Applying the Pieri rule for \(\alpha=(2,1)\) and \(s=2\) yields \[\mathfrak{S}_{(2,1)}H_{(2)}=\mathfrak{S}_{(2,1,2)}+\mathfrak{S}_{(2,2,1)}+ \mathfrak{S}_{(3,1,1)}+\mathfrak{S}_{(2,3)}+\mathfrak{S}_{(3,2)}+\mathfrak{S}_ {(4,1)}.\] Iteration of this Pieri rule yields the following positive expansions of the complete homogeneous and ribbon bases in terms of the immaculate basis: \[H_{\beta}=\sum_{\alpha\geq\iota\beta}K_{\alpha,\beta}\mathfrak{S}_{\alpha} \qquad\text{ and }\qquad R_{\beta}=\sum_{\alpha\geq\iota\beta}L_{\alpha,\beta} \mathfrak{S}_{\alpha}.\] Notice that these expansions relate to those in Propositions 2.23 and 2.30 via Proposition 2.14. The expansion of the immaculate functions into the complete homogeneous basis follows a _Jacobi-Trudi rule_. **Theorem 2.38**.: _[_3_]_ _For \(\alpha=[\alpha_{1},\ldots,\alpha_{m}]\in\mathbb{Z}^{m}\),_ \[\mathfrak{S}_{\alpha}=\sum_{\sigma\in S_{m}}(-1)^{\sigma}H_{\alpha_{1}+\sigma _{1}-1,\alpha_{2}+\sigma_{2}-2,\ldots,\alpha_{m}+\sigma_{m}-m},\] _with \(H_{0}=1\) and \(H_{-m}=0\) for \(m>0\). This is equivalent to taking the non-commutative analogue of the determinant of the matrix below obtained by expanding the determinant of the matrix along the first row and multiplying those elements on the left:_ \[\left[\begin{array}{cccc}H_{\alpha_{1}}&H_{\alpha_{1}+1}&\cdots&H_{\alpha_{ 1}+\ell-1}\\ H_{\alpha_{2}-1}&H_{\alpha_{2}}&\cdots&H_{\alpha_{2}+\ell-2}\\ \vdots&\vdots&\ddots&\vdots\\ H_{\alpha_{\ell}-\ell+1}&H_{\alpha_{\ell}-\ell+2}&\cdots&H_{\alpha_{\ell}}\\ \end{array}\right].\] Certain classes of immaculate functions also have simpler expansions in terms of the complete homogeneous basis [3]. For instance, for a positive integer \(n\), \[\mathfrak{S}_{1^{n}}=\sum_{\alpha\models n}(-1)^{n-\ell(\alpha)}H_{\alpha}.\] There is another right Pieri rule for multiplication by these immaculate functions. For a composition \(\alpha\) and a positive integer \(s\), \[\mathfrak{S}_{\alpha}\mathfrak{S}_{1^{s}}=\sum_{\begin{subarray}{c}\beta \models|\alpha|+s\\ \alpha_{i}\leq\beta_{i}\leq\alpha_{i}+1\end{subarray}}\mathfrak{S}_{\beta}.\] ### Skew dual immaculate functions and the immaculate poset The _immaculate poset_\(\mathfrak{P}\), also defined in [3], is a labelled poset on compositions where \(\alpha\) covers \(\beta\) if \(\beta\subset_{1}\alpha\). In other words, \(\alpha\) covers \(\beta\) if \(\alpha\) can be obtained by adding \(1\) to any part of \(\beta\) or to the end of \(\beta\) as a new part. In terms of diagrams, this is equivalent to adding a box to the right of any row or adding a box at the bottom of the tableau. In the Hasse diagram of \(\mathfrak{P}\), label the arrow from \(\beta\) to \(\alpha\) with \(m\), where \(m\) is the number of the row where the new box is added. Maximal chains from \(\emptyset\) to \(\alpha\) are equivalent to standard immaculate tableaux of shape \(\alpha\), and maximal chains from \(\beta\) to \(\alpha\) define skew standard immaculate tableaux of shape \(\alpha/\beta\). A path \(\{\beta=\beta^{(0)}\rightarrow^{m_{1}}\beta^{(1)}\rightarrow^{m_{2}}\ldots \rightarrow^{m_{k}}\beta^{(k)}=\alpha\}\) corresponds to the skew standard immaculate tableaux of shape \(\alpha/\beta\) where the boxes are filled with positive integers in the order they were added following the path. **Example 2.39**.: Consider two paths \(\mathcal{P}_{1}=\{\emptyset\xrightarrow{1}(1)\xrightarrow{2}(1,1)\xrightarrow{2}(1,2 )\xrightarrow{1}(2,2)\}\) and \(\mathcal{P}_{2}=\{\emptyset\xrightarrow{1}(1)\xrightarrow{1}(2)\xrightarrow{2}(2,1 )\xrightarrow{2}(2,2)\}\). These paths correspond to the standard immaculate tableaux \(T_{1}\) and \(T_{2}\) below, respectively. The path \(\mathcal{P}_{3}=\{(1)\xrightarrow{2}(1,1)\xrightarrow{1}(2,1)\xrightarrow{2}(2, 2)\}\) corresponds to the skew standard immaculate tableau \(T_{3}\). **Definition 2.40**.: [23] Let \(\alpha\) and \(\beta\) be compositions where \(\beta\subseteq\alpha\). A _skew immaculate tableau_ of shape \(\alpha/\beta\) is a skew shape \(\alpha/\beta\) filled with positive integers such that the entries in the first column of \(\alpha\) are strictly increasing from top to bottom and the entries in rows are weakly increasing from left to right. We say \(T\) is a _skew standard immaculate tableau_ if it contains the entries \(1,\ldots,|\alpha|-|\beta|\) with each appearing exactly once. **Definition 2.41**.: [3] Given compositions \(\alpha\) and \(\beta\) with \(\beta\subseteq\alpha\), the _skew dual immaculate function_ is defined as \[\mathfrak{S}^{*}_{\alpha/\beta}=\sum_{\gamma}\langle\mathfrak{S}_{\beta}H_{ \gamma},\mathfrak{S}^{*}_{\alpha}\rangle M_{\gamma},\] where the sum runs over all \(\gamma\) such that \(|\alpha|-|\beta|=|\gamma|\). The coefficient \(\langle\mathfrak{S}_{\beta}H_{\gamma},\mathfrak{S}^{*}_{\alpha}\rangle\) is exactly equal to the number of skew standard immaculate tableaux of shape \(\alpha/\beta\) with type \(\gamma\)[23]. Thus, the skew dual immaculate functions can also be defined by a sum over skew immaculate tableaux. **Theorem 2.42**.: _[_23_]_ _Let \(\alpha\) and \(\beta\) be compositions with \(\beta\subseteq\alpha\). Then_ \[\mathfrak{S}^{*}_{\alpha/\beta}=\sum_{T}x^{T},\] _where the sum runs over all skew immaculate tableaux of shape \(\alpha/\beta\)._ Expansions of the skew dual immaculate functions into the fundamental and dual immaculate bases yield coefficients with connections to the multiplicative structure of the immaculate functions. **Proposition 2.43**.: _[_3_]_ _Given compositions \(\alpha\) and \(\beta\) with \(\beta\subseteq\alpha\),_ \[\mathfrak{S}^{*}_{\alpha/\beta}=\sum_{\gamma}\langle\mathfrak{S}_{\beta}R_{ \gamma},\mathfrak{S}^{*}_{\alpha}\rangle F_{\gamma}=\sum_{\gamma}\langle \mathfrak{S}_{\beta}\mathfrak{S}_{\gamma},\mathfrak{S}^{*}_{\alpha}\rangle \mathfrak{S}^{*}_{\gamma},\] _where the sums run over all \(\gamma\in\mathfrak{P}\) such that \(|\alpha|-|\beta|=|\gamma|\). Moreover, the coefficients \(c^{\alpha}_{\beta,\gamma}=\langle\mathfrak{S}_{\beta}\mathfrak{S}_{\gamma}, \mathfrak{S}^{*}_{\alpha}\rangle\) are the immaculate structure constants that appear in the expansion_ \[\mathfrak{S}_{\beta}\mathfrak{S}_{\gamma}=\sum_{\alpha}c^{\alpha}_{\beta, \gamma}\mathfrak{S}_{\alpha}.\] Additionally, the comultiplication of the dual immaculate functions can be described using skew compositions. **Definition 2.44**.: [23] Given \(\alpha\models n\), the comultiplication on \(\mathfrak{S}^{*}_{\alpha}\) is defined as \[\Delta(\mathfrak{S}^{*}_{\alpha})=\sum_{\beta}\mathfrak{S}^{*}_{\beta}\otimes \mathfrak{S}^{*}_{\alpha/\beta},\] where the sum runs over all compositions \(\beta\) such that \(\beta\subseteq\alpha\). The multiplication and antipode of the dual immaculate functions do not yet have combinatorial definitions for the general case. For more on the immaculate and dual immaculate functions see [1, 4, 6, 7, 8, 14, 20, 21]. ## 3. Doliwa's colored \(QSym_{A}\) and \(NSym_{A}\) The algebra of non-commutative symmetric functions, and dually the algebra of quasisymmetric functions, have natural generalizations isomorphic to algebras of sentences. In [11], Doliwa introduces these generalizations which are built using partially commutative colored variables. Let \(A=\{a_{1},a_{2},\ldots,a_{m}\}\) be an alphabet of letters, which we call _colors_. _Words_ over \(A\) are finite sequences of colors written without separating commas. Finite sequences of non-empty words are called _sentences_. The empty word and the empty sentence are both denoted by \(\emptyset\). A _weak sentence_ may include empty words. The _size_ of a word \(w\), denoted \(|w|\), is the total number of colors it contains. Note that when we refer to "the number of colors", we are counting repeated colors unless we say "the number of unique colors". The _size_ of a sentence \(I=(w_{1},w_{2},\ldots,w_{k})\), denoted \(|I|\), is also the number of colors it contains. The _length_ of a sentence \(I\), denoted \(\ell(I)\), is the number of words it contains. The _concatenation_ of two words \(w=a_{1}\cdots a_{k}\) and \(v=b_{1}\cdots b_{j}\) is \(w\cdot v=a_{1}\cdots a_{k}b_{1}\cdots b_{j}\), sometimes just denoted \(wv\). The word obtained by concatenating every word in a sentence \(I\) is called the _maximal word_ of \(I\), denoted \(w(I)=w_{1}w_{2}\ldots w_{k}\). For our purposes, we also define the _word length_ of \(I\) as \(w\ell(I)=(|w_{1}|,\ldots,|w_{k}|)\), which gives the underlying composition of the sentence. **Example 3.1**.: Let \(a,b,c\in A\) and let \(w_{1}=ac\), \(w_{2}=b\), and \(w_{3}=cab\) be words. Consider the sentence \(I=(w_{1},w_{2},w_{3})=(ac,b,cab)\). Then, \(|w_{1}|=2\), \(|w_{2}|=1\), \(|w_{3}|=3\), and \(|I|=6\). The length of \(I\) is \(\ell(I)=3\) and the word length of \(I\) is \(w\ell(I)=(2,1,3)\). The maximal word of \(I\) is \(w(I)=acbcab\). A sentence \(I\) is a refinement of a sentence \(J\), written \(I\preceq J\), if \(J\) can be obtained by concatenating some adjacent words of \(I\). In other words, \(I\preceq J\) if \(w(I)=w(J)\) and \(w\ell(I)\preceq w\ell(J)\). In this case, \(I\) is called a _refinement_ of \(J\) and \(J\) a _coarsening_ of \(I\). The _Mobius function_ on the poset of sentences ordered by refinement is \[\mu(J,I)=(-1)^{\ell(J)-\ell(I)}\quad\text{for}\quad\ J\preceq I. \tag{1}\] Given a total order \(\leq\) on \(A\), define the following _lexicographic order_\(\preceq_{\ell}\) on words. For words \(w=a_{1}\ldots a_{k}\) and \(v=b_{1}\ldots b_{j}\), we say \(w\leq_{\ell}v\) if \(a_{i}<b_{i}\) for the first positive integer \(i\) such that \(a_{i}\neq b_{i}\). Note that if no such \(i\) exists then \(w=v\). **Example 3.2**.: Let \(A=\{a<b<c\}\) and \(I=(abc)\). The refinements of \(I\) are \((abc)\), \((a,bc)\), \((ab,c)\), and \((a,b,c)\). Under lexicographic order, \(abc\preceq_{\ell}acb\preceq_{\ell}bac\preceq_{\ell}bca\preceq_{\ell}cab \preceq_{\ell}cba\). The _concatenation_ of two sentences \(I=(w_{1},\ldots,w_{k})\) and \(J=(v_{1},\ldots,v_{h})\) is \(I\cdot J=(w_{1},\ldots,w_{k},v_{1},\ldots,v_{h})\). Their _near-concatenation_ is \(I\odot J=(w_{1},\ldots,w_{k}v_{1},\ldots,v_{h})\) where the words \(w_{k}\) and \(v_{1}\) are concatenated into a single word. Given \(I=(w_{1},\ldots,w_{k})\) where \(a_{i}\) is the \(i^{\text{th}}\) entry in \(I\) and \(a_{i+1}\) is the \((i+1)^{\text{th}}\) entry in \(I\), we say that \(I\)_splits_ after the \(i^{\text{th}}\) entry if \(a_{i}\in w_{j}\) and \(a_{i+1}\in w_{j+1}\) for \(j\in[k]\). **Example 3.3**.: Let \(I=(a,bc)\) and \(J=(ca,b)\). Then, \(I\cdot J=(a,bc,ca,b)\) and \(I\odot J=(a,bcca,b)\). The sentence \((a,bcca,b)\) splits after the \(1^{\text{st}}\) and \(5^{\text{th}}\) entries. Given \(I=(w_{1},\ldots,w_{k})\), the _reversal_ of \(I\) is \(I^{r}=(w_{k},w_{k-1},\ldots,w_{1})\). The _complement_ of \(I\), denoted \(I^{c}\), is the unique sentence such that \(w(I)=w(I^{c})\) and \(I^{c}\) splits exactly where \(I\) does not. Both maps are involutions on sentences. **Example 3.4**.: Let \(I=(abc,de)\). Then \(I^{r}=(de,abc)\) and \(I^{c}=(a,b,cd,e)\). The _flattening_ of a weak sentence \(I\), denoted \(\tilde{I}\), is the sentence obtained by removing all empty words from \(I\). Further, for a weak sentence \(J=(v_{1},\ldots,v_{k})\) and a sentence \(I=(w_{1},\ldots,w_{k})\), we say that \(J\) is _right-contained_ in \(I\), denoted \(J\subseteq_{R}I\), if there exists a weak sentence \(I/_{R}J=(u_{1},\ldots,u_{k})\) such that \(w_{i}=u_{i}v_{i}\) for every \(i\in[k]\). We say that \(J\) is _left-contained_ in \(I\), denoted \(J\subseteq_{L}I\), if there exists a weak sentence \(I/_{L}J=(q_{1},\ldots,q_{k})\) such that \(w_{i}=v_{i}q_{i}\) for every \(i\in[k]\). Note that right-containment is denoted \(I/J\) in [11] but here that notation is used exclusively to denote skew shapes. **Example 3.5**.: Let \(I=(ab,cdef)\), \(J=(b,ef)\), and \(K=(a,cde)\). Then \(J\subseteq_{R}I\) and \(I/_{R}J=(a,cd)\), while \(K\subseteq_{L}I\) and \(I/_{L}K=(b,f)\). Given the weak sentence \(I=(\emptyset,a,\emptyset,bc)\), the flattening of \(I\) is \(\tilde{I}=(a,bc)\). ### The Hopf algebra of sentences and colored non-commutative symmetric functions The algebra of sentences (colored compositions) is a Hopf algebra with the multiplication being the concatenation of sentences, the comultiplication given by \[\Delta(I)=\sum_{J\subseteq R\ I}\widetilde{I/_{R}J}\otimes\tilde{J},\] the natural unity map, the counit \[\epsilon(I)=\begin{cases}1,&\text{if $I=1$},\\ 0,&\text{otherwise},\end{cases}\] and the antipode \[S(I)=\sum_{J\preceq I^{r}}(-1)^{\ell(J)}J.\] The algebra of sentences taken over an alphabet with only one letter is isomorphic to NSym. Thus, the algebra of sentences taken over any alphabet \(A\) is a natural extension of \(NSym\) called _the algebra of colored non-commutative symmetric functions_, denoted \(NSym_{A}\). The linear basis of sentences \(I\) is the complete homogeneous basis of \(NSym_{A}\), denoted \(\{H_{I}\}_{I}\). \(NSym_{A}\) can also be defined as the algebra freely generated over non-commuting elements \(H_{w}\) for any word in \(A\). The Hopf algebra operations extend to \(\{H_{I}\}_{I}\) as follows: \[H_{I}\cdot H_{J}=H_{I\cdot J},\qquad\quad\Delta(H_{I})=\sum_{J\subseteq R^{I} }H_{\widetilde{I/_{R}J}}\otimes H_{J},\qquad\quad S(H_{I})=\sum_{J\preceq I^ {r}}(-1)^{\ell(J)}H_{J}.\] The reversal and complement operations extend as \(H_{I}^{r}=H_{I^{r}}\) and \(H_{I}^{c}=H_{I^{c}}\). **Definition 3.6**.: The _uncoloring_ map \(\upsilon:NSym_{A}\to NSym\) is defined \(\upsilon(H_{I})=H_{w\ell(I)}\) and extended linearly. If the alphabet \(A\) only contains one color, then \(\upsilon\) is an isomorphism. We say that two bases \(\{B_{I}\}_{I}\) and \(\{C_{\alpha}\}_{\alpha}\) in \(NSym_{A}\) and \(NSym\) respectively are _analogous_ if \(\upsilon(B_{I})=C_{w\ell(I)}\) for all sentences \(I\) when \(A\) is an alphabet of one color. For instance, the colored complete homogeneous basis of \(NSym_{A}\) is analogous to the complete homogeneous basis of \(NSym\). \(NSym_{A}\) also contains analogues of the elementary and ribbon bases of \(NSym\). For a sentence \(I\), the _colored elementary function_ is defined by \[E_{I}=\sum_{J\preceq I}(-1)^{|I|-\ell(J)}H_{J},\] and the _colored ribbon function_ is defined by \[R_{I}=\sum_{J\succeq I}(-1)^{\ell(J)-\ell(I)}H_{J}\qquad\text{and}\qquad\quad H _{I}=\sum_{J\succeq I}R_{J}. \tag{2}\] Note that we use \(\upsilon\) to denote the uncoloring maps on both \(QSym_{A}\) and \(NSym_{A}\), and often refer to these together as if they are one map. ### The colored quasisymmetric functions and colored duality The colored quasisymmetric functions, which constitute the algebra dual to \(NSym_{A}\), are constructed using partially commutative colored variables. For a color \(a\in A\), define the set of infinite colored variables \(x_{a}=\{x_{a,1},x_{a,2},\ldots\}\) and let \(x_{A}=\cup_{a\in A}x_{a}\). These variables are assumed to be partially commutative in the sense that variables only commute if the second indices are different. That is, for \(a,b\in A\), \[x_{a,i}x_{b,j}=x_{b,j}x_{a,i}\text{ for }i\neq j\qquad\text{and}\qquad x_{a,i}x_{ b,i}\neq x_{b,i}x_{a,i}\text{ if }a\neq b.\] As a result, every monomial in variables \(x_{a,i}\) can be uniquely re-ordered so that the sequence of the second indices of the variables is weakly increasing, at which point any first indices sharing the same color can be combined into a single word. Every monomial has a sentence \((w_{1},\ldots,w_{m})\) defined by its re-ordered, combined form \(x_{w_{1},j_{1}}\cdots x_{w_{m},j_{m}}\) where \(j_{1}<\ldots<j_{m}\). Similar notions of coloring with different assumptions of partial commutativity can be found in [5, 24]. **Example 3.7**.: The monomial \(x_{a,2}x_{b,3}x_{b,1}x_{c,2}\) can be reordered as \(x_{b,1}x_{a,2}x_{c,2}x_{b,3}\) and combined as \(x_{b,1}x_{ac,2}x_{b,3}\). Then, the sentence of this monomial is \((b,ac,b)\). \(QSym_{A}\) is a subset of \(\mathbb{Q}[x_{A}]\) defined as the set of formal power series such that the coefficients of the monomials indexed by the same sentence are equal. **Example 3.8**.: The following function \(f(x_{A})\) is in \(QSym_{A}\): \[f(x_{A})=3x_{a,1}x_{bc,2}+3x_{a,1}x_{bc,3}+\ldots+3x_{a,2}x_{bc,3}+3x_{a,2}x_{ bc,4}+\ldots.\] Bases in \(QSym\) extend naturally to bases in \(QSym_{A}\). For a sentence \(I=(w_{1},w_{2},\ldots,w_{m})\), the _colored monomial quasisymmetric function_\(M_{I}\) is defined as \[M_{I}=\sum_{1\leq j_{1}<j_{2}<\ldots<j_{m}}x_{w_{1},j_{1}}x_{w_{2},j_{2}}\ldots x _{w_{m},j_{m}},\] where the sums runs over strictly increasing sequences of \(m\) positive integers \(j_{1},\ldots,j_{m}\in\mathbb{Z}_{>0}\). **Example 3.9**.: The colored monomial quasisymmetric function for the sentence \((a,bc)\) is \[M_{(a,bc)}=x_{a,1}x_{bc,2}+x_{a,1}x_{bc,3}+\ldots+x_{a,2}x_{bc,3}+x_{a,2}x_{bc,4}+\ldots+x_{a,3}x_{bc,4}+\ldots.\] **Proposition 3.10**.: _[_11_]_ _The subspace \(QSym_{A}\) of \(\mathbb{Q}[x_{A}]\) spanned by \(\{M_{I}\}_{I}\) is a subalgebra isomorphic to the graded algebra dual of \(NSym_{A}\) such that \(M_{I}\) is mapped to the dual of \(H_{I}\). That is, \(QSym_{A}\) and \(NSym_{A}\) are Hopf algebras dually paired by the inner product \(\langle H_{I},M_{J}\rangle=\delta_{I,J}\)._ \(QSym_{A}\) and \(NSym_{A}\) inherit the product and coproduct from the Hopf algebra of sentences. The quasishuffle \(I\)\(\begin{array}{c}Q\\ \hbox{\begin{picture}(0.0,0.0)\put(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.00,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0.0){ \vector(0.0,0){\vector(0.0,0.0){\vector(0.0,0.0){\vector(0.0,0){\vector(0.0){ \vector(0.0){\vector(0\)(0.0,{\vector(0)0.0){\vector(0\)(0.0,0.0){\vector(0.0,0){ \vector(0.0){\vector(0.00){\vector(0){\vector(0)(0.00,{\vector(0){\vector(0)(0){ \vector(0){\vector(0){\scriptsize(((((((((((({{{{{{{{\{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ \ \ \\ \ \ \\ \ ## 4. A partially commutative generalization of the dual immaculate functions To generalize the dual immaculate functions to \(QSym_{A}\), we first define a colored generalization of tableaux. These allow for a combinatorial definition of the colored dual immaculate functions, which then expand positively into the colored monomial and colored fundamental bases. Additionally, we define the colored immaculate descent graph and use it to give an expansion of the colored fundamental functions into the colored dual immaculate functions. ### The colored dual immaculate basis of \(QSym_{A}\) **Definition 4.1**.: For a sentence \(J=(w_{1},\ldots,w_{k})\), the _colored composition diagram_ of shape \(J\) is a composition diagram of \(w\ell(J)\) where the \(j^{\text{th}}\) box in row \(i\) is colored, or filled, with the \(j^{\text{th}}\) color in \(w_{i}\). **Example 4.2**.: The colored composition diagram of shape \(J=(aba,cb)\) for \(a,b,c\in A\) is \begin{tabular}{|c|c|c|} \hline \(a\) & \(b\) & \(a\) \\ \hline \(c\) & \(b\) & \\ \hline \end{tabular} **Definition 4.3**.: For a sentence \(I\), a _colored immaculate tableau_ (CIT) of shape \(I\) is a colored composition diagram of \(I\) filled with positive integers such that the integer entries in each row are weakly increasing from left to right and the entries in the first column are strictly increasing from top to bottom. **Definition 4.4**.: The _type_ of a CIT \(T\) is a sentence \(B=(u_{1},\ldots,u_{j})\) that indicates how many boxes of each color are filled with each integer and in what order those boxes appear. That is, each word \(u_{i}\) in \(B\) is defined by starting in the lowest box containing an \(i\) and reading the colors of all boxes containing \(i\)'s going from left to right, bottom to top. If no box is filled with the number \(i\), then \(u_{i}=\emptyset\). The flat type of \(T\) is given by the flattening of \(B\), denoted again by \(\tilde{B}\). For a CIT \(T\) of type \(B=(u_{1},\ldots,u_{j})\), the monomial \(x_{T}\) is defined \(x_{T}=x_{u_{1},1}x_{u_{2},2}\cdots x_{u_{j},j}\), which may also be denoted \(x_{B}\). **Example 4.5**.: The colored immaculate tableaux of shape \(J=(aba,cb)\) and type \(B=(a,c,\emptyset,b,ba)\) are \begin{tabular}{|c|c|c|} \hline \(a,1\) & \(b,5\) & \(a,5\) \\ \hline \(c,2\) & \(b,4\) & \(c,2\) & \(b,5\) \\ \hline \end{tabular} Both tableaux are associated with the monomial \(x_{a,1}x_{c,2}x_{b,4}x_{ba,5}\) and have the flat type \(\tilde{B}=(a,c,b,ba)\). **Definition 4.6**.: For a sentence \(J\), the _colored dual immaculate function_ is defined as \[\mathfrak{S}_{J}^{*}=\sum_{T}x_{T},\] where the sum is taken over all colored immaculate tableaux \(T\) of shape \(J\). **Example 4.7**.: For \(J=(aba,cb)\), the colored dual immaculate function is \[\mathfrak{S}_{aba,cb}^{*}=x_{aba,1}x_{cb,2}+x_{ab,1}x_{cba,2}+x_{aba,1}x_{c,2 }x_{b,3}+\ldots+2x_{a,1}x_{c,2}x_{b,3}x_{ba,4}+\ldots.\] The colored dual immaculate functions map to the dual immaculate functions in \(QSym\) under the uncoloring map \(\upsilon\), thus we say the two bases are analogous. **Proposition 4.8**.: _Let \(A\) be an alphabet of one color, \(A=\{a\}\), and \(I\) be a sentence. Then,_ \[\upsilon(\mathfrak{S}_{I}^{*})=\mathfrak{S}_{w\ell(I)}^{*}.\] _Moreover, \(\{\mathfrak{S}_{I}^{*}\}_{I}\) in \(QSym_{A}\) is analogous to \(\{\mathfrak{S}_{\alpha}^{*}\}_{\alpha}\) in \(QSym\)._ Proof.: Observe that \(\upsilon\) acts on a monomial \(x_{T}\) where \(T\) is a colored immaculate tableau of shape \(I\) by mapping it to the monomial \(x^{T^{\prime}}\) where \(T^{\prime}\) is the immaculate tableau of shape \(w\ell(I)\) with the same integer entries as \(T\). Thus, \(\upsilon(\mathfrak{S}_{I}^{*})=\mathfrak{S}_{w\ell(I)}^{*}\) for all alphabets \(A\) and more specifically alphabets \(A\) containing only one color. We now introduce further results on colored immaculate tableaux to provide a foundation for the expansions of the colored dual immaculate functions into other bases of \(QSym\). **Definition 4.9**.: A _standard colored immaculate tableau_ (SCIT) of size \(n\) is a colored immaculate tableau in which the integers \(1\) through \(n\) each appear exactly once. The _standardization_ of a CIT \(T\), denoted \(std(T)\), is a standard colored immaculate tableau obtained by renumbering the boxes of \(T\) in the order they appear in its type. **Example 4.10**.: A few colored immaculate tableaux of shape \(J=(ab,cb)\) together with their standardizations, which are the only standard colored immaculate tableaux of shape \(J=(ab,cb)\), are: Standard colored immaculate tableaux share certain statistics and properties with non-colored standard immaculate tableaux. The number of SCIT of shape \(J\) is the same as the number of standard immaculate tableaux of shape \(wt(J)\), meaning both are counted by the same hook length formula in [3]. Additionally, the notions of _descent_ and _descent composition_ for SCIT are the same as those in Definition 2.29, simply disregarding color. However, we define an additional concept of the colored descent composition. **Definition 4.11**.: Let \(T\) be a standard colored immaculate tableau of type \(B\) with descent set \(Des(T)=\{i_{1},\ldots,i_{k}\}\) for some \(k\in\mathbb{Z}_{>0}\). The _colored descent composition_ of \(T\), denoted \(co_{A}(T)\), is the unique sentence obtained by splitting \(w(B)\) after the \(i_{j}^{\text{th}}\) entry for each \(j\in[k]\). The colored descent composition can also be defined as the sentence obtained by reading through the colors of the tableau in the order that the boxes are numbered and splitting into a new word each time the next box is in a strictly lower row. Note that for a SCIT \(T\) of type \(B\), the colored descent composition is the unique sentence for which \(w\ell(co_{A}(T))=co(T)\) and \(w(co_{A}(T))=w(B)\). **Example 4.12**.: The standard colored immaculate tableaux of shape \((ab,cb)\), along with their descent sets and colored descent compositions, are: \[T_{1}=\boxed{\begin{array}{c|c}a,1&b,2\\ \hline c,3&b,4\\ \hline\end{array}}\qquad T_{2}=\boxed{\begin{array}{c|c}a,1&b,3\\ \hline c,2&b,4\\ \hline\end{array}}\qquad T_{3}=\boxed{\begin{array}{c|c}a,1&b,4\\ \hline c,2&b,3\\ \hline\end{array}}\] **Proposition 4.13**.: _Let \(T_{1}\) and \(T_{2}\) be colored immaculate tableaux of shape \(J\) and type \(B\). Then, \(T_{1}=T_{2}\) if and only if \(std(T_{1})=std(T_{2})\)._ Proof.: It is trivial that \(T_{1}=T_{2}\) implies \(std(T_{1})=std(T_{2})\). Now, let \(std(T_{1})=std(T_{2})=U\), meaning by definition that the boxes of \(T_{1}\) appear in \(B\) in the same order as the boxes of \(T_{2}\). The box \((i,j)\) in row \(i\) and column \(j\) in both tableaux is filled with the same integer \(k\) and with the \(k^{\text{th}}\) color in \(w(B)\), thus \(T_{1}=T_{2}\). **Proposition 4.14**.: _Let \(U\) be a standard colored immaculate tableau of shape \(J\). For a weak sentence \(B\), there exists a colored immaculate tableau \(T\) of shape \(J\) and type \(B\) that standardizes to \(U\) if and only if \(\tilde{B}\preceq co_{A}(U)\)._ Proof.: (\(\Rightarrow\)) Let \(T\) be a colored immaculate tableau of shape \(J\) and type \(B\) such that \(std(T)=U\). Both \(B\) and \(co_{A}(U)\) are defined by the order that boxes appear in the type of \(T\), thus they have the same maximum words \(w(B)=w(co_{A}(U))\). Note that this also means the \(i^{\text{th}}\) letter in \(\tilde{B}\) and the \(i^{\text{th}}\) letter in \(co_{A}(U)\) correspond to the same box in \(J\). Recall that \(co_{A}(U)\) splits only after descents, and suppose that \(co_{A}(U)\) splits after the \(i^{\text{th}}\) letter. Then the \((i+1)^{\text{th}}\) letter is on a strictly lower row. Given that these entries correspond exactly to the \(i^{\text{th}}\) and \((i+1)^{\text{th}}\) letter in \(\tilde{B}\), this tells us that \(\tilde{B}\) must also split since the following entry is on a lower row. Thus \(\tilde{B}\) also splits after every descent which implies that \(\tilde{B}\preceq co_{A}(U)\). (\(\Leftarrow\)) Let \(\tilde{B}=(v_{1},\ldots,v_{j})\preceq co_{A}(U)\) and let \(v_{i}\) be the \(n_{i}^{\text{th}}\) word in \(B\). We create a colored immaculate tableau \(T\) of shape \(J\) and type \(B\) that standardizes to \(U\) by filling the boxes of \(T\) in the order they are numbered in \(U\). The first \(|v_{1}|\) boxes are labeled with \(n_{1}\)'s, the next \(|v_{2}|\) boxes are labeled with \(n_{2}\)'s, and continue this process until the last \(|v_{j}|\) boxes are labeled with \(n_{j}\)'s. Since \(\tilde{B}\preceq co_{A}(U)\), each time there is a descent in \(U\) the number being filled in must increase. This maintains the order of the boxes in the type from \(U\), meaning \(T\) standardizes to \(U\). This filling also maintains the strictly increasing condition on the first column and the weakly increasing condition on each row by construction. Therefore, \(T\) is a colored immaculate tableau of shape \(J\) and type \(B\) with \(std(T)=U\). ### Expansion into the colored monomial and colored fundamental bases The colored dual immaculate functions have positive expansions into the colored monomial and colored fundamental bases. Their coefficients are determined combinatorially using colored immaculate tableaux. #### 4.2.1. Expansion into the colored monomial functions First, we establish the relationship between the colored monomial quasisymmetric functions and colored immaculate tableaux. Then, we define coefficients counting colored immaculate tableaux and prove our expansion. Finally, the transition matrix of these coefficients leads to a proof that the colored dual immaculate functions are indeed a basis of \(QSym_{A}\). **Proposition 4.15**.: _For a sentence \(B\), consider a standard colored immaculate tableau \(U\) such that \(B\preceq co_{A}(U)\). Then,_ \[M_{B}=\sum_{T}x_{T},\] _where the sum runs over all colored immaculate tableaux \(T\) such that \(std(T)=U\) and \(\widetilde{\text{type}(T)}=B\)._ Proof.: Consider a standard colored immaculate tableau \(U\) and a sentence \(B=(v_{1},\ldots,v_{h})\) such that \(B\preceq co_{A}(U)\). By definition, \[M_{B}=\sum_{1\leq j_{1}<\ldots<j_{h}}x_{v_{1},j_{1}}\ldots x_{v_{h},j_{h}}.\] Each monomial \(x_{v_{1},j_{1}}\ldots x_{v_{h},j_{h}}\) is equal to \(x_{T}\) where \(T\) is the unique (by Proposition 4.13) colored immaculate tableau such that \(std(T)=U\) and its type \(C=(u_{1},\ldots,u_{g})\) is the sentence where word \(u_{j_{i}}\) is equal to \(v_{i}\) for \(1\leq i\leq h\) and all other words are empty. This includes a tableau \(T\) for every sentence \(C\) such that \(\tilde{C}=B\). Thus, the above sum is equivalent to summing \(x_{T}\) over all CIT \(T\) with type \(C\) such that \(std(T)=U\) and \(\tilde{C}=B\). **Example 4.16**.: The colored immaculate tableaux of shape \(J=(ab,cb)\) and type \(B=(a,cb,b)\) are \[T_{1}=\begin{array}{| takes a tableau \(T^{\prime}\) of shape \(J\) and type \(C\) and changes each \(i_{1}\) to \(1\), \(i_{2}\) to \(2\), \(\ldots\), and \(i_{h}\) to \(h\), which yields the initial tableau \(T\) of shape \(J\) and type \(\tilde{C}\). This is a bijection, meaning \(K_{J,C}=K_{J,\tilde{C}}\). **Example 4.19**.: Let \(J=(ab,cb)\) and \(B=(\emptyset,a,\emptyset,cb,b)\). Then, \(K_{J,B}=2\) because the colored immaculate tableaux of shape \(J\) and type \(B\) are: \[\begin{array}{|c|c|c|}\hline 2,a\ 5,b&\framebox{$2,a$}\ 4,b\\ \hline 4,c\ 4,b&\framebox{$4,c$}\ 5,b\\ \hline\end{array}\] Notice that \(K_{J,\tilde{B}}=2\) as well, since the colored immaculate tableaux of shape \(J\) and type \(\tilde{B}\) are: \[\begin{array}{|c|c|c|}\hline 1,a\ 3,b&\framebox{$1,a$}\ 2,b\\ \hline 2,c\ 2,b&\framebox{$2,c$}\ 3,b\\ \hline\end{array}\] **Theorem 4.20**.: _For a sentence \(J\), the colored dual immaculate function \(\mathfrak{S}_{J}^{*}\) expands positively into the colored monomial basis as_ \[\mathfrak{S}_{J}^{*}=\sum_{B}K_{J,B}M_{B},\] _where the sum is taken over all sentences \(B\) such that \(|B|=|J|\)._ Proof.: Let \(B_{1},\ldots,B_{j}\) be all possible flat types of colored immaculate tableaux of shape \(J\). Then arrange the sum \(\mathfrak{S}_{J}^{*}=\sum_{T}x_{T}\) into parts based on the flat types of the tableaux \(T\) as \[\mathfrak{S}_{J}^{*}=\sum_{\widetilde{type(T)}=B_{1}}x_{T}+\ldots+\sum_{ \widetilde{type(T)}=B_{j}}x_{T}.\] Consider the sum of \(x_{T}\) over \(T\) such that \(\widetilde{type(T)}=B_{i}\). By Proposition 4.18, for any \(C\) such that \(\tilde{C}=B\) we have \(K_{J,B_{i}}=K_{J,C}\). By definition, for any flat sentence B, \[M_{B}=\sum_{\tilde{C}=B}x_{C}.\] Thus, we can write \[\sum_{\widetilde{type(T)}=B_{i}}x_{T}=\sum_{\tilde{C}=B_{i}}K_{J,C}x_{C}=K_{J,B_{i}}\left(\sum_{\tilde{C}=B_{i}}x_{C}\right)=K_{J,B_{i}}M_{B_{i}}.\] Therefore the overall sum becomes \[\mathfrak{S}_{J}^{*}=K_{J,B_{1}}M_{B_{1}}+\ldots+K_{J,B_{j}}M_{B_{j}}=\sum_{B }K_{J,B}M_{B},\] where the sum runs over all flat types \(B\) of the colored immaculate tableaux of shape \(J\). For all other \(B\) such that \(|B|=|J|\), we have \(K_{J,B}=0\) and we can extend this sum to be over all sentences \(B\) such that \(|B|=|J|\). **Theorem 4.21**.: _The set of colored dual immaculate functions forms a basis for \(QSym_{A}\)._ Proof.: Let \(A\) be an alphabet with a total ordering, and consider the transition matrix \(\mathcal{K}\) from \(\{\mathfrak{S}_{I}^{*}\}_{I}\) to \(\{M_{I}\}_{I}\). By Theorem 4.20, the entry of \(\mathcal{K}\) in row \(J\) and column \(C\) is \(K_{J,C}\). We want to prove that \(\mathcal{K}\) is upper unitriangular and thus invertible when the rows and columns are ordered first by the reverse lexicographic order of compositions applied to word lengths, then by lexicographic order on words. For example, row \((a_{1}a_{2}a_{3},a_{4}a_{5})\) would come before row \((a_{1}a_{2},a_{3},a_{4}a_{5})\) because \((3,2)\preceq_{r\ell}(2,1,2)\), and, given \(a_{1}<a_{2}<\ldots<a_{5}\), row \((a_{1}a_{2}a_{2},a_{1}a_{3})\) would come before row \((a_{1}a_{2}a_{3},a_{4}a_{5})\) because \((3,2)=(3,2)\) and \((a_{1}a_{2}a_{2})\preceq_{\ell}(a_{1}a_{2}a_{3})\), Let \(J=(w_{1},\ldots,w_{k})\) and \(C=(v_{1},\ldots,v_{h})\) be sentences with \(|J|=|C|\). We claim that if \(wl(J)\succeq_{rl}wl(C)\) and \(K_{J,C}\neq 0\) then \(J=C\). Assume there exists a tableau \(T\) of shape \(J\) and type \(C\) with \(wl(J)\succeq_{rl}wl(C)\) and \(wl(J)\neqwl(C)\). Then \(|w_{1}|\leq|v_{1}|\). Observe that the first row of the tableau has \(|w_{1}|\) boxes and so if \(|w_{1}|<|v_{1}|\), there would have to be a \(1\) placed in a box somewhere below row \(1\). This is impossible by the conditions on colored immaculate tableaux so \(|w_{1}|=|v_{1}|\) and every box in row \(1\) is filled with \(1\)'s. Next, \(|w_{2}|\leq|v_{2}|\) and so the second row must start with a 2 for any 2's to exist in \(T\). This implies that the first entry in each subsequent row is greater than 2 meaning that no other row can contain 2's. If every 2 is in the second row then and the number of 2's is at least \(w_{2}\), then \(|w_{2}|=|v_{2}|\). Continuing this reasoning, \(|w_{i}|=|v_{i}|\) for \(1\leq i\leq k\). Thus, \(wl(J)=wl(C)\). Further, by this method we have filled the first row with 1's, the second row with 2's, the \(i^{\text{th}}\) row with \(i\)'s, etc. to construct a colored immaculate tableau such that \(w_{i}=v_{i}\) for all \(i\). Therefore, \(J=C\). By construction, this is the only tableau of shape \(J\) and type \(J\) so \(K_{J,J}=1\). To summarize, we have shown that \(K_{J,C}=0\) when \(wl(J)\succeq_{l}wl(C)\) unless \(J=C\), in which case the entry of the matrix lies on the diagonal and \(K_{J,J}=1\). Thus, we have proved \(\mathcal{K}\) is upper unitriangular. #### 4.2.2. Expansion into the colored fundamental functions To expand the colored dual immaculate functions into the colored fundamental basis we first define coefficients counting SCIT. Relating these to our earlier coefficients counting colored immaculate tableaux, we reformulate our expansion in Theorem 4.20 to an expansion in terms of the colored fundamental basis. **Definition 4.22**.: For sentences \(J\) and \(C\), define \(L_{J,C}\) as the number of standard colored immaculate tableaux of shape \(J\) that have colored descent composition \(C\). **Example 4.23**.: Let \(J=(ab,cb,b)\) and \(C=(a,cb,bb)\). The standard colored immaculate tableaux of shape \(J\) with colored descent composition \(C\) are Thus, \(L_{(ab,cb,b),(a,cb,bb)}=2\). **Proposition 4.24**.: _For sentences \(J\) and \(B\),_ \[K_{J,B}=\sum_{C\succeq B}L_{J,C}.\] Proof.: Recall that \(K_{J,B}\) is the number of colored immaculate tableaux of shape \(J\) and type \(B\). We want to show that \(K_{J,B}\) is equal to the sum of \(L_{J,C}\), the number of standard colored immaculate tableaux of shape \(J\) and descent composition \(C\), overall \(C\succeq B\). For this proof, let \(\mathcal{T}\) be the set of all colored immaculate tableaux of shape \(J\) and type \(B\), and let \(\mathcal{U}\) be the set of standard colored immaculate tableaux \(U\) of shape \(J\) and descent composition \(C\) with \(C\succeq B\). We need to show that the map \(std:\mathcal{T}\rightarrow\mathcal{U}\), where \(std\) is the standardization map from Definition 4.9, is a bijection on these sets. By Proposition 4.13, colored immaculate tableaux with the same shape and type must have different standardizations or they would be the same tableau, thus our map is injective. By Proposition 4.14, the map is surjective. This makes our map a bijection and so \(\mathcal{T}\) and \(\mathcal{U}\) have the same size. Thus, we have shown that \(K_{J,B}=\sum_{C\succeq B}L_{J,C}\). **Theorem 4.25**.: _For a sentence \(J\), the colored dual immaculate function \(\mathfrak{S}_{J}^{*}\) expands positively into the fundamental basis as_ \[\mathfrak{S}_{J}^{*}=\sum_{C}L_{J,C}F_{C},\] _where the sum runs over sentences \(C\) such that \(|C|=|J|\)._ Proof.: Let \(J\) be a sentence. First, observe that applying the Mobius inversion to Proposition 4.24 yields \[L_{J,C}=\sum_{C\preceq B}(-1)^{\ell(C)-\ell(B)}K_{J,B}.\] Then, by Theorem 4.20 and Equation (4), \[\mathfrak{S}_{J}^{*}=\sum_{B}K_{J,B}M_{B}=\sum_{B}K_{J,B}\left(\sum_{C\preceq B }(-1)^{\ell(C)-\ell(B)}F_{C}\right)=\sum_{C}\left(\sum_{C\preceq B}(-1)^{ \ell(C)-\ell(B)}K_{J,B}\right)F_{C}=\sum_{C}L_{J,C}F_{C}.\] This expansion can be written as a sum over all standard colored immaculate tableaux of a certain shape instead of using coefficients to count tableaux based on their colored descent compositions. **Corollary 4.26**.: _For a sentence \(J\),_ \[\mathfrak{S}_{J}^{*}=\sum_{U}F_{co_{A}(U)},\] _where the sum runs over all standard colored immaculate tableaux \(U\) of shape \(J\)._ ### The colored immaculate descent graph We define the colored immaculate descent graph to directly determine the expansion of the colored fundamental functions into the colored dual immaculate basis. Additionally, our result specializes to a new combinatorial expansion of the fundamental quasisymmetric functions into the dual immaculate functions. **Definition 4.27**.: Define the _colored immaculate descent graph_, denoted \(\mathfrak{D}_{A}^{n}\), as an edge-weighted directed simple graph such that the vertex set is the set of sentences in \(A\) of size \(n\), and there is a directed edge from \(I\) to \(J\) if there exists a standard colored immaculate tableau of shape \(I\) with colored descent composition \(J\). The edge from \(I\) to \(J\) is weighted with the coefficient \(L_{I,J}\) from Definition 4.22. For a path \(\mathcal{P}\) in \(\mathfrak{D}_{A}^{n}\), let \(\mathit{prod}(\mathcal{P})\) denote the product of the edge-weights in \(\mathcal{P}\) and let \(\mathit{prod}(\emptyset)=1\). **Example 4.28**.: In Figure 1 we illustrate the subgraph of \(\mathfrak{D}_{\{a,b,c\}}^{5}\) with top vertex \((ab,cbb)\). In this subgraph, all edges are weighted \(1\) because \(L_{I,J}=1\) for each \(I\) and \(J\) (and thus \(\mathit{prod}(\mathcal{P})=1\) for all paths) but, for example, the edge from \((ab,cb,b)\) to \((a,cb,bb)\) would be \(2\) since \(L_{(ab,cb,b)(a,cb,bb)}=2\) as in Example 4.23. The element \((ab,cbb)\in\mathfrak{D}_{\{a,b,c\}}^{5}\) has edges going down to elements \((a,cbb,b)\), \((a,cb,bb)\), and \((a,cbbb)\) because these sentences represent possible descent compositions (with the exception of \((ab,cbb)\) itself) of colored standard immaculate tableaux of shape \((ab,cbb)\) as shown below. Figure 1. A subgraph of \(\mathfrak{D}_{\{a,b,c\}}^{5}\). We say a sentence \(K\) is _reachable_ from a sentence \(I\) if there is a directed path from \(I\) to \(K\). This includes the empty path, meaning that \(I\) is reachable from itself. **Theorem 4.29**.: _For a sentence \(I\) of size \(n\), the colored fundamental functions expand into the colored dual immaculate basis as_ \[F_{I}=\sum_{K}L_{I,K}^{-1}\mathfrak{S}_{K}^{*}\qquad\text{with coefficients}\qquad L_{I,K}^{-1}=\sum_{\mathcal{P}}(-1)^{\ell(\mathcal{P})} prod(\mathcal{P}),\] _where the sums run over all sentences \(K\) reachable from \(I\) in \(\mathfrak{D}_{A}^{n}\) and directed paths \(\mathcal{P}\) from \(I\) to \(K\) in \(\mathfrak{D}_{A}^{n}\)._ Proof.: We proceed by induction on the length of the longest path starting at \(I\) in \(\mathfrak{D}_{A}^{n}\), denoted here with \(k\). If \(k=0\), there are no elements reachable from \(I\) so \(F_{I}=\mathfrak{S}_{I}^{*}\) which agrees with Theorem 4.25. Now for some positive integer \(k\), assume the statement is true for any path of length \(\leq k\). Consider a sentence \(I\) where the length of the longest path starting at \(I\) is \(k+1\). By Theorem 4.25, \[F_{I}=\mathfrak{S}_{I}^{*}-\sum_{J}L_{I,J}F_{J},\] where the sum runs over all sentences \(J\neq I\) such that \(|J|=|I|\). We only need to consider, however, sentences \(J\) that are descent compositions of a SCIT of shape \(I\) because otherwise \(L_{I,J}=0\). Since there is an edge from \(I\) to each of these \(J\)'s, the length of the longest path starting at any \(J\) is at most \(k\). Thus, by induction, \[F_{I}=\mathfrak{S}_{I}^{*}-\sum_{J}L_{I,J}\sum_{K}L_{J,K}^{-1}\mathfrak{S}_{K} ^{*},\] for all sentences \(K<J\) and \(L_{J,K}^{-1}=\sum_{\mathcal{P}}(-1)^{\ell(\mathcal{P})}L_{K_{1},K_{2}}\cdots L _{K_{j-1},K_{j}}\) for paths \(\mathcal{P}=\{K=K_{j}<K_{j-1}<\ldots<K_{1}=J\}\) from \(K\) to \(J\). Note that \[-\sum_{J}L_{I,J}\sum_{K}L_{J,K}^{-1}=\sum_{\mathcal{P}}(-1)^{\ell(\mathcal{P}) }L_{I,J}L_{K_{1},K_{2}}L_{K_{2},K_{3}}\cdots L_{K_{j-1},K_{j}}=L_{I,K}^{-1},\] where the sum runs over all paths \(\mathcal{P}=\{K=K_{j}<\ldots<K_{1}=J<I\}\) from \(K\) to \(I\). Then, \[F_{I}=\sum_{K}L_{I,K}^{-1}\mathfrak{S}_{K}^{*},\] summing over all sentences \(K<I\). **Example 4.30**.: The subgraph in Figure 1 yields the following expansion of \(F_{(ab,cbb)}\): \[F_{(ab,cbb)}=\mathfrak{S}_{(ab,cbb)}^{*}-\mathfrak{S}_{(a,cbb,b)}^{*}+ \mathfrak{S}_{(a,c,bbb)}^{*}-\mathfrak{S}_{(a,cbbb)}^{*}.\] Similarly, the (non-colored) immaculate descent graph \(\mathfrak{D}^{n}\) can be defined as the graph with a vertex set of compositions of size \(n\) where there is an edge from \(\alpha\) to \(\beta\) if there exists an s standard immaculate tableau of shape \(\alpha\) with descent composition \(\beta\). The edge from \(\alpha\) to \(\beta\) will be weighted with coefficient \(L_{\alpha,\beta}\). This leads to an analogous result that follows from the proof above. **Corollary 4.31**.: _For a composition \(\alpha\models n\), the fundamental quasisymmetric functions expand into the dual immaculate functions as_ \[F_{\alpha}=\sum_{\beta}L_{\alpha,\beta}^{-1}\mathfrak{S}_{\beta}^{*}\qquad \text{with coefficients}\qquad L_{\alpha,\beta}^{-1}=\sum_{\mathcal{P}}(-1)^{ \ell(\mathcal{P})} prod(\mathcal{P}),\] _where the sums runs over all \(\beta\) reachable from \(\alpha\) in \(\mathfrak{D}^{n}\) and over paths \(\mathcal{P}\) going from \(\alpha\) to \(\beta\) in \(\mathfrak{D}^{n}\)._ ## 5. A colored generalization of the immaculate functions in \(NSym_{A}\) A colored generalization of the immaculate basis can be defined by first introducing a colored version of non-commutative Bernstein creation operators. Various properties of these operators and extensions of our earlier results via duality lead to results on the colored immaculate functions. These notably include a right Pieri rule and an expansion of the colored immaculate functions into the colored ribbon functions. The process for constructing our generalization of the non-commutative Bernstein operators mirrors that done in [3] with some adjustments to account for the use of sentences in place of compositions. **Definition 5.1**.: For \(M\in QSym_{A}\), define the action of the linear operator \(M^{\perp}\) on \(H\in NSym_{A}\) by \(\langle M^{\perp}H,G\rangle=\langle H,MG\rangle\) for all \(G\in QSym_{A}\). We define the action of the linear operator \(M^{\underline{k}}\) on \(H\in NSym_{A}\) as \(\langle M^{\underline{k}}H,G\rangle=\langle H,GM\rangle\) for all \(G\in QSym_{A}\). Thus, for dual bases \(\{A_{I}\}_{I}\) of \(QSym_{A}\) and \(\{B_{I}\}_{I}\) of \(NSym_{A}\), we have \[M^{\perp}(H)=\sum_{I}\langle H,MA_{I}\rangle B_{I}\qquad\text{and}\qquad M^{ \underline{k}}(H)=\sum_{I}\langle H,A_{I}M\rangle B_{I}.\] These operators are dual to the left and right multiplication by \(M\) in \(QSym_{A}\). Note that the analogues to these operators in \(QSym\) are equivalent due to commutativity. **Proposition 5.2**.: _For sentences \(I=(w_{1},\dots,w_{k})\) and \(J=(v_{1},\dots,v_{h})\),_ \[M^{\underline{k}}_{I}(H_{J})=\sum_{K}H_{\widetilde{J/_{R}K}},\] _where the sum runs over all sentences \(K\) such that \(\tilde{K}=I\) and \(K\subseteq_{R}J\). Moreover, each \(\widetilde{J/_{R}K}\) appearing in this sum is equivalent to the shape of a colored composition diagram originally of shape \(J\) with boxes corresponding to each word in \(I\) uniquely removed from its righthand side such that each word \(w_{j}\) is removed from a single row strictly lower than the row from which \(w_{j+1}\) is removed._ Proof.: Let \(I=(w_{1},\dots,w_{k})\) and \(J=(v_{1},\dots,v_{h})\). We have that \[M^{\underline{k}}_{I}(H_{J})=\sum_{L}\langle H_{J},M_{L}M_{I}\rangle H_{L}= \sum_{L}\langle H_{J},\sum_{R}M_{R}\rangle H_{L}=\sum_{L}\sum_{R}\langle H_{J},M_{R}\rangle H_{L},\] where the sums run over all sentences \(L\) of size \(|J|-|I|\) and each summand \(R\) in \(L\mathrel{\mathop{\hbox to 12.0pt{\vbox to 0.0pt{\hrule height 0.4pt\hbox{ \vrule width 0.4pt height 6.0pt depth 0.0pt\hss} \hrule height 0.4pt}}}\limits^{Q}}I\), respectively. Note that each sentence \(R\) may occur multiple times in \(L\mathrel{\mathop{\hbox to 12.0pt{\vbox to 0.0pt{\hrule height 0.4pt\hbox{ \vrule width 0.4pt height 6.0pt depth 0.0pt\hss} \hrule height 0.4pt}}}\limits^{Q}}I\) and we account for the multiplicity in the summations. The sum \(\sum\limits_{R}\langle H_{J},M_{R}\rangle\) is equal to the number of times that \(J\) appears as a summand in \(L\mathrel{\mathop{\hbox to 12.0pt{\vbox to 0.0pt{\hrule height 0.4pt\hbox{ \vrule width 0.4pt height 6.0pt depth 0.0pt\hss} \hrule height 0.4pt}}}\limits^{Q}}I\). Recall that in \(L\mathrel{\mathop{\hbox to 12.0pt{\vbox to 0.0pt{\hrule height 0.4pt\hbox{ \vrule width 0.4pt height 6.0pt depth 0.0pt\hss} \hrule height 0.4pt}}}\limits^{L}}I\), each summand is a sentence made up of words from \(L\), words from \(I\), and concatenated pairs of words from \(L\) and \(I\) (in that order) where all words from \(L\) and all words from \(I\) are present and in the same relative order, respectively. For each time \(J\) is a summand in \(L\mathrel{\mathop{\hbox to 12.0pt{\vbox to 0.0pt{\hrule height 0.4pt\hbox{ \vrule width 0.4pt height 6.0pt depth 0.0pt\hss} \hrule height 0.4pt}}}\limits^{Q}}I\) there exists a unique weak sentence \(K^{\prime}\) such that \(\tilde{K^{\prime}}=I\) and \(\widetilde{J/_{R}K^{\prime}}=L\). Further, the set of all \(K^{\prime}\) obtained for \(J\) in \(L\mathrel{\mathop{\hbox to 12.0pt{\vbox to 0.0pt{\hrule height 0.4pt\hbox{ \vrule width 0.4pt height 6.0pt depth 0.0pt\hss} \hrule height 0.4pt}}}\limits^{Q}}I\) considered across every possible \(L\) is simply the set of weak sentences \(K\) such that \(\tilde{K}=I\) and \(K\subseteq_{R}J\), and so we can rewrite \[M^{\underline{k}}_{I}(H_{J})=\sum_{L}\sum_{K^{\prime}}H_{L}=\sum_{K}H_{ \widetilde{J/_{R}K}},\] where the sums run over all sentences \(L\) of size \(|J|-|I|\), all weak sentences \(K^{\prime}\) such that \(\tilde{K^{\prime}}=I\) and \(\widetilde{J/_{R}K^{\prime}}=L\), and all weak sentences \(K\) such that \(\tilde{K}=I\) and \(K\subseteq_{R}J\), respectively. Visualizing sentences as colored composition diagrams, we see that each weak sentence \(K\) can be viewed as a unique set of boxes being removed from the right-hand side of the colored composition diagram of \(J\) where the first word in \(K\) (including empty words) is removed from the first row of \(J\) and so on. Thus, the set of indices \(\widetilde{J/_{R}K}\) of \(H\) in the sum can also be viewed as the set of colored composition diagrams resulting from all possible ways of removing boxes corresponding to \(I\) from a colored composition diagram of shape \(J\) then moving rows up to fill empty rows, where each \(w_{j}\) in \(I\) is removed from a single row strictly lower than the single row from which \(w_{j+1}\) in \(I\) is removed. **Example 5.3**.: In this example we show the action of \(M^{\underline{k}}_{c,ab}\) on colored diagrams: \[M^{\underline{k}}_{(c,ab)}(H_{(ac,bc,ab,cab)})=H_{(a,bc,cab)}+H_{(a,bc,ab,c)}+H_{ (ac,b,cab)}+H_{(ac,b,ab,c)}.\] Next, we prove various properties of the \(M^{\underline{k}}\) operator that will be key in constructing creation operators for the colored immaculate basis. **Lemma 5.4**.: _Let \(J,K\) be sentences, \(A_{I}\in QSym_{A}\) and \(f,H\in NSym_{A}\). Then,_ \[\langle f\otimes H,\Delta(A_{I})(M_{J}\otimes M_{K})\rangle=\langle M^{ \underline{k}}(f)\otimes M_{K}^{\underline{k}}(H),\Delta(A_{I})\rangle.\] Proof.: Let \(a,b\in NSym_{A}\) and \(c,d\in QSym_{A}\). The inner product on \(NSym_{A}\times QSym_{A}\) extends to \(NSym_{A}\otimes NSym_{A}\times QSym_{A}\otimes Qsym_{A}\) as \[\langle\cdot,\cdot\rangle:NSym_{A}\otimes NSym_{A}\times QSym_{A}\otimes Qsym _{A}\rightarrow\mathbb{Q}\quad\text{where}\quad\langle a\otimes b,c\otimes d \rangle\rightarrow\langle a,c\rangle\langle b,d\rangle\] In Sweedler notation, \(\Delta(A_{I})=\sum_{i}A^{(i)}\otimes A_{(i)}\). Thus, we write \[\langle f\otimes H,\Delta(A_{I})(M_{J}\otimes M_{K})\rangle =\left\langle f\otimes H,\sum_{i}A^{(i)}M_{J}\otimes A_{(i)}M_{ K}\right\rangle=\sum_{i}\langle f\otimes H,A^{(i)}M_{J}\otimes A_{(i)}M_{K}\rangle\] \[=\sum_{i}\langle f,A^{(i)}M_{J}\rangle\langle H,A_{(i)}M_{K}\rangle\] \[=\sum_{i}\langle M_{J}^{\underline{k}}(f),A^{(i)}\rangle\langle M _{K}^{\underline{k}}(H),A_{(i)}\rangle\quad\text{by Definition \ref{def:svp}}\] \[=\sum_{i}\langle M_{J}^{\underline{k}}(f)\otimes M_{K}^{ \underline{k}}(H),A^{(i)}\otimes A_{(i)}\rangle=\left\langle M_{J}^{\underline {k}}(f)\otimes M_{K}^{\underline{k}}(H),\sum_{i}A^{(i)}\otimes A_{(i)}\right\rangle\] \[=\langle M^{\underline{k}}(f)\otimes M_{K}^{\underline{k}}(H), \Delta(A_{I})\rangle.\qed\] **Proposition 5.5**.: _For a sentence \(Q=(q_{1},\ldots,q_{i})\) and \(f,H\in NSym_{A}\),_ \[M_{Q}^{\underline{k}}(fH)=\sum_{0\leq j\leq i}M_{q_{1},\ldots,q_{i}}^{ \underline{k}}(f)M_{q_{j+1},\ldots,q_{i}}^{\underline{k}}(H).\] _In particular, for a word \(w\),_ \[M_{Q}^{\underline{k}}(fH_{w})=M_{Q}^{\underline{k}}(f)H_{w}+M_{(q_{1},\ldots, q_{i-1})}^{\underline{k}}(f)M_{q_{i}}^{\underline{k}}(H_{w}).\] Proof.: Let \(\{A_{I}\}\) and \(\{B_{I}\}\) be dual bases of \(QSym_{A}\) and \(NSym_{A}\) respectively, and let \(Q=(q_{1},\ldots,q_{i})\). Then, \[M_{Q}^{\underline{k}}(fH) =\sum_{I}\langle fH,A_{I}M_{Q}\rangle B_{I}\quad\text{by Definition \ref{def:svp}}\] \[=\sum_{I}\langle f\otimes H,\Delta(A_{I}M_{Q})\rangle B_{I}=\sum_ {I}\langle f\otimes H,\Delta(A_{I})\Delta(M_{Q})\rangle B_{I}\quad\text{by Definition \ref{def:svp}}\] \[=\sum_{I}\sum_{Q=J\cdot K}\langle f\otimes H,\Delta(A_{I})(M_{J} \otimes M_{K})\rangle B_{I}\quad\text{by Equation \eqref{def:svp}}\] \[=\sum_{I}\sum_{Q=J\cdot K}\langle M_{J}^{\underline{k}}(f) \otimes M_{K}^{\underline{k}}(H),\Delta(A_{I})\rangle B_{I}\quad\text{by Lemma \ref{def:svp}}\] \[=\sum_{I}\sum_{Q=J\cdot K}\langle M_{J}^{\underline{k}}(f)M_{K}^ {\underline{k}}(H),A_{I}\rangle B_{I}\quad\text{by Definition \ref{def:svp}}\] \[=\sum_{I}\langle\sum_{Q=J\cdot K}M_{J}^{\underline{k}}(f)M_{K}^ {\underline{k}}(H),A_{I}\rangle B_{I}=\sum_{Q=J\cdot K}M_{J}^{\underline{k}}(f) M_{K}^{\underline{k}}(H)\quad\text{by Definition \ref{def:svp}}\] \[=\sum_{j=0}^{i}M_{(q_{1},\ldots,q_{j})}^{\underline{k}}(f)M_{(q_{j +1},\ldots,q_{i})}^{\underline{k}}(H).\] In the case of \(H=H_{w}\), the term \(M_{q_{j+1},\ldots,q_{i}}^{\underline{k}}(H_{w})\) is \(0\) whenever \(i-(j+1)>0\) because boxes corresponding to \(q_{j},\ldots,q_{i}\) must each be removed from separate rows but \(w\) has only one row. Thus, the equation simplifies as \[M_{Q}^{\underline{k}}(fH_{w})=M_{Q}^{\underline{k}}(f)H_{w}+M_{q_{1},\ldots,q_{i- 1}}^{\underline{k}}(f)M_{q_{i}}^{\underline{k}}(H_{w}).\qed\] **Definition 5.6**.: For a word \(v\), the _colored non-commutative Bernstein operator_\(\mathbb{B}_{v}\) is defined to be \[\mathbb{B}_{v}=\sum_{u}\sum_{w(Q^{r})=u}(-1)^{i}H_{v\cdot u}\left(\sum_{Q\preceq S }M_{S}^{\frac{\mathbf{k}}{\mathbf{j}}}\right),\] where the sums run over all words \(u\), all sentences \(Q=(q_{1},\ldots,q_{i})\) such that \(q_{i}\cdot\ldots\cdot q_{1}=u\), and all sentences \(S\) that are coarsenings of \(Q\). Notice that, by the definition of \(M^{\mathbf{\underline{k}}}\), the only values of \(u\) that could yield a nonzero summand in \(\mathbb{B}_{v}(H_{I})\) for a sentence \(I\) are those for which there is some permutation of the letters in \(u\) that yields a subword of \(w(I)\). Thus, this sum always has a finite number of terms. **Definition 5.7**.: For a sentence \(J=(v_{1},\ldots,v_{h})\), we define the _colored immaculate function_\(\mathfrak{S}_{J}\) as \[\mathfrak{S}_{J}=\mathbb{B}_{v_{1}}\mathbb{B}_{v_{2}}\ldots\mathbb{B}_{v_{h}}( 1).\] **Example 5.8**.: The colored immaculate functions \(\mathfrak{S}_{def}\) and \(\mathfrak{S}_{abc,def}\) are obtained using creation operators as follows: \[\mathfrak{S}_{(def)} =\mathbb{B}_{def}(1)=\sum_{u}\sum_{w(Q^{r})=u}(-1)^{i}H_{(def\cdot u )}\left(\sum_{Q\preceq S}M_{S}^{\frac{\mathbf{k}}{\mathbf{j}}}(1)\right)=(-1) ^{0}H_{(def)}M_{\emptyset}^{\mathbf{\underline{k}}}(1)=H_{(def)}.\] \[\mathfrak{S}_{(abc,def)} =\mathbb{B}_{abc}(\mathfrak{S}_{(def)})=\mathbb{B}_{abc}(H_{( def)})=\sum_{u}\sum_{w(Q^{r})=u}(-1)^{i}H_{(abc\cdot u)}\left(\sum_{Q\preceq S }M_{S}^{\frac{\mathbf{k}}{\mathbf{j}}}(H_{(def)})\right)\] \[=(-1)^{0}H_{(abc)}M_{\emptyset}^{\mathbf{\underline{k}}}(H_{(def) })+(-1)^{1}H_{(abcf)}M_{(f)}^{\mathbf{\underline{k}}}(H_{(def)})+(-1)^{1}H_{ (abcef)}M_{(ef)}^{\mathbf{\underline{k}}}(H_{(def)})\] \[\quad+(-1)^{2}H_{(abcef)}M_{(ef)}^{\mathbf{\underline{k}}}(H_{( def)})+(-1)^{1}H_{(abcdef)}M_{(def)}^{\mathbf{\underline{k}}}(H_{(def)})+(-1)^{2}H_{ (abcefd)}M_{(def)}^{\mathbf{\underline{k}}}(H_{(def)})\] \[\quad+(-1)^{2}H_{(abcfed)}M_{(def)}^{\mathbf{\underline{k}}}(H_{( def)})+(-1)^{3}H_{(abcefed)}M_{(def)}^{\mathbf{\underline{k}}}(H_{(def)})\] \[=H_{(abc,def)}-H_{(abcf,de)}-H_{(abcef,d)}+H_{(abcefe,d)}-H_{(abcdef )}+H_{(abcefd)}+H_{(abcfe)}-H_{(abdef)}.\] To get the term \(H_{(abcfe,d)}\), for example, we look at \(u=fe\). The possible values of \(Q\) for this \(u\) are \(Q=(fe)\) and \(Q=(e,f)\), meaning the possible \(S\) values are \(S=(fe)\), \(S=(e,f)\), and \(S=(ef)\). Observe that \(M_{fe}^{\mathbf{\underline{k}}}(H_{(def)})\) and \(M_{e,f}^{\mathbf{\underline{k}}}(H_{(def)})\) are both zero because \(S\) is not right-contained in \(def\). Thus, the only remaining term for these values is \(S=(ef)\) for which \(M_{ef}^{\mathbf{\underline{k}}}(H_{(def)})=H_{d}\). Thus the term of the sum given by \(u=fe\), \(Q=(e,f)\), and \(S=ef\) is \((-1)^{2}H_{(abcefe,d)}\), which is also the only term for \(u=fe\). Many values of \(u\) will yield entirely zero terms. Before proving that this basis is indeed analogous to the immaculate functions in \(NSym\), we must prove that it is dual to the colored dual immaculate basis. The following property of the colored non-commutative Bernstein operators leads to a right Pieri rule which illuminates the structure of the colored immaculate functions to this end. **Proposition 5.9**.: _Let \(w=a_{1}\ldots a_{k}\) and \(f,H\in NSym_{A}\), then_ \[\mathbb{B}_{v}(f)H_{w}=\sum_{0\leq j\leq k}\mathbb{B}_{v\cdot a_{j+1}\ldots a_{ k}}(fH_{a_{1}\ldots a_{j}}).\] Proof.: Given a sentence \(Q=(q_{1},\ldots,q_{i})\), we write \(Q^{\prime}=(q_{1},\ldots,q_{i-1})\). Let \(f\in NSym_{A}\) and let \(v\) and \(w=a_{1}\ldots a_{k}\) be words. Then, \[\mathbb{B}_{v}(fH_{w}) =\sum_{u}\sum_{w(Q^{r})=u}(-1)^{i}H_{v\cdot u}\left(\sum_{Q\preceq S }M_{S}^{\mathbf{\underline{k}}}(fH_{w})\right)\text{ by Definition \ref{ We want to consider the cases in which \(M^{\underline{k}}_{s_{t}}(H_{w})\) is non-zero. This only happens whenever \(s_{t}\subseteq_{R}w\) because in our combinatorial interpretation, we visualize \(M^{\underline{k}}_{s_{t}}\) as removing \(s_{t}\) from the righthand side of \(w=a_{1}\cdots a_{k}\) to get \(H_{a_{1}\ldots a_{k}}\) for some \(h\leq k\). Note that because \(Q\preceq S\) and \(q_{i}\) and \(s_{t}\) are the final words in \(Q\) and \(S\) respectively, \(q_{i}\subseteq_{R}s_{t}\). It follows that \(q_{i}\subseteq_{R}w\) and thus \(q_{i}=a_{j+1}\cdots a_{k}\) for a non-negative integer \(j<k\). Recalling that \(u=q_{i}\cdots q_{1}\), let \(u^{\prime}=q_{i-1}\cdots q_{1}\) so that we can write \(u=a_{j+1}\cdots a_{k}\cdot u^{\prime}\). Rewriting the last equation in terms of \(u^{\prime}\) and \(Q^{\prime}\) yields \[\mathbb{B}_{v}(fH_{w})=\mathbb{B}_{v}(f)H_{w}+\sum_{0\leq j<k}\sum_{u^{\prime }}\sum_{Q^{\prime}}(-1)^{i}H_{v\cdot a_{j+1}\ldots a_{k}u^{\prime}}\left(\sum_ {Q^{\prime}\cdot(a_{j+1}\ldots a_{k})\preceq S}M^{\underline{k}}_{S^{\prime} }(f)M^{\underline{k}}_{s_{t}}(H_{a_{1}\ldots a_{j}})\right).\] Next, the sum can be split into two parts by separating out the cases where \(q_{i}=s_{t}\) and those where \(q_{i}\neq s_{t}\). If \(q_{i}=s_{t}\) for \(q_{i}=a_{j+1}\ldots a_{k}\) then \(M^{\underline{k}}_{s_{t}}(H_{w})=M^{\underline{k}}_{a_{j+1}\ldots a_{k}}(H_{w} )=H_{a_{1}\cdots a_{j}}\). Otherwise, there must exist a non-negative integer \(\iota<i-1\) such that \(s_{t}=q_{i+1}\cdots q_{i-1}q_{i}\). We can rearrange the part of the sum by substituting \(s_{t}\) with \(q_{\iota+1}\cdots q_{i}\) and summing over the possible \(\iota\). Then, \[\mathbb{B}_{v}(fH_{w})=\mathbb{B}_{v}(f)H_{w}-\sum_{0\leq j<k} \sum_{u^{\prime}}\sum_{Q^{\prime}}\left(\,(-1)^{i-1}H_{va_{j+1}\ldots a_{k}u^{ \prime}}\Bigg{[}\sum_{Q^{\prime}\preceq S^{\prime}}M^{\underline{k}}_{S^{ \prime}}(f)H_{a_{1}\ldots a_{j}}\Bigg{]}\right.\] \[\left.+\Bigg{[}\sum_{0\leq\iota<i-1}\sum_{(q_{1},\ldots,q_{i}) \preceq S^{\prime}}M^{\underline{k}}_{S^{\prime}}(f)M^{\underline{k}}_{q_{ \iota+1}\cdots q_{i-1}a_{j+1}\cdots a_{k}}(H_{w})\Bigg{]}\right).\] Again thinking of the combinatorial visualization for the \(M^{\underline{k}}\) operator, observe that \(M^{\underline{k}}_{q_{i+1}\cdots q_{i-1}a_{j+1}\cdots a_{k}}(H_{w})=M^{ \underline{k}}_{q_{i+1}\cdots q_{i-1}}(H_{a_{1}\cdots a_{j}})\) and so \[\mathbb{B}_{v}(fH_{w})=\mathbb{B}_{v}(f)H_{w}-\sum_{0\leq j<k} \sum_{u^{\prime}}\sum_{Q^{\prime}}\Bigg{(}\,(-1)^{i-1}H_{va_{j+1}\ldots a_{k}u ^{\prime}}\Bigg{[}\sum_{Q^{\prime}\preceq S^{\prime}}M^{\underline{k}}_{S^{ \prime}}(f)H_{a_{1}\ldots a_{j}}\Bigg{]}\] \[\left.+\Bigg{[}\sum_{0\leq\iota<i-1}\sum_{(q_{1},\ldots,q_{i}) \preceq S^{\prime}}M^{\underline{k}}_{S^{\prime}}(f)M^{\underline{k}}_{q_{ \iota+1}\cdots q_{i-1}}(H_{a_{1}\cdots a_{j}})\Bigg{]}\right).\] Next, rename every \(S^{\prime}\) to \(R=(r_{1},\ldots,r_{\tau})\) in the first section of the sum. In the second section, rename \(S^{\prime}\) to \(R^{\prime}=(r_{1},\ldots,r_{\tau-1})\) and let \(q_{\iota+1}\cdots q_{i-1}=r_{\tau}\). \[\mathbb{B}_{v}(fH_{w})=\mathbb{B}_{v}(f)H_{w}-\sum_{0\leq j<k} \sum_{u^{\prime}}\sum_{Q^{\prime}}\Bigg{(}\,(-1)^{i-1}H_{va_{j+1}\ldots a_{k}u ^{\prime}}\Bigg{[}\sum_{Q^{\prime}\preceq R}M^{\underline{k}}_{R}(f)H_{a_{1} \ldots a_{j}}\Bigg{]}\] \[\left.+\Bigg{[}\sum_{0\leq\iota<i-1}\sum_{(q_{1},\ldots,q_{i}) \preceq R^{\prime}}M^{\underline{k}}_{R^{\prime}}(f)M^{\underline{k}}_{r_{ \tau}}(H_{a_{1}\cdots a_{j}})\Bigg{]}\right).\] In the second part of the sum, notice that considering \(R^{\prime}\cdot r_{\tau}\) where \(R^{\prime}=(q_{1},\ldots,q_{\iota})\) and \(r_{\tau}=q_{\iota+1}\cdots q_{i-1}\) for \(1\leq\iota\leq i-1\) is equivalent to considering \(R^{\prime}\cdot r_{\tau}=R\succeq(q_{1},\ldots,q_{i-1})=Q\). Then, \[\mathbb{B}_{v}(fH_{w})=\mathbb{B}_{v}(f)H_{w}-\sum_{0\leq j<k} \sum_{u^{\prime}}\sum_{Q^{\prime}}\Bigg{(}\,(-1)^{i-1}H_{va_{j+1}\ldots a_{k}u ^{\prime}}\Bigg{[}\sum_{Q^{\prime}\preceq R}M^{\underline{k}}_{R}(f)H_{a_{1} \ldots a_{j}}\Bigg{]}\] \[\left.+\Bigg{[}\sum_{Q^{\prime}\preceq R}M^{\underline{k}}_{R^{ \prime}}(f)M^{\underline{k}}_{r_{\tau}}(H_{a_{1}\cdots a_{j}})\Bigg{]}\right).\] Now in both parts of the sum, we are looking at sentences \(R\) such that \(Q^{\prime}\preceq R\), and combining them we get \[\mathbb{B}_{v}(fH_{w}) =\mathbb{B}_{v}(f)H_{w}-\sum_{0\leq j<k}\,\sum_{u^{\prime}}\sum_{Q^ {\prime}}(-1)^{i-1}H_{va_{j+1}\ldots a_{k}u^{\prime}}\Bigg{(}\sum_{Q^{\prime} \preceq R}\left[M_{R}^{\underline{k}}(f)H_{a_{1}\cdots a_{j}}+M_{R^{\prime}}^{ \underline{k}}(f)M_{r_{r}}^{\underline{k}}(H_{a_{1}\cdots a_{j}})\right]\Bigg{)}\] \[=\mathbb{B}_{v}(f)H_{w}-\sum_{0\leq j<k}\,\sum_{u^{\prime}}\sum_{Q ^{\prime}}(-1)^{i-1}H_{va_{j+1}\ldots a_{k}u^{\prime}}\left(\sum_{Q^{\prime} \preceq R}M_{R}^{\underline{k}}(fH_{a_{1}\ldots a_{j}})\right)\text{by Proposition \ref{prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:propprop:prop:prop:prop:prop:prop:prop:propprop:prop:prop:prop:prop:propprop:prop:prop:prop:prop:prop:propprop:prop:prop:propprop:prop:prop:prop:prop:propprop:prop:prop:prop:prop:propprop:prop:prop:prop:propprop:prop:prop:prop:prop:prop:prop:propprop:prop:prop:prop:prop:propprop:prop:prop:prop:prop:propprop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:propprop:prop:prop:prop:propprop:prop:prop:prop:prop:prop:prop:prop:prop:prop:prop:propprop:prop:prop:prop:prop:prop:prop:prop:prop:propprop:prop:prop:prop:propprop:prop:prop Proof.: Let \(C=(t_{1},\ldots,t_{k})\) and \(C^{\prime}=(t_{1},\ldots,t_{k-1})\). First we claim that \(K_{J,C}=\sum_{G\subset_{t_{k}}J}K_{G,C^{\prime}}\) where the sum runs over sentences \(G\) such that \(G\subset_{t_{k}}J\). For any colored immaculate tableau of shape \(J\) and type \(C\), we can remove the boxes of \(T\) filled with the number k, all of which will be on the right-hand side of \(T\), to obtain a colored immaculate tableau of shape \(G\) with type \(C^{\prime}\). Thus the sum of \(K_{G,C^{\prime}}\) for all the \(G\subset_{t_{k}}J\) gives \(K_{J,C}\). With this fact, we proceed by induction on the length of \(C\). \[H_{C} =H_{C^{\prime}}H_{t_{k}}=\left(\sum_{G}K_{G,C^{\prime}}\mathfrak{ S}_{G}\right)H_{t_{k}}\quad\text{by induction}\] \[=\sum_{G}K_{G,C^{\prime}}\mathfrak{S}_{G}H_{t_{k}}=\sum_{G}K_{G,C ^{\prime}}\sum_{G\subset_{t_{k}}J}\mathfrak{S}_{J}\quad\text{ by Theorem \ref{thm:eq:sum_G}}\] \[=\sum_{J}\left(\sum_{G\subset_{t_{k}}J}K_{G,C^{\prime}}\right) \mathfrak{S}_{J}\quad\text{by rearranging the sums}\] \[=\sum_{J}K_{J,C}\mathfrak{S}_{J},\] where the final two sums run over all sentences \(J\) such that there exists a colored immaculate tableau of shape \(J\) and type \(C\). If there is no such CIT of shape \(J\) and type \(C\) then \(K_{J,C}=0\), and it is equivalent to taking this sum over all sentences \(J\) such that \(|J|=|C|\). Note that this unique expansion satisfies Proposition 2.14 and in fact verifies the duality of the colored immaculate and colored dual immaculate bases. **Corollary 5.13**.: _The colored immaculate basis is dual to the colored dual immaculate basis._ With this duality verified, we can prove that the colored immaculate functions are analogous to the original non-commutative Bernstein operators because they are isomorphic under \(\upsilon\) in the case of a unary alphabet \(A\). **Proposition 5.14**.: _Let \(G\in NSym_{A}\) and \(F\in QSym_{A}\). If \(A=\{a\}\), then \(\langle G,F\rangle=\langle\upsilon(G),\upsilon(F)\rangle.\)_ Proof.: Let \(A=\{a\}\), and let \(G=\sum_{J}c_{J}H_{J}\) and \(F=\sum_{I}b_{I}M_{I}\) where the sums run over all sentences \(I,J\), respectively. Then, \[\langle G,F\rangle=\left\langle\sum_{J}c_{J}H_{J},\sum_{I}b_{I}M_{I}\right \rangle=\sum_{I,J}c_{J}b_{I}\left\langle H_{J},M_{I}\right\rangle=\sum_{I}c_ {I}b_{I}.\] Next, for \(\upsilon(G)\in NSym\) and \(\upsilon(F)\in QSym\), we have that \[\langle\upsilon(G),\upsilon(F)\rangle=\left\langle\sum_{J}c_{J}\upsilon(H_{J}),\sum_{I}b_{I}\upsilon(M_{I})\right\rangle=\left\langle\sum_{J}c_{J}H_{w\ell( J)},\sum_{I}b_{I}M_{w\ell(I)}\right\rangle=\sum_{I,J}c_{J}b_{I}\langle H_{w\ell( J)},M_{w\ell(I)}\rangle.\] The inner product \(\langle H_{w\ell(J)},M_{w\ell(I)}\rangle\) is zero unless \(w\ell(I)=w\ell(J)\) which happens exactly when \(I=J\) because the alphabet \(A\) is made up of only one color. In other words, there is exactly one sentence \(I\) such that \(w\ell(I)=\alpha\) for each composition \(\alpha\) in this case. Thus, \[\langle\upsilon(G),\upsilon(F)\rangle=\sum_{I}c_{I}b_{I}=\langle G,F\rangle.\qed\] **Proposition 5.15**.: _Let \(A=\{a\}\), and let \(I\) be a sentence. Then, \(\upsilon(\mathfrak{S}_{I})=\mathfrak{S}_{w\ell(I)}.\) Moreover, \(\{\mathfrak{S}_{I}\}_{I}\) in \(NSym_{A}\) is analogous to \(\{\mathfrak{S}_{\alpha}\}_{\alpha}\) in \(NSym\)._ Proof.: Let \(A=\{a\}\) and let \(I\) and \(J\) be sentences. By Proposition 5.14, \[\langle\mathfrak{S}_{I},\mathfrak{S}_{J}^{*}\rangle=\langle\upsilon(\mathfrak{ S}_{I}),\upsilon(\mathfrak{S}_{J}^{*})\rangle=\langle\upsilon(\mathfrak{S}_{I}), \mathfrak{S}_{w\ell(J)}^{*}\rangle.\] Because \(A\) is unary, \(I=J\) if and only if \(w\ell(I)=w\ell(J)\) and thus \(\delta_{I,J}=\delta_{w\ell(I),w\ell(J)}\). As a result, \[\langle\mathfrak{S}_{I},\mathfrak{S}_{J}^{*}\rangle=\langle\mathfrak{S}_{w\ell (I)},\mathfrak{S}_{w\ell(J)}^{*}\rangle=\langle\upsilon(\mathfrak{S}_{I}), \mathfrak{S}_{w\ell(J)}^{*}\rangle,\] for all sentences \(I\) and \(J\). Therefore, \(\upsilon(\mathfrak{S}_{I})=\mathfrak{S}_{w\ell(I)}\). The expansion of the colored ribbon functions into the colored immaculate functions now follows from the application of Proposition 2.14 to Theorem 4.25. **Corollary 5.16**.: _For a sentence \(C\), the colored ribbon non-commutative symmetric functions expand positively into the colored immaculate functions as_ \[R_{C}=\sum_{J}L_{J,C}\mathfrak{S}_{J},\] _where the sum runs over all sentences \(J\) such that \(|J|=|C|\)._ This corollary allows us to define the expansion of the colored immaculate function indexed by a sentence of the form \((a_{1},\dots,a_{k})\) in terms of the \(\{H_{I}\}\) basis. **Proposition 5.17**.: _For a sentence \((a_{1},\dots a_{k})\),_ \[\mathfrak{S}_{(a_{1},\dots,a_{k})}=\sum_{J\preceq(a_{1},\dots,a_{k})}(-1)^{k- \ell(J)}H_{J}.\] Proof.: Let \(C=(a_{1},\dots,a_{k})\), and notice that \(L_{J,(a_{1},\dots,a_{k})}=0\) unless \(J=(a_{1},\dots,a_{k})\) in which case \(L_{(a_{1},\dots,a_{k}),(a_{1},\dots,a_{k})}=1\). Then by Corollary 5.16, we have \(\mathfrak{S}_{(a_{1},\dots,a_{k})}=R_{(a_{1},\dots,a_{k})}\). Then, expanding \(R_{(a_{1},\dots,a_{k})}\) into the \(\{H_{I}\}\) basis yields the desired formula. Applying Proposition 2.14 to Theorem 4.29 also yields an expansion of the colored immaculate functions into the colored ribbon basis using the colored immaculate descent graph of Definition 4.27. **Corollary 5.18**.: _For a sentence \(J\), the colored immaculate functions expand into colored ribbon functions as_ \[\mathfrak{S}_{J}=\sum_{I}L_{I,J}^{-1}R_{I}\qquad\text{with coefficients}\qquad L_{I,J}^{-1}=\sum_{\mathcal{P}}(-1)^{\ell(\mathcal{P})} prod(\mathcal{P}),\] _where the sums run over all sentences \(I\) above \(J\) in \(\mathfrak{D}_{A}^{n}\) and over directed paths \(\mathcal{P}\) from \(I\) to \(J\) in \(\mathfrak{D}_{A}^{n}\)._ **Example 5.19**.: The colored immaculate function \(\mathfrak{S}_{(a,cb,b)}\) expands in to the colored ribbon functions as \[\mathfrak{S}_{(a,cb,b)}=R_{(a,cb,b)}-R_{(ab,cb)}+R_{(ab,cb)}-R_{(ab,c,b)}.\] The term \(R_{(abb,c)}\), for example, has a coefficient of \(1\) because the only path from \((abb,c)\) to \((a,cb,b)\) is \[(abb,c)\xrightarrow{1}(ab,cb)\xrightarrow{1}(a,cb,b).\] Proposition 2.14 can be applied to Corollary 4.31 to get a result in \(NSym\) analogous to Corollary 5.18. **Corollary 5.20**.: _For a composition \(\beta\models n\), the immaculate functions expand into the ribbon functions as_ \[\mathfrak{S}_{\beta}=\sum_{\alpha|=n}L_{\alpha,\beta}^{-1}R_{\alpha}\qquad \text{with coefficients}\qquad L_{\alpha,\beta}^{-1}=\sum_{\mathcal{P}}(-1)^{ \ell(\mathcal{P})}prod(\mathcal{P}),\] _where the sums run over all \(\alpha\) above \(\beta\) in \(\mathfrak{D}^{n}\) and over directed paths \(\mathcal{P}\) from \(\alpha\) to \(\beta\) in \(\mathfrak{D}^{n}\), respectively._ While the colored immaculate functions mirror many of the properties of the immaculate functions, the Jacobi-Trudi formula does not generalize naturally. This is in part due to the challenges of a deletion operation on words which would be needed to generalize integer subtraction. Future work may investigate such a formula. ## 6. The colored immaculate poset and skew colored immaculate tableaux Colored composition diagrams admit a natural partial ordering similar to that of Young's lattice and the immaculate poset. The elements of this poset can be thought of as sentences or colored composition diagrams, which gives a more visual representation. This poset has a combinatorial relationship with standard colored immaculate tableaux and leads to a natural definition of skew colored immaculate tableaux which in turn leads to the skew colored dual immaculate functions. Additionally, the right Pieri rule on colored immaculate functions connects this poset and these skew functions to the structure constants of the colored immaculate functions as it does in the non-colored case. **Definition 6.1**.: The _colored immaculate poset_\(\mathfrak{P}_{A}\) is the set of all sentences on \(A\) with the partial order defined by the cover relation that \(I\) covers \(J\) if \(J\subset_{a}I\) for some \(a\in A\). This cover relation means that \(I\) covers \(J\) if \(I\) differs from \(J\) by the addition of a box colored with \(a\) placed on the right side of, or below, \(J\). In this case, arrows from \(J\) to \(I\) in the Hasse diagram of \(\mathfrak{P}_{A}\) are labeled with \((m,a)\) where \(m\) is the number of the row to which \(a\) is added in \(J\). The maximal chains on \(\mathfrak{P}_{A}\) from \(\emptyset\) to \(I\) are equivalent to the standard colored immaculate tableaux of shape \(I\). The maximal chain \(C=\{\emptyset=J_{0}\xrightarrow{(m_{1},a_{1})}J_{1}\xrightarrow{(m_{2},a_{2}) }\cdots\xrightarrow{(m_{k},a_{k})}J_{k}=I\}\) is associated with the standard colored immaculate tableau of shape \(I\) whose boxes are filled with the integers \(1\) through \(n\) in the order they appear in the path. That is, the box added from \(J_{j}\xrightarrow{(m_{j+1},a_{j+1})}J_{j+1}\), which is added to row \(m_{j+1}\) and colored with \(a_{j+1}\), is filled with the integer \(j+1\). **Example 6.2**.: The maximal chain \(C=\{\emptyset\xrightarrow{(1,a)}[a]\xrightarrow{(2,d)}[a,d]\xrightarrow{(2,e) }[a,de]\xrightarrow{(1,b)}[ab,de]\xrightarrow{(2,f)}[ab,def]\xrightarrow{(1,c) }[abc,def]\}\) is associated with the following tableaux: Maximal chains starting from a non-empty sentence \(J\) going to \(I\) lead to a natural definition of _skew standard colored immaculate tableaux_. **Definition 6.3**.: For sentences \(I\) and \(J=(v_{1},\ldots,v_{h})\) with \(J\subseteq_{L}I\), the _colored skew shape_\(I/J\) is the colored composition diagram of \(I\) where, for \(1\leq i\leq h\), the first \(|v_{i}|\) boxes of the \(i^{\text{th}}\) row are inactive. The inactive boxes are shaded gray to indicate that they have in a sense been "removed", however the colors filling them are still relevant. **Definition 6.4**.: For sentences \(I\) and \(J\) with \(J\subseteq_{L}I\), a _skew colored immaculate tableau_ of shape \(I/J\) is a colored skew shape \(I/J\) filled with integers such that the sequence of integer entries in the first column is strictly increasing from top to bottom and the sequence of integer entries in each row is weakly increasing from left to right. Here the inactive boxes of \(I/J\) are not filled, and we consider the first column of a colored skew shape \(I/J\) to be the column corresponding to the first column of \(I\). The maximal chain \(C=\{J=J_{0}\xrightarrow{(m_{1},a_{1})}J_{1}\xrightarrow{(m_{2},a_{2})} \cdots\xrightarrow{(m_{k},a_{k})}J_{k}=I\}\) is associated with the skew standard colored immaculate tableau of shape \(I/J\) whose boxes are filled with the integers \(1,\ldots,k\) in the order they appear in the path. **Example 6.5**.: The maximal chain \(C=\{[a,de]\xrightarrow{(1,b)}[ab,de]\xrightarrow{(2,f)}[ab,def]\xrightarrow{(1,c)}[ abc,def]\}\) is associated with the following skew colored immaculate tableau: \begin{tabular}{|c|c|c|} \hline \(a\) & \(b,1\) & \(c,3\) \\ \hline \hline \(d\) & \(e\) & \(f,2\) \\ \hline \end{tabular} **Definition 6.6**.: For sentences \(I,J\) such that \(J\subseteq_{L}I\), define the _skew colored dual immaculate function_ as \[\mathfrak{S}_{I/J}^{*}=\sum_{K}\langle\mathfrak{S}_{J}H_{K},\mathfrak{S}_{I}^ {*}\rangle M_{K},\] where the sum runs over all sentences \(K\in\mathfrak{P}_{A}\) such that \(|I|-|J|=|K|\). **Proposition 6.7**.: _The coefficient \(\langle\mathfrak{S}_{J}H_{K},\mathfrak{S}_{I}^{*}\rangle\) is equal to the number of skew colored immaculate tableaux of shape \(I/J\) with type \(K\)._ Proof.: Let \(K=(u_{1},\ldots,u_{g})\) be a sentence. Notice that \(\mathfrak{S}_{J}H_{K}=(((\mathfrak{S}_{J}H_{u_{1}})H_{u_{2}})\cdots H_{u_{g}})\) and by Theorem 5.10, we have \[\mathfrak{S}_{J}H_{K}=\sum_{J\subseteq_{u_{1}}J_{1}\subseteq_{u_{2}}\ldots J _{g-1}\subseteq_{u_{g}}L}\mathfrak{S}_{L},\] for some sentences \(J_{1},\ldots J_{g-1}\). Thus, \[\langle\mathfrak{S}_{J}H_{K},\mathfrak{S}_{I}^{*}\rangle=\left\langle\sum_{J \subseteq_{u_{1}}J_{1}\subseteq_{u_{2}}\ldots J_{g-1}\subseteq_{u_{g}}L} \mathfrak{S}_{L},\mathfrak{S}_{I}^{*}\right\rangle=\sum_{J\subseteq_{u_{1}}J _{1}\subseteq_{u_{2}}\ldots J_{g-1}\subseteq_{u_{g}}L}\langle\mathfrak{S}_{L },\mathfrak{S}_{I}^{*}\rangle\] for some sentences \(J_{1},\ldots,J_{g-1}\). Therefore, for \(J_{1},\ldots,J_{g-1}\), this inner product is equivalent to the number of times that the sentence \(I\) appears when summing over all sentences \(L\) such that \(J\subseteq_{u_{1}}J_{1}\subseteq_{u_{2}}\ldots J_{g-1}\subseteq_{u_{g}}L\). Each occurrence of \(I\) can be associated with a unique sequence of sentences \((J,J_{1},\ldots,J_{g-1})\) that appear in the sum, and each sequence can be associated with a unique skew colored immaculate tableau of shape \(I/J\) and type \(K\). Starting with the colored skew shape \(I/J\), first fill the boxes corresponding to those in \(J_{1}/J\) with \(1\)'s. Then fill the boxes corresponding to \(J_{2}/J_{1}\) with \(2\)'s and continue repeating this process until the remaining boxes in \(I/J_{g-1}\) are filled with \((g-1)\)'s. Note that because \(J\subseteq_{u_{1}}J_{1}\subseteq_{u_{2}}\ldots J_{g-1}\subseteq_{u_{g}}I\), the colors of the boxes filled with each number \(j\), read from left to right and bottom to top, correspond exactly to the word \(u_{j}\). Through this construction, each sequence \(J,J_{1},\ldots,J_{g-1}\) corresponds to a unique skew colored immaculate tableau of shape \(I/J\) and type \(K\). Additionally, each skew CIT \(T\) of shape \(I/J\) and type \(K\) can be associated with a unique sequence \(J,J_{1},\ldots,J_{g-1}\) such that \(J\subseteq_{u_{1}}J_{1}\subseteq_{u_{2}}\ldots J_{g-1}\subseteq_{u_{g}}I\) by taking \(T\) and removing all boxes filled with integers greater than \(j\), for each \(1\leq j<g\), to get a colored tableau of shape \(J_{j}\). Therefore, \(\langle\mathfrak{S}_{J}H_{K},\mathfrak{S}_{I}^{*}\rangle\) counts the number of skew CIT with shape \(I/J\) and type \(K\). The use of linear functionals and properties of duality allows for the expansions of the skew colored dual immaculate functions into the colored fundamental basis and the colored dual immaculate basis with inner product coefficients. **Proposition 6.8**.: _For an interval \([J,I]\) in \(\mathfrak{P}_{A},\)_ \[\mathfrak{S}_{I/J}^{*}=\sum_{K}\langle\mathfrak{S}_{J}R_{K},\mathfrak{S}_{I}^ {*}\rangle F_{K}=\sum_{K}\langle\mathfrak{S}_{J}\mathfrak{S}_{K},\mathfrak{S }_{I}^{*}\rangle\mathfrak{S}_{K}^{*},\] _where the sums run over all sentences \(K\) such that \(|I|-|J|=|K|\). The coefficients \(\langle\mathfrak{S}_{J}\mathfrak{S}_{K},\mathfrak{S}_{I}^{*}\rangle\) are equal to the structure coefficients \(c_{J,K}^{I}\) for colored immaculate multiplication,_ \[\mathfrak{S}_{J}\mathfrak{S}_{K}=\sum_{I}c_{J,K}^{I}\mathfrak{S}_{I}=\sum_{I} \langle\mathfrak{S}_{J}\mathfrak{S}_{K},\mathfrak{S}_{I}^{*}\rangle\mathfrak{S} _{I},\] _where the sums run over all sentences \(I\)._ Proof.: Observe that by Definition 5.1, \[\mathfrak{S}_{I/J}^{*}=\sum_{K}\langle\mathfrak{S}_{J}H_{K},\mathfrak{S}_{I}^ {*}\rangle M_{K}=\mathfrak{S}_{J}^{\perp}(\mathfrak{S}_{I}^{*})=\sum_{K} \langle\mathfrak{S}_{J}R_{K},\mathfrak{S}_{I}^{*}\rangle F_{K}=\sum_{K} \langle\mathfrak{S}_{J}\mathfrak{S}_{K},\mathfrak{S}_{I}^{*}\rangle\mathfrak{S} _{K}^{*}.\qed\] The skew colored dual immaculate functions can also be defined explicitly in terms of skew colored immaculate tableaux following Definition 6.6. **Proposition 6.9**.: _Let \(I=(w_{1},\ldots,w_{k})\) and \(J=(v_{1},\ldots,v_{h})\) be sentences such that \(J\subseteq_{L}I\). Then_ \[\mathfrak{S}^{*}_{I/J}=\sum_{T}x_{T},\] _where the sum is taken over all skew colored immaculate tableaux of shape \(I/J\)._ Proof.: By Definition 6.6, \(\mathfrak{S}^{*}_{I/J}=\sum_{K}\langle\mathfrak{S}_{J}H_{K},\mathfrak{S}^{*}_ {I}\rangle M_{K}\), where the sum runs over all sentences \(K\in\mathfrak{P}_{A}\). By Proposition 6.7, \(\langle\mathfrak{S}_{J}H_{K},\mathfrak{S}^{*}_{I}\rangle\) is equal to the number of skew colored immaculate tableaux of shape \(I/J\) and type \(K\). Thus, following Proposition 4.18, \(\langle\mathfrak{S}_{J}H_{K},\mathfrak{S}^{*}_{I}\rangle M_{K}=\sum_{T^{\prime }}x_{T^{\prime}}\) where the sum runs over all skew CIT \(T^{\prime}\) of shape \(I/J\) and flat type \(K\). Therefore, \[\mathfrak{S}^{*}_{I/J}=\sum_{K}\sum_{T^{\prime}}x_{T^{\prime}}=\sum_{T}x_{T}\] where the sums run over sentences \(K\) such that \(|I|-|J|=|K|\), skew CIT \(T^{\prime}\) of shape \(I/J\) and flat type \(K\), and all skew CIT \(T\) of shape \(I/J\) and type \(T\). Additionally, comultiplication on the colored dual immaculate basis can be defined in terms of skew functions following Propositions 2.15 and 6.8. **Proposition 6.10**.: _For a sentence \(I\),_ \[\Delta(\mathfrak{S}^{*}_{I})=\sum_{J}\mathfrak{S}^{*}_{J}\otimes\mathfrak{S}^ {*}_{I/J},\] _where the sum runs over all sentences \(J\) such that \(J\subseteq_{L}I\)._ Proof.: Let \(J\) and \(K\) be sentences, and observe that \(\mathfrak{S}_{J}\mathfrak{S}_{K}=\langle\mathfrak{S}_{J}\mathfrak{S}_{K}, \mathfrak{S}^{*}_{I}\rangle\mathfrak{S}_{I}\). By Proposition 2.15, this implies \[\Delta(\mathfrak{S}^{*}_{I}) =\sum_{J,K}\langle\mathfrak{S}_{J}\mathfrak{S}_{K},\mathfrak{S}^ {*}_{I}\rangle\mathfrak{S}^{*}_{J}\otimes\mathfrak{S}^{*}_{K}=\sum_{J}\left( \mathfrak{S}^{*}_{J}\otimes\sum_{K}\langle\mathfrak{S}_{J}\mathfrak{S}_{K}, \mathfrak{S}^{*}_{I}\rangle\mathfrak{S}^{*}_{K}\right)\] \[=\sum_{J}\mathfrak{S}^{*}_{J}\otimes\mathfrak{S}^{*}_{I/J}\quad \text{by Proposition \ref{prop:K}.}\qed\] As in the non-colored case, finding general combinatorial formulas for multiplication and the antipode of the colored dual immaculate functions remains an open problem. As shown in the example below, the product of two colored dual immaculate functions does not have exclusively positive structure constants, and their combinatorial description is not yet evident. \[\mathfrak{S}^{*}_{(ab)}\mathfrak{S}^{*}_{(c)}=\mathfrak{S}^{*}_{(abc)}+ \mathfrak{S}^{*}_{(c,ab)}+\mathfrak{S}^{*}_{(ac,b)}-\mathfrak{S}^{*}_{(a,bc)}\] ## 7. A partially commutative generalization of the row-strict dual immaculate functions Niese, Sundaram, Van Willigenburg, Vega, and Wang define a pair of dual bases in \(QSym\) and \(NSym\) in [23] by applying an involution \(\psi\) to the immaculate and dual immaculate bases. The row-strict dual immaculate basis has extensive representation theoretic applications, specifically to \(0\)-Hecke algebras [23]. The combinatorics of this basis involve a variation of immaculate tableaux with different conditions on the rows and columns. Note that the original paper uses French notation for diagrams (the bottom row is row \(1\)) so the definitions here have been adapted to English notation. We first review the theory of these two bases, then define their colored generalizations, and finally extend our earlier results using a lift of the original \(\psi\). ### Row-strict immaculate and dual immaculate functions We begin by recalling several definitions and results from [23]. **Definition 7.1**.: Given a composition \(\alpha\), a _row-strict immaculate tableau_\(T\) is a filling of the diagram of \(\alpha\) such that the entries in the leftmost column weakly increase from top to bottom and the entries in each row strictly increase from left to right. A row-strict immaculate tableau with \(n\) boxes is _standard_ if each integer \(1\) through \(n\) appears exactly once. The _type_ of a row-strict immaculate tableau is defined the same way as the type of an immaculate tableau. Monomials are associated with row-strict immaculate tableaux according to their type in the same fashion as immaculate tableaux. **Definition 7.2**.: For a composition \(\alpha\), the _row-strict dual immaculate function_ is defined as \[\mathfrak{R}\mathfrak{S}_{\alpha}^{*}=\sum_{T}x^{T},\] where the sum runs over all row-strict immaculate tableaux \(T\) of shape \(\alpha\). To standardize a row-strict immaculate tableau \(T\), replace the \(1\)'s in \(T\) with \(1,2,\ldots\) moving left to right and top to bottom, then continue with the \(2\)'s, etc. Note also that the set of standard row-strict immaculate tableaux is the same as the set of standard immaculate tableaux. **Definition 7.3**.: A positive integer \(i\) is a _row-strict descent_ of a standard row-strict immaculate tableau \(U\) if \(U\) contains the entry \(i+1\) in a weakly higher row than entry \(i\). The _row-strict descent set_ of a standard row-strict immaculate tableau \(U\) is \[Des^{rs}(U)=\{i:i+1\text{ is weakly above }i\text{ in }U\}.\] The _row-strict descent composition_ of a standard row-strict immaculate tableaux \(U\) is defined as \[co^{rs}(U)=comp(Des^{rs}(U)).\] Following these definitions, the row-strict dual immaculate function for a composition \(\alpha\) can also be defined as \[\mathfrak{R}\mathfrak{S}_{\alpha}^{*}=\sum_{S}F_{co^{rs}(S)},\] where the sum is taken over all standard row-strict immaculate tableaux of shape \(\alpha\). **Definition 7.4**.: The dual involutions \(\psi:QSym\to QSym\) and \(\psi:NSym\to NSym\) are defined \[\psi(F_{\alpha})=F_{\alpha^{c}}\quad\text{and}\quad\ \ \psi(R_{\alpha})=R_{ \alpha^{c}}.\] Note that there are two separate \(\psi\) involutions, although they are often referred to together as if they are one map. **Theorem 7.5**.: _[_23_]_ _Let \(\alpha\) be a composition. Then, \(\psi(\mathfrak{S}_{\alpha}^{*})=\mathfrak{R}\mathfrak{S}_{\alpha^{c}}^{*}\)._ Since \(\psi\) is an involution and \(\mathfrak{S}_{\alpha}^{*}\) is a basis, \(\{\mathfrak{R}\mathfrak{S}_{\alpha}^{*}\}_{\alpha}\) is a basis for \(QSym\). **Definition 7.6**.: For a composition \(\alpha\) and weak composition \(\beta\), let \(K_{\alpha,\beta}^{rs}\) be the number of row-strict immaculate tableaux of shape \(\alpha\) and type \(\beta\), and \(L_{\alpha,\beta}^{rs}\) be the number of standard row-strict immaculate tableaux of shape \(\alpha\) with row-strict descent composition \(\beta\). **Theorem 7.7**.: _[_23_]_ _For a composition \(\alpha\), the row-strict dual immaculate function expands as_ \[\mathfrak{R}\mathfrak{S}_{\alpha}^{*}=\sum_{\beta}K_{\alpha,\beta}^{rs}M_{ \beta}=\sum_{\gamma}L_{\alpha,\gamma}^{rs}F_{\gamma}.\] _where the sums run over compositions \(\beta\) and \(\gamma\) such that \(|\beta|=|\alpha|\) and \(|\gamma|=|\alpha|\)._ The row-strict dual immaculate functions have a dual basis that can be constructed similarly to the immaculate basis. **Definition 7.8**.: For \(m\in\mathbb{Z}\), the _non-commutative row-strict Bernstein operator_\(\mathbb{B}_{m}^{rs}\) is defined by \[\mathbb{B}_{m}^{rs}=\sum_{i\geq 0}(-1)^{i}E_{m+i}F_{(i)}^{\perp},\qquad\text{ and}\qquad\mathbb{B}_{\alpha}^{rs}=\mathbb{B}_{\alpha_{1}}^{rs}\dots\mathbb{B}_{ \alpha_{k}}^{rs}\qquad\text{for }\alpha\in\mathbb{Z}^{k}.\] For a composition \(\alpha\), the _row-strict immaculate function_\(\mathfrak{R}\mathfrak{S}_{\alpha}\) is defined as \[\mathfrak{R}\mathfrak{S}_{\alpha}=\mathbb{B}_{\alpha}^{rs}(1).\] These functions are dual to the row-strict dual immaculate basis, \(\langle\mathfrak{R}\mathfrak{S}_{\alpha},\mathfrak{R}\mathfrak{S}_{\beta}^{* }\rangle=\delta_{\alpha,\beta}\), and they are the image of the immaculate basis under \(\psi\). Applying \(\psi\) to various results from [3] yields similar results for the row-strict immaculate and row-strict dual immaculate bases, which are summarized in the following result. **Theorem 7.9**.: _[_23_]_ _For compositions \(\alpha,\beta\models n\), \(s\in\mathbb{Z}_{\geq 0}\), \(m\in\mathbb{Z}\), and \(f\in NSym\),_ 1. \[\mathbb{B}_{m}(f)H_{s}=\mathbb{B}_{m+1}(f)H_{s-1}+\mathbb{B}_{m}(fH_{s}) \xivdLeftrightarrow{\psi}\mathbb{B}_{m+1}^{rs}(f)E_{s-1}+\mathbb{B}_{m}^{rs} (fE_{s}).\] 2. _Multiplicity-free right Pieri rule:_ \[\mathfrak{S}_{\alpha}H_{s}=\sum_{\alpha\subset_{s}\beta}\mathfrak{S}_{\beta} \xivdLeftrightarrow{\psi}\mathfrak{R}\mathfrak{S}_{\alpha}E_{s}=\sum_{ \alpha\subset_{s}\beta}\mathfrak{R}\mathfrak{S}_{\beta}.\] 3. _Multiplicity-free right Pieri rule:_ \[\mathfrak{S}_{\alpha}\mathfrak{S}_{(1^{s})}=\mathfrak{S}_{\alpha}E_{s}=\sum_ {\beta}\mathfrak{S}_{\beta}\xivdLeftrightarrow{\psi}\mathfrak{R}\mathfrak{S}_ {\alpha}\mathfrak{R}\mathfrak{S}_{(1^{s})}=\mathfrak{R}\mathfrak{S}_{\alpha}H _{s}=\sum_{\beta}\mathfrak{R}\mathfrak{S}_{\beta},\] _where the sum runs over compositions_ \(\beta\models|\alpha|+s\) _such that_ \(\alpha_{i}\leq\beta_{i}\leq\alpha_{i}+1\) _and_ \(\alpha_{i}=0\) _for_ \(i>\ell(\alpha)\)_._ 4. \[\mathfrak{S}_{(1^{n})}=\sum_{\alpha\models n}(-1)^{n-\ell(\alpha)}H_{\alpha}= E_{n}\xivdLeftrightarrow{\psi}\mathfrak{R}\mathfrak{S}_{(1^{n})}=\sum_{ \alpha\models n}(-1)^{n-\ell(\alpha)}E_{\alpha}=H_{n}.\] 5. _Complete homogeneous and elementary expansions:_ \[H_{\beta}=\sum_{\alpha\geq\iota\beta}K_{\alpha,\beta}\mathfrak{S}_{\alpha} \xivdLeftrightarrow{\psi}E_{\beta}=\sum_{\alpha\geq\iota\beta}K_{\alpha,\beta }\mathfrak{R}\mathfrak{S}_{\alpha},\qquad H_{\beta}=\sum_{\alpha\geq\iota \beta}K_{\alpha,\beta}^{rs}\mathfrak{R}\mathfrak{S}_{\alpha}\xivdLeftrightarrow{ \psi}E_{\beta}=\sum_{\alpha\geq\iota\beta}K_{\alpha,\beta}^{rs}\mathfrak{S}_{ \alpha}.\] 6. _Ribbon basis expansions:_ \[R_{\beta}=\sum_{\alpha\geq\iota\beta}L_{\alpha,\beta}\mathfrak{S}_{\alpha} \xivdLeftrightarrow{\psi}R_{\beta^{c}}=\sum_{\alpha\geq\iota\beta}L_{\alpha, \beta}\mathfrak{R}\mathfrak{S}_{\alpha}.\] The immaculate poset also represents a poset of the standard row-strict immaculate tableaux as a result of the equivalence between standard immaculate tableaux and standard row-strict immaculate tableaux, thus results for the row-strict skew case follow closely to those of the dual immaculate functions. **Definition 7.10**.: Let \(\alpha\) and \(\beta\) be compositions with \(\beta\subseteq\alpha\). A _skew row-strict immaculate tableau_ is a skew shape \(\alpha/\beta\) filled with positive integers such that the entries in the first column are weakly increasing from top to bottom and the entries in each row strictly increase from left to right. **Definition 7.11**.: For compositions \(\alpha,\beta\) such that \(\beta\subseteq\alpha\), the _skew row-strict dual immaculate functions_ are defined as \[\mathfrak{R}\mathfrak{S}_{\alpha/\beta}^{*}=\sum_{\gamma}\langle\mathfrak{R} \mathfrak{S}_{\beta}H_{\gamma},\mathfrak{R}\mathfrak{S}_{\alpha}^{*}\rangle M_{ \gamma},\] where the sum runs over all \(\gamma\in\mathfrak{P}\) such that \(|\alpha|-|\beta|=|\gamma|\). As with the skew dual immaculate functions, these functions connect to the multiplication of the row-strict immaculate functions and the comultiplication of the row-strict dual immaculate functions. **Theorem 7.12**.: _[_23_]_ _Let \(\alpha\) and \(\beta\) be compositions with \(\beta\subseteq\alpha\). Then,_ \[\mathfrak{R}\mathfrak{S}_{\alpha/\beta}^{*}=\sum_{T}x^{T},\] _where the sum runs over all skew row-strict immaculate tableaux \(T\) of shape \(\alpha/\beta\). Moreover,_ \[\mathfrak{R}\mathfrak{S}_{\alpha/\beta}^{*}=\psi(\mathfrak{S}_{\alpha/\beta}^{*} )=\sum_{\gamma}\langle\mathfrak{R}\mathfrak{S}_{\beta}R_{\gamma},\mathfrak{R} \mathfrak{S}_{\alpha}^{*}\rangle F_{\gamma}=\sum_{\gamma}\langle\mathfrak{R} \mathfrak{S}_{\beta}\mathfrak{R}\mathfrak{S}_{\gamma},\mathfrak{R}\mathfrak{S }_{\alpha}^{*}\rangle\mathfrak{R}\mathfrak{S}_{\gamma}^{*},\] _where the sums run over all compositions \(\gamma\in\mathfrak{P}\) such that \(|\alpha|-|\beta|=|\gamma|\)._ Comultiplication on the row-strict dual immaculate functions is also defined in terms of skew shapes. **Definition 7.13**.: For a composition \(\alpha\), \[\Delta(\mathfrak{R}\mathfrak{S}_{\alpha}^{*})=\sum_{\beta}\mathfrak{R} \mathfrak{S}_{\beta}^{*}\otimes\mathfrak{R}\mathfrak{S}_{\alpha/\beta}^{*},\] where the sum runs over all compositions \(\beta\) such that \(\beta\subseteq\alpha\). ### Colored row-strict dual immaculate functions in \(QSym_{A}\) To generalize these definitions and results to the colored case, we first define a lift of the involution \(\psi\) to \(QSym_{A}\) and \(NSym_{A}\). Note that we technically define two separate dual involutions \(\psi\), one on \(QSym_{A}\) and one on \(NSym_{A}\), but we treat them as a single map that works on both spaces. **Definition 7.14**.: For a sentence \(J\), define the linear maps \(\psi:QSym_{A}\to QSym_{A}\) and \(\psi:NSym_{A}\to NSym_{A}\) by \[\psi(F_{J})=F_{J^{c}}\quad\text{and}\quad\ \psi(R_{J})=R_{J^{c}}.\] **Proposition 7.15**.: _The maps \(\psi\) are involutions, and the duality between \(QSym_{A}\) and \(NSym_{A}\) is invariant under \(\psi\), meaning that_ \[\langle G,F\rangle=\langle\psi(G),\psi(H)\rangle.\] _Furthermore, the map \(\psi:NSym_{A}\to NSym_{A}\) is an isomorphism._ Proof.: To see that \(\psi\) is invariant under duality, it suffices to observe that \(\langle R_{I},F_{J}\rangle=\langle R_{I^{c}},F_{J^{c}}\rangle=\langle\psi(R_{ I}),\psi(F_{J})\rangle\). The map \(\psi\) is an involution because \(\psi(\psi(F_{I}))=F_{(I^{c})^{c}}=F_{I}\) and \(\psi(\psi(R_{I}))=R_{(I^{c})^{c}}=R_{I}\) and the map extends linearly. Next, we show that \(\psi\) is an isomorphism on \(NSym_{A}\). For sentences \(I\) and \(J\), we have \(R_{I}R_{J}=R_{I.J}+R_{I\odot J}\)[11] and thus \(\psi(R_{I}R_{J})=\psi(R_{I.J})+\psi(R_{I\odot J})\). Observe that \((I\cdot J)^{c}=I^{c}\odot J^{c}\) and \((I\odot J)^{c}=I^{c}\cdot J^{c}\). Therefore, \(\psi(R_{I}R_{J})=R_{I^{c}\odot J^{c}}+R_{I^{c}\cdot J^{c}}=R_{I^{c}}R_{J^{c}}= \psi(R_{I})\psi(R_{J})\). Note that \(\psi:QSym_{A}\to QSym_{A}\) is not an isomorphism because it fails to preserve multiplication. Now, we prove that \(\psi\) maps the complete homogenous basis to the elementary basis in \(NSym\) and vice versa, which will allow us to apply \(\psi\) to both these bases. **Proposition 7.16**.: _For a sentence \(J\), \(\psi(E_{J})=H_{J}\)._ Proof.: First, for a sentence \(J\), we expand \(E_{J}\) in terms of the colored ribbon basis as \[E_{J}=\sum_{K\preceq J}(-1)^{|J|-\ell(K)}H_{K}=\sum_{K\preceq J}(-1)^{|J|-\ell (K)}\left[\sum_{I\succeq K}R_{I}\right]=\sum_{I}\left[\sum_{K\preceq J,I}(- 1)^{|J|-\ell(K)}\right]R_{I}.\] Next, we split the sum into two pieces according to \(I\): one where \(I\succ J^{c}\) and the other where \(I\preceq J^{c}\), \[E_{J}=\sum_{I\succ J^{c}}\left[\sum_{K\preceq J,I}(-1)^{|J|-\ell(K)}\right]R_ {I}+\sum_{I\preceq J^{c}}\left[\sum_{K\preceq J,I}(-1)^{|J|-\ell(K)}\right]R_ {I}.\] In the first case, observe that \(I\succ J^{c}\) implies that \(J\succ I\). Thus, \(K\preceq J,I\) becomes \(K\preceq J\). Also notice that because \(J\) is constant we can write \((-1)^{|J|}=(-1)^{|J|-\ell(J)}(-1)^{\ell(J)}\) and factor the first term out of the sum. In the second case, \(I\preceq J^{c}\) means that \(K\preceq I,J\) becomes \(K\preceq J,J^{c}\). The only way for \(K\) to be a refinement of a sentence and its complement is if \(K\) is a sentence made up of only single letters. That is, \(|K|=\ell(K)\). Thus the inner sum has only one summand, which is \((-1)^{|J|-\ell(K)}=(-1)^{|J|-|K|}=1\). As a result, the equation simplifies as \[E_{J}=\sum_{I\succ J^{c}}(-1)^{|J|-\ell(J)}\left[\sum_{J^{c}\preceq K\leq I}( -1)^{\ell(J)-\ell(K)}\right]R_{I}+\sum_{I\preceq J^{c}}R_{I}.\] By properties of the Mobius function [11], the coefficient of first section is \(0\) for all \(K\) and we are left with \[E_{J}=\sum_{I\preceq J^{e}}R_{I}.\] Therefore, applying \(\psi\) to \(E_{J}\) and noticing that \(I\preceq J^{c}\) if and only if \(J\preceq I^{c}\), yields \[\psi(E_{J})=\sum_{I\preceq J^{e}}\psi(R_{I})=\sum_{I\preceq J^{e}}R_{I^{c}}= \sum_{J\preceq I^{e}}R_{I^{c}}=H_{J}.\qed\] We continue by defining and studying colored row-strict immaculate tableaux. Their combinatorics in relation to those of the colored immaculate tableaux will allow us to define the colored row-strict dual immaculate basis and verify its relationship to the colored dual immaculate basis via \(\psi\). **Definition 7.17**.: A _colored row-strict immaculate tableau_ (CRSIT) of shape \(I\) is a colored composition diagram of shape \(I\) in which the sequence of integer entries is strictly increasing from left to right in each row, and weakly increasing top to bottom in the leftmost column. The _type_ of a colored row-strict immaculate tableau \(T\) is the sentence \(C=(u_{1},\ldots,u_{g})\) such that for each \(i\in[g]\) the word \(u_{i}\) lists the colors of all boxes in \(T\) filled with the integer \(i\) in the order they appear when entries in \(T\) are read from left to right and top to bottom. A _standard colored row-strict immaculate tableau_ is a colored row-strict immaculate tableau of size \(n\) with the integer entries \(1,\ldots,n\) each appearing exactly once. To _standardize_ a colored row-strict tableau, replace its integer entries with the numbers \(1,2,\ldots\) based on the order they appear in the type, first replacing all entries equal to \(1\), then \(2\), etc. just as in the standardization of non-colored row-strict immaculate tableaux. We also use the same notion of _row-strict descents_ and the _row-strict descent set_\(Des^{rs}\) from row-strict immaculate tableaux, but define an additional concept of colored row-strict descent composition. **Definition 7.18**.: The _colored row-strict descent composition_ of a standard colored row-strict immaculate tableau \(U\), denoted \(co_{A}^{rs}(U)\), is the sentence obtained by reading the colors in each box in order of their number and splitting into a new word after each row-strict descent. **Example 7.19**.: A few CRSIT of shape \((ab,bca)\) along with their types and standardization, as well as the row-strict descent sets and colored row-strict descent compositions of these standardizations, are: [MISSING_PAGE_POST] Proof.: Let \(T\) be a colored row-strict immaculate tableau of shape \(J\) that standardizes to the standard colored row-strict immaculate tableau \(S\). The flattening of the type of \(T\) must be a refinement of the colored row-strict descent composition of \(S\), which can be shown by applying the same reasoning used in the proof of Proposition 4.14. In fact, each sentence \(B\) that flattens to a refinement of \(co_{A}^{rs}(S)\) corresponds to a unique colored row-strict immaculate tableau of type \(B\) that standardizes to \(S\). Therefore, \[F_{co_{A}^{rs}(S)}=\sum_{T_{S}}x_{T_{S}},\] where the sum runs over all colored row-strict immaculate tableaux \(T_{S}\) of shape \(J\) that standardize to \(S\). It follows that \[\mathfrak{HS}_{J}^{*}=\sum_{T}x_{T}=\sum_{S}\sum_{T_{S}}x_{T_{S}}=\sum_{S}F_{ co_{A}^{rs}(S)},\] where the sums run over all CRSIT \(T\) of shape \(J\), all standard CRSIT \(S\) of shape \(J\), and all CRSIT \(T_{S}\) of shape \(J\) that standardize to \(S\). **Theorem 7.23**.: _Let J be a sentence. Then,_ \[\psi(\mathfrak{S}_{J}^{*})=\mathfrak{HS}_{J}^{*}.\] Proof.: For a sentence \(J\), \[\psi(\mathfrak{S}_{J}^{*})=\psi(\sum_{U}F_{co_{A}(U)})=\sum_{U}F_{(co_{A}(U))^ {rs}}.\] The complement of the colored descent composition of a standard colored immaculate tableau \(U\) splits exactly where \(U\) does not have a descent. These are exactly the locations of the row-strict descents in \(U\), thus \((co_{A}(U))^{c}=co_{A}^{rs}(U)\), and \[\psi(\mathfrak{S}_{J}^{*})=\sum_{U}F_{(co_{A}^{rs}(U))}=\mathfrak{HS}_{J}^{*}.\qed\] Because \(\{\mathfrak{S}_{J}^{*}\}_{J}\) is a basis and \(\psi\) is an involution, Theorem 7.23 also implies the following. **Corollary 7.24**.: \(\{\mathfrak{HS}_{J}^{*}\}_{J}\) _is a basis for \(QSym_{A}\)._ Using \(\psi\), we extend each of our results on the colored dual immaculate functions to the colored row-strict dual immaculate functions. **Definition 7.25**.: For sentences \(J,C\) and weak sentence \(B\), define \(K_{J,B}^{rs}\) as the number of colored row-strict immaculate tableaux of shape J and type B, and \(L_{J,C}^{rs}\) as the number of standard colored row-strict immaculate tableaux of shape J with row-strict descent composition C. **Proposition 7.26**.: _For a sentence \(J\),_ \[\mathfrak{HS}_{J}^{*}=\sum_{B}K_{J,B}^{rs}M_{B}\qquad\text{and}\qquad \mathfrak{HS}_{J}^{*}=\sum_{C}L_{J,C}^{rs}F_{C},\] _where the sums run over sentences \(B\) and \(C\) such that \(|B|=|J|\) and \(|C|=|J|\)._ The above proposition follows from Definition 7.20 in the manner of Theorem 4.20. The results of Section 4.3 also extend nicely to the row-strict case under the involution \(\psi\). **Definition 7.27**.: The _colored row-strict immaculate descent graph_, denoted \({}^{rs}\mathfrak{D}_{A}^{n}\) is the edge-weighted directed graph with the set of sentences on \(A\) of size \(n\) as its vertex set and an edge from each sentence \(I\) to \(J\) if there exists a standard colored row-strict immaculate tableau of shape \(I\) with colored row-strict descent composition \(J\). The edge from \(I\) to \(J\) is weighted with the coefficient \(L_{I,J}^{rs}\). Due to the differing definitions of descents and descent compositions in row-strict tableaux, the neighbors of \(I\) in \({}^{rs}\mathfrak{D}_{A}^{n}\) are exactly the (sentence) complements of \(I\)'s neighbors in \(\mathfrak{D}_{A}^{n}\) and the complement of \(I\) itself. Here, we say two vertices are _neighbors_ if they are adjacent by an edge in either direction. **Example 7.28**.: The standard colored row-strict immaculate tableaux of shape \((ab,cbb)\) have colored row-strict descent compositions \((a,bc,b,b)\), \((ac,bb,b)\), \((ac,b,bb)\), and \((ac,b,bb)\), so \((ab,cbb)\) has outgoing edges to these sentences in \({}^{rs}\mathfrak{D}_{\{a,b,c\}}^{5}\). Notice that if we take the complement of each of these sentences we get \((ab,cbb)\), \((a,cb,bb)\), \((a,cbbb)\), and \((a,cbb,b)\) itself and the sentences to which it has outgoing edges to in \(\mathfrak{D}_{\{a,b,c\}}^{5}\), as seen in Figure 1. **Proposition 7.29**.: _For a sentence \(I\), the colored fundamental functions expand into the colored row strict immaculate basis as_ \[F_{I}=\sum_{J}L^{rs(-1)}_{I,J}\mathfrak{R}\mathfrak{S}^{*}_{J}\qquad\text{ with coefficients }\qquad L^{rs(-1)}_{I,J}=\sum_{\mathcal{P}}(-1)^{\ell(\mathcal{P})} prod(\mathcal{P}),\] _where the sums run over all sentences \(J\) below \(I\) in \({}^{rs}\mathfrak{D}^{n}_{A}\) and all directed paths \(\mathcal{P}\) from \(I\) to \(J\) in \({}^{rs}\mathfrak{D}^{n}_{A}\)._ The proof follows that of Theorem 4.29 using Proposition 7.26 in place of Theorem 4.25. Similarly, this proposition specializes to the non-colored case in the same manner as Corollary 4.31. ### Colored row-strict immaculate functions We define the colored row-strict immaculate functions as the image of the colored immaculate functions under \(\psi\), and thus also as the basis dual to the colored row-strict dual immaculate functions. **Definition 7.30**.: For a sentence \(J\), the _colored row-strict immaculate function_ is defined as \[\mathfrak{R}\mathfrak{S}_{J}=\psi(\mathfrak{S}_{J}).\] Equivalently, due to the invariance of \(\psi\) under duality, we have \(\langle\mathfrak{R}\mathfrak{S}_{I},\mathfrak{R}\mathfrak{S}^{*}_{J}\rangle= \delta_{I,J}\). Applying \(\psi\) to the colored immaculate functions yields row-strict versions of our earlier results and colored generalizations of the results in Theorem 7.9. Note that certain results from Theorem 7.9 are not generalized here because we lack the corresponding result on the colored immaculate functions or due to the fact that \(\psi\) is not an isomorphism on \(QSym_{A}\). The non-colored analogues of \(\psi\) are automorphisms on both \(QSym\) and \(NSym\). **Theorem 7.31**.: _For words \(w\) and \(v\), sentences \(J\) and \(C\), and \(f\in NSym_{A}\)_ 1. _Right Pieri rule:_ \[\mathfrak{S}_{J}H_{w}=\sum_{J\subset_{w}K}\mathfrak{S}_{K} \xleftrightarrow{\psi}\mathfrak{R}\mathfrak{S}_{J}E_{w}=\sum_{J\subset_{w}K} \mathfrak{R}\mathfrak{S}_{K}.\] 2. _Colored complete homogeneous and colored elementary expansions:_ \[H_{C}=\sum_{J}K_{J,C}\mathfrak{S}_{J}\xleftrightarrow{\psi}E_{C}=\sum_{J}K_{ J,C}\mathfrak{R}\mathfrak{S}_{J},\qquad H_{C}=\sum_{J}K^{rs}_{J,C}\mathfrak{R} \mathfrak{S}_{J}\xleftrightarrow{\psi}E_{C}=\sum_{J}K^{rs}_{J,C}\mathfrak{S}_ {J}.\] 3. _Colored ribbon expansions:_ \[R_{C}=\sum_{J}L_{J,C}\mathfrak{S}_{J}\xleftrightarrow{\psi}R_{C^{*}}=\sum_{J} L_{J,C}\mathfrak{R}\mathfrak{S}_{J}.\] The application of Proposition 2.14 to Proposition 7.29 also yields the following result. The analogous result is also true in \(NSym\), as in Corollary 5.20. **Corollary 7.32**.: _For a sentence \(J\), the colored row-strict immaculate functions expand into the colored ribbon basis as_ \[\mathfrak{R}\mathfrak{S}_{J}=\sum_{I}L^{rs(-1)}_{I,J}R_{I}\qquad\text{with coefficients }\qquad L^{rs(-1)}_{I,J}=\sum_{\mathcal{P}}(-1)^{\ell(\mathcal{P})} prod(\mathcal{P})\] _where the sums run over all \(I\) above \(J\) in \({}^{rs}\mathfrak{D}^{n}_{A}\) and all paths \(\mathcal{P}\) from \(I\) to \(J\) in \({}^{rs}\mathfrak{D}^{n}_{A}\)._ ### The colored immaculate poset and skew colored row-strict dual immaculate functions The set of standard colored immaculate tableaux is equal to the set of standard colored row-strict immaculate tableaux, meaning that many of our results on the colored immaculate poset immediately extend to the row-strict setting. **Definition 7.33**.: For sentences \(I\) and \(J\) where \(J\subseteq_{L}I\), a _skew colored row-strict immaculate tableau_ of shape \(I/J\) is a colored skew shape \(I/J\) filled with positive integers such that the sequences of entries in the first column is weakly increasing top to bottom and the sequence of integers in each row is strictly increasing left to right. **Definition 7.34**.: For sentences \(I\) and \(J\) where \(J\subseteq_{L}I\), define the _skew colored row-strict dual immaculate function_ as \[\mathfrak{R}\mathfrak{S}_{I/J}^{*}=\sum_{K}\langle\mathfrak{R}\mathfrak{S}_{J}H _{K},\mathfrak{R}\mathfrak{S}_{I}^{*}\rangle M_{K},\] where the sum runs over all sentences \(K\in\mathfrak{P}_{A}\) such that \(|I|-|J|=|K|\). Applying \(\psi\) to the equations in Proposition 6.8 yields the following results. **Theorem 7.35**.: _For sentences \(I\) and \(J\) with \(J\subseteq_{L}I\),_ \[\mathfrak{R}\mathfrak{S}_{I/J}^{*}=\sum_{K}\langle\mathfrak{R}\mathfrak{S}_{ J}R_{K},\mathfrak{R}\mathfrak{S}_{I}^{*}\rangle F_{K}=\sum_{K}\langle \mathfrak{R}\mathfrak{S}_{J}\mathfrak{R}\mathfrak{S}_{K},\mathfrak{R} \mathfrak{S}_{I}^{*}\rangle\mathfrak{R}\mathfrak{S}_{K}^{*},\] _where the sums run over all sentences \(K\in\mathfrak{P}_{A}\) such that \(|I|-|J|=|K|\)._ **Proposition 7.36**.: _For sentences \(I\) and \(J\) such that \(J\subseteq_{L}I\),_ \[\psi(\mathfrak{S}_{I/J}^{*})=\mathfrak{R}\mathfrak{S}_{I/J}^{*}.\] Proof.: Let \(I\) and \(J\) be sentences such that \(J\subseteq_{L}I\). Then, \[\psi(\mathfrak{R}\mathfrak{S}_{I/J}^{*}) =\sum_{K}\langle\mathfrak{R}\mathfrak{S}_{J}R_{K},\mathfrak{R} \mathfrak{S}_{I}^{*}\rangle\psi(F_{K})=\sum_{K}\langle\mathfrak{R}\mathfrak{S} _{J}R_{K},\mathfrak{R}\mathfrak{S}_{I}^{*}\rangle F_{K^{c}}\] \[=\sum_{K}\langle\psi(\mathfrak{S}_{J}R_{K^{c}}),\psi(\mathfrak{S} _{I}^{*})\rangle F_{K^{c}}\quad\text{by Theorem \ref{thm:J}}\] \[=\sum_{K}\langle\mathfrak{S}_{J}R_{K^{c}},\mathfrak{S}_{I}^{*} \rangle F_{K^{c}}=\mathfrak{S}_{I/J}^{*}.\quad\text{By Proposition \ref{thm:J}}\] Comultiplication on the colored row-strict immaculate basis can be defined in terms of skew colored row-strict immaculate functions. The proof follows that of Proposition 6.10 using Theorem 7.35. **Proposition 7.37**.: _Let \(I\) be a sentence. Then,_ \[\Delta(\mathfrak{R}\mathfrak{S}_{I}^{*})=\sum_{J}\mathfrak{R}\mathfrak{S}_{J} ^{*}\otimes\mathfrak{R}\mathfrak{S}_{I/J}^{*},\] _where the sum runs over all sentences \(J\) such that \(J\subseteq_{L}I\)._ Multiplication and antipode of the colored row-strict dual immaculate functions are closely related to the multiplication and antipode of colored dual immaculate functions, and thus also remain open. ### Future work Our future work on this project will take three directions. First, we hope to continue exploring properties of the colored immaculate and dual immaculate bases by looking at multiplicative structures, potential Jacobi-Trudi formulas, possible rim hook generalizations, and expansions to and from more bases. Second, we will continue to generalize other Schur-like bases to \(QSym_{A}\) and \(NSym_{A}\), specifically the quasisymmetric shin functions and Young quasisymmetric Schur functions, as well as their duals. Finally, we are interested in defining and studying the colored generalization of the symmetric functions that would be a subset of \(QSym_{A}\) and the image of \(NSym_{A}\) under a forgetful map.
2309.10357
Deep Mutual Learning across Task Towers for Effective Multi-Task Recommender Learning
Recommender systems usually leverage multi-task learning methods to simultaneously optimize several objectives because of the multi-faceted user behavior data. The typical way of conducting multi-task learning is to establish appropriate parameter sharing across multiple tasks at lower layers while reserving a separate task tower for each task at upper layers. Since the task towers exert direct impact on the prediction results, we argue that the architecture of standalone task towers is sub-optimal for promoting positive knowledge sharing. Accordingly, we propose the framework of Deep Mutual Learning across task towers, which is compatible with various backbone multi-task networks. Extensive offline experiments and online AB tests are conducted to evaluate and verify the proposed approach's effectiveness.
Yi Ren, Ying Du, Bin Wang, Shenzheng Zhang
2023-09-19T06:36:27Z
http://arxiv.org/abs/2309.10357v1
# Deep Mutual Learning across Task Towers for Effective Multi-Task Recommender Learning ###### Abstract. Recommender systems usually leverage multi-task learning methods to simultaneously optimize several objectives because of the multi-faceted user behavior data. The typical way of conducting multi-task learning(MTL) is to establish appropriate parameter sharing across multiple tasks at lower layers while reserving a separate task tower for each task at upper layers. With such design, the lower layers intend to explore the structure of task relationships and mine valuable information to be used by the task towers for accurate prediction. Since the task towers exert direct impact on the prediction results, we argue that the architecture of standalone task towers is sub-optimal for promoting positive knowledge sharing. First, for each task, attending to the input information of other task towers is beneficial. For instance, the information useful for predicting the "like" task is also valuable for the "buy" task. Furthermore, because different tasks are inter-related, the training labels of multiple tasks should obey a joint distribution. It is undesirable for the prediction results for these tasks to fall into the low density areas. Accordingly, we propose the framework of **D**eep **M**utual **L**earning across task towers(**DML**), which is compatible with various backbone multi-task networks. At the entry layer of the task towers, the shared component of Cross Task Feature Mining(_CTFM_) is introduced to transfer input information across the task towers while still ensuring one task's loss will not impact the inputs of other task towers. Moreover, for each task, dedicated network component called Global Knowledge Distillation(_GKD_) are utilized to distill valuable knowledge from the global results of the upper layer task towers to enhance the prediction consistency. Extensive offline experiments and online A/B tests are conducted to evaluate and verify the proposed approach's effectiveness. Recommender Systems; Multi-Task Learning; Parameter Sharing + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics + lower-level branches. SNR[(11)] modularizes the shared low-level layers into parallel sub-networks and uses a transformation matrix multiplied by a scalar coding variable to learn their connections to upper-level layers to alleviate the task conflict and negative transfer issue. MSSM[(5)] learns differing combinations of feature fields for each expert and designs finer-grained sharing patterns among tasks through a set of coding variables that selectively choose which cells to route for a given task. But the learned routing parameters of these methods are static for all the samples, which can hardly warrant optimal performance. Finally, the methods of _dynamic gating_ learn optimized weights for each task based on the input sample to effectively combine the outputs of lower-level networks and achieve success in industrial applications. The MMoE [(12)] model adapts the Mixture-of-Experts (MoE)[(9)] structure to multi-task learning by sharing the expert sub-networks across all tasks, while also maintaining separate gating network optimized for each task. And Zhao et al. [(26)] extend the MMoE model [(12)] and apply it to learn multiple ranking objectives in Youtube video recommender systems. Moreover, PLE [(17)] achieves superior performance for news recommendation by assigning both shared expert sub-networks among tasks and dedicated expert sub-networks for each task. AITM [(24)] is the most similar method, which also augments the arcitecture of task towers. Nevertheless, as a concrete implementation, it is not validated to enhance the performance of various multi-task models. Moreover, it can only work for the tasks with sequential dependence relations. Admittedly, the aforementioned methods achieve impressive performance. However, as the task towers exert a direct effect on the prediction results, the standalone task towers tend not to be the most effective design for promoting positive knowledge transfer by exploiting the task relationships. First, for each task, the information selected by the relevant tasks is extremely valuable. Accordingly, we introduce the shared component of Cross Task Feature Mining(_CTFM_), which utilizes delicate attention mechanisms to extract relevant information from other tasks at the entry layer of the task tower. With the common attention mechanisms, the explicit task-specific information distilled by lower-level networks are mingled together and one task's loss will undesirably affect the inputs of other task towers, which is the task awareness missing problem and can hinder the learning of lower-level networks. In contrast to the usual attention mechanisms, our design can ensure appropriate information separation. We argue that reserving explicit task-specific knowledge has a positive effect on performance, which is validated in the experimental section. Second, because the tasks for recommender systems are related, the training labels of multiple tasks should obey a joint distribution. The prediction results for these tasks should not densely fall into the low-density areas. Therefore, a dedicated network named Global Knowledge Distillation(_GKD_) is introduced for each task to distill valuable global knowledge from the results of the upper layer task towers. For each task, the distilled global information helps to ensure consistent predictions with other tasks. We summarize our main contributions below. * We propose the framework of **Deep Mutual Learning** across task towers(**DML**), which is compatible with various backbone multi-task models. * The proposed novel sharing structure helps to enhance effective knowledge transfer across different tasks. * We conduct offline experiments and online A/B testing to evaluate and understand the effectiveness of our method. ## 2. Methodology In this section, we first introduce the problem of multi-objective ranking for recommender systems. Second, we describe the general design of DML. Finally, we elaborate on the introduced components. ### Multi-Objective Ranking for Recsys Given a set of candidates with \(N\) items \(\mathcal{C}=\{i_{n}\}_{1\leq n\leq N}\), the ranking model for recommender systems is to rank and recommend the top \(M\) items \(\mathcal{S}=\{i_{m}\}_{1\leq m\leq M}\subseteq\mathcal{C}\) for user \(u\) so as to optimize the overall utility and enhance user experience. First, for each pair of user \(u\) and item \(i_{n}\), the input feature \(x_{n}\) is derived. Second, a multi-task learning model is utilized to estimate \(K\) objectives corresponding to multiple user feedback signals. Furthermore, to compute the overall reward, we need to merge the multiple predictions with a function \(\Phi\) shown in equation (2) to derive the item's final reward score for greedy ranking. \[o_{n}^{1},o_{n}^{2},...,o_{n}^{K}=MTLModel(x_{n};\theta) \tag{1}\] \[r_{n}=\Phi(o_{n}^{1},o_{n}^{2},...,o_{n}^{K}) \tag{2}\] where \(\theta\) denotes the model parameters and \(\Phi\) is usually a function manually tuned to reflect the reward of recommending item \(i_{n}\) to user \(u\) based on the business goals. With the estimated reward scores \(\{r_{n}\}_{1\leq n\leq N}\) for the items in \(\mathcal{C}\), the ranking model can recommend the item sequence \(\mathcal{S}\) consisting of the top \(M\) items to the user \(u\). ### Overall Design of DML Please refer to figure 1 for the overall network architecture of DML. With the existing MTL algorithms[(25)], the equation (1) can be Figure 1. Overall Model Structure of DML further decomposed as below. For simplicity, we omit the subscript \(n\) in this section. \[l^{1},l^{2},...,l^{K}=G(x;\theta_{l}) \tag{3}\] \[o^{k}=F^{k}(l^{k};\theta_{h}^{k}) \tag{4}\] where \(G\) represents the lower level networks that encodes \(x\) to \(K\) different latent spaces with partially or fully shared parameters \(\theta_{l}\). And \(F^{k}\) is the upper-level network for task \(k\), which accepts \(l^{k}\) as input to model objective \(k\) with task-specific higher level parameter \(\theta_{h}^{k}\). Multiple candidate models (Hidler and Kastner, 1995; Kastner, 1995; Kastner, 1995; Kastner, 1996; Kastner, 1997; Kastner, 1998; Kastner, 1999) are proposed to enhance \(G\) with different parameter sharing designs. In this research, rather than \(G\), we focus on the enhancement of upper-level networks for improved prediction performance. First, the shared component of _CTFM_ is introduced, which leverages the attention mechanism to extract relevant information from the inputs of other task towers (the results of Equation (3)) as a complement to the target task. Please note that this attention is well-designed to solve the task-awareness missing issue, for which the excessive encouragement of knowledge sharing is not conducive to the extraction of task-specific knowledge. With our design, the gradients computed from the target task's loss will not impact the inputs of other task towers. \[\hat{l}^{1},\hat{l}^{2},...,\hat{l}^{K}=CTFM(l^{1},l^{2},...,l^{K};\theta_{s}) \tag{5}\] where shared parameters \(\theta_{s}\) is employed across different tasks. Moreover, a separate multi-layer network is introduced for each task to process each element of \(\{\hat{l}^{k}\}_{1\leq k\leq K}\) and generate the hidden representation, based on which accurate prediction can be made. \[h^{k}=H^{k}(\hat{l}^{k};\theta_{t_{0}}^{k}) \tag{6}\] where \(H^{k}\) denotes the task-specific MLP for task \(k\) with separate parameters \(\theta_{t_{0}}^{k}\). Finally, for each task, a dedicated component named _GKD_ is utilized to distill information from the hidden representations for both itself and other tasks to promote prediction consistency across tasks and more precisely model the corresponding objective. \[o^{k}=GKD^{k}(h^{k},\{h^{l}\}_{1\leq j\leq K};\theta_{t_{1}}^{k}) \tag{7}\] where \(GKD^{k}\) is the dedicated component for task \(k\). In contrast to _CTFM_ at Equation (5), we utilize task-specific parameters \(\theta_{t_{1}}^{k}\) here as specialization is usually helpful for upper layer networks. Furthermore, proper operation is implemented to ensure the prediction error of one task does not impact the hidden representations of other tasks. ### Cross Task Feature Mining Please refer to Figure 2(a) for the detailed process of _CTFM_. From the perspective of task towers, the outputs from lower-level networks at Equation (3) can be recognized as the mined features for them. First, for each task, the features mined for related tasks can be leveraged to enhance its prediction performance. Thus, trainable task embeddings, namely \(\{t^{k}\}_{1\leq k\leq K}\), are derived for the tasks to facilitate the learning of general task relations. Second, the importance of features from related tasks can vary per sample. Accordingly, we stack together the items of the set \(\{l^{k}+t^{k}\}_{1\leq k\leq K}\) to derive the matrix \(Mat_{o}\in\mathcal{R}^{K\times d_{0}}\) where \(d_{0}\) is the size of \(l^{k}\) and \(t^{k}\). Third, we use the projection parameters \(W_{q},W_{k},W_{o}\in\mathcal{R}^{d_{0}\times d_{0}}\) to transform \(Mat_{o}\) to the query, key, and value matrix of \(Mat_{q},Mat_{k},Mat_{o}\in\mathcal{R}^{K\times d_{0}}\). And gradient backpropagation from \(Mat_{k}\) and \(Mat_{o}\) to \(Mat_{o}\) is forbidden. Moreover, the scaled dot-product attention (Kastner, 1995) is performed on \(Mat_{q}\), \(Mat_{k}\), and \(Mat_{o}\) to compute the result matrix \(Mat_{r}\in\mathcal{R}^{K\times d_{0}}\). Finally, we add \(Mat_{o}\) to \(Mat_{r}\) for residual connection and further split based on the first axis to return \(\{\hat{l}^{k}\}_{1\leq k\leq K}\). Please note that the aforementioned networks are shared among the tasks to encourage generalizable modeling with parameter sharing. The usual attention mechanisms will cause the task awareness missing problem and can hinder the learning of lower-level networks. Instead, our design can ensure appropriate information separation and reserve the explicit task-specific knowledge by stopping the gradient backflow from \(Mat_{k}\) and \(Mat_{o}\) to \(Mat_{o}\). ### Global Knowledge Distillation Please refer to Figure 2(b) for the detailed process of _GKD_. In contrast to _CTFM_, each task is assigned dedicated parameters for global knowledge distillation. Acting as the last step, we would like to facilitate more flexible modeling by promoting task specialization here. This module accepts the hidden representations for both the corresponding task and other tasks as input. First, a multilayer perceptron(MLP) is utilized to extract valuable global knowledge (\(GK^{k}\in\mathcal{R}^{d_{i}}\)) for the target task \(k\) from the concatenation of all these hidden representations (\(\{h^{j}\}_{1\leq j\leq K}\)). Since the design goal here is to train task-specific MLPs to distill relevant global knowledge while not impacting the hidden representations, the gradient Figure 2. Introduced Components of DML backpropagation from \(GK^{k}\) to the MLP's input is prohibited. Second, we input \(GK^{k}\) and \(H^{k}\) to another MLP with Sigmoid as its last activation function. Then, the weights of \(GK^{k}\)'s different latent dimensions can be dynamically adapted for each sample. The weights are denoted by \(GW^{k}\in\mathcal{R}^{d_{i}}\). Moreover, the weighted global knowledge (\(WGK^{k}\in\mathcal{R}^{d_{i}}\)) is computed with the hadamard product of \(GK^{k}\) and \(GW^{k}\). Finally, \(WGK^{k}\) and \(H^{k}\) are concatenated together as the input for the last MLP to make predictions for task \(k\). ## 3. Experiments In this section, we conduct extensive offline experiments1 and online A/B testing to prove DML's effectiveness. Footnote 1: The code can be found at: [https://github.com/renyj533/mtl-consistency/tree/main](https://github.com/renyj533/mtl-consistency/tree/main). ### Experimental Settings for Public Data #### 3.1.1. Datasets We evaluate our methods on two public datasets. * **MovieLens-1M**(Mori et al., 2017): One of the currently released MovieLens datasets, which contains 1 million movie ratings from 6,040 users on 3,416 movies. * **Amazon**(Mori et al., 2017): A series of datasets consisting of product reviews from Amazon.com. We use the sub-category of "Electronics" including 1.7 million reviews from 192,403 users on 63,001 items. For ML-1M, we introduce the binary classification task of positive rating prediction (\(>\)=4) and the regression task of rating estimation. These two tasks are strictly correlated. For electronics, following (Kang et al., 2018), we first augment the dataset by randomly sampling un-rated items for every user. Moreover, we make sure the number of the un-rated items is the same as the number of the rated items for each user. Furthermore, we introduce two binary classification tasks, namely rating prediction (whether a rating exists) and positive rating prediction. Compared with the tasks of ML-1M, the negative transfer is more likely to occur as the task relationship here is more complex(The pearson correlation coefficient (Kang et al., 2018) of these two labels is around 0.7). Both data are randomly split into the training set, validation set, and test set by the ratio of 8:1:1. #### 3.1.2. Evaluation Metrics The merge function \(\Phi\) in Equation (2) assumes that the model can estimate accurate interaction probabilities for binary classification tasks (e.g. clicking) and absolute values for regression tasks (e.g. watch time). Therefore, instead of the ranking metrics, such as NDCG (Kang et al., 2018) and MRR (Kang et al., 2018), we use the metrics of AUC (Kang et al., 2018) for classification tasks and Mean Squared Error (MSE) (Kang et al., 2018) for regression tasks. Please note that many other recommendation literature, such as (Kang et al., 2018; Wang et al., 2018; Wang et al., 2018), also use similar metrics. For AUC, a bigger value indicates better performance. While, for MSE, it is the smaller the better. #### 3.1.3. Models As _soft parameter sharing_ methods need resources to store and train multiple sets of parameters, they are not widely applied in recommender systems. Thus, we select base models to cover the other three categories. The models include Shared-Bottom(SB)(Shi et al., 2017), MSSM(Kirsh et al., 2017), MMOE(Mori et al., 2018), and PLE(Ple, 2018). MSSM is a recent method belonging to the _Customized Routing_ category and achieves better results than SNR(Kang et al., 2018) and Cross-Stitch(Cros-Stitch, 2018). Though with the same category of _Dynamic Gating_, both MMOE and PLE are tested owing to their popularity. For each base model, we will verify whether DML can achieve additional gains. For reference, we also provide the performance of single task models. #### 3.1.4. Implementation Details For each feature, we use the embedding size 8. As suggested by the original papers, we use 1 level bottom sub-networks for MMOE, MSSM, and SB while 2 levels for PLE. For SB, a sub-network of 1 layer structure with 128 output dimensions is shared by the tasks. For other multi-task models, each bottom level includes three sub-networks, which have the same aforementioned architecture. For MSSM and PLE, task-specific and shared sub-networks are designated. For multi-task models, each task tower is of the three layers MLP structure (128,80,1) and each task is assigned equal loss weight. For the single-task model, each task utilizes the four layers MLP structure (128,128,80,1). For the first two MLPs at Figure 2(b), we utilize the one layer structure with 80 as the output dimension. For the last MLP at Figure 2(b), a one layer structure with 1 as the output size is used. If not explicitly specified, RELU (Kang et al., 2018) is used as the default activation function. All models are implemented with tensorflow (Abadi et al., 2016) and optimized using the Adam (Kingmae et al., 2014) optimizer with learning rate 0.001 and mini-batch size 512. We run 20 times for each test to report the results. \begin{table} \begin{tabular}{l l l l|l l} \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{ML-1M} & \multicolumn{2}{c}{Electronics} \\ \cline{2-6} & \(AUC\) & \(MSE\) & \(\begin{matrix}Consistent\\ Ratio\end{matrix}\) & \(AUC_{rate}\) & \(AUC_{pos}\) \\ \hline \hline Single Task & 0.8066 & 0.7741 & 0.7154 & 0.7608 & 0.7334 \\ \hline SB & 0.8100 & 0.7724 & 0.7530 & 0.7876 & 0.7608 \\ SB+DML & 0.8115* & 0.7648* & **0.7649*** & 0.7890* & 0.7631* \\ \hline MSSM & 0.8128 & 0.7651 & 0.7519 & 0.7883 & 0.7627 \\ MSSM+DML & 0.8141* & 0.7611* & 0.7637* & 0.7892* & 0.7641* \\ \hline MMOE & 0.8105 & 0.7688 & 0.7507 & 0.7888 & 0.7628 \\ MMOE+DML & 0.8135* & 0.7591* & 0.7596* & **0.7897*** & **0.7644*** \\ \hline PLE & 0.8122 & 0.7606 & 0.7514 & 0.7885 & 0.7627 \\ PLE+DML & **0.8151*** & **0.7533*** & 0.7631* & 0.7893* & 0.7640* \\ \hline \end{tabular} \end{table} Table 1. The overall performance. The bold-face font denotes the winner in that column. Moreover, the *** symbol denotes introducing DML achieves significant (p ¡ 0.05 for one-tailed t-test) gain over the corresponding baseline. \begin{table} \begin{tabular}{l l l l|l l} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{ML-1M} & \multicolumn{2}{c}{Electronics} \\ \cline{2-5} & \(AUC\) & \(MSE\) & \(AUC_{rate}\) & \(AUC_{pos}\) \\ \hline \hline _MSSM_ & 0.8128 & 0.7651 & 0.7883 & 0.7627 \\ _MSSM_ + \(CTFM\) & 0.8134 & 0.7654 & 0.7891 & 0.7636 \\ _MSSM_ + \(GKD\) & 0.8129 & 0.7636 & 0.7886 & 0.7633 \\ _MSSM_ + \(DML_{\text{e0}}\) & 0.8138 & 0.7625 & 0.7884 & 0.7631 \\ _MSSM_ + \(DML\) & **0.8141** & **0.7611** & **0.7892** & **0.7641** \\ \hline _PLE_ & 0.8122 & 0.7606 & 0.7885 & 0.7627 \\ _PLE_ + \(CTFM\) & 0.8133 & 0.7603 & 0.7890 & 0.7633 \\ _PLE_ + \(GKD\) & 0.8137 & 0.7574 & 0.7887 & 0.7634 \\ _PLE_ + \(DML_{\text{e0}}\) & 0.8141 & 0.7536 & 0.7886 & 0.7629 \\ _PLE_ + \(DML\) & **0.8151** & **0.7533** & **0.7893** & **0.7640** \\ \hline \end{tabular} \end{table} Table 2. Further Analysis Results ### Overall Performance for Public Data Please refer to Table 1 for the overall results. First, DML achieves significant gains across all these tested multi-task models on the two public datasets, which shows DML's wide compatibility. Second, DML-enhanced PLE and MMOE get the best performance for MovieLens and Electronics respectively. Considering their wide application in recommender systems, the results are as expected. Third, the multi-task models perform better than the single task models thanks to the knowledge transfer between tasks. Besides AUC and MSE, DML should help to foster task consistency with _CTFM_ and _GKD_. As the tasks of MovieLens are rigorously correlated, we verify whether DML really enhances task consistency on this data. First, we construct pairs of samples with different rating scores and count the pair numbers. Second, we count the number of pairs, for which the prediction scores of both tasks are in the same pair order as the rating score. The enhancement of the pair order consistency among the two prediction scores and rating score should positively contribute to the performance. Then, we can compute the metric of 'Consistency Ratio'. The listed data in Table 1 agree with our anticipation. (For Shared-Bottom, we also observe more pairs, for which predictions of both tasks are in rating score's reverse order. This can explain its worse performance in spite of the better consistency ratio.) ### Further Analysis on Public Data We select the two latest algorithms of PLE[(17)] and MSSM[(5)] to appraise the value of \(DML\)'s components, namely \(CTFM\) and \(GKD\). Without the stop gradient operation, \(CTFM\) will be very similar to the common attention mechanism. To prove the benefit of \(CTFM\)'s design, we also add the assessment for \(DML_{e0}\), which reserve the design of \(GKD\) while remove the gradient blocking operation of \(CTFM\). Please refer to Table 2 for the evaluation results. First, \(CTFM\) and \(GKD\) both contribute considerable gains over the base model. Second, as the integrated model, \(DML\) enhances the performance further. Third, \(DML_{e0}\) is consistently worse than \(DML\), which corroborates the value of reserving task-awareness. Compared with \(CTFM\) and \(GKD\), \(DML_{e0}\) performs better on MovieLens while much worse on Electronics. The task relationship of Electronics is more complex and negative transfer across tasks usually exhibits more severe impact due to task conflicts. In this case, compared with vanilla attention, \(CTFM\) obtains substantial gains. ### Online A/B Testing DML is applied to the ranking stage[(4)] of an industrial large-scale news recommender system. PLE [(17)] is utilized as the base model. And the main prediction tasks are the binary classification task of Click Through Rate (CTR) and the regression task of item watch time. First, after the model converge by training with billions of samples, the AUC metric for CTR consistently increases 0.12% and the MSE metric for watch time decreases 0.14%. Moreover, the most important online metrics include effective PV(count of Page Views with watch time exceeding a threshold) and total watch time. We randomly distributed online users to two buckets with the base PLE model or PLE+DML model and evaluated the performance for two weeks. DML achieves significant (p\(<\)0.05) gains over the base model by 1.22% for effective PV and 0.61% for total watch time. DML has been deployed to our online environment based on the results. ## 4. Conclusion In this papaer, we propose the framework of **D**eep **M**utual **L**earning across task towers**(**DML**), which is compatible with various backbone multi-task networks. Extensive offline experiments help to verify DML's effectiveness on multiple real-world datasets and across various base models. Moreover, thorough ablation studies are carried out to verify and understand the value of each newly introduced module. Finally, DML achieves significant online gains and has already been deployed to the online platform.
2309.09612
Quantifying ionization in hot dense plasmas
Ionization is a problematic quantity in that it does not have a well-defined thermodynamic definition, yet it is a key parameter within plasma modelling. One still therefore aims to find a consistent and unambiguous definition for the ionization state. Within this context we present finite-temperature density functional theory calculations of the ionization state of carbon in CH plasmas using two potential definitions: one based on counting the number of continuum electrons, and another based on the optical conductivity. Differences of up to 10\% are observed between the two methods. However, including "Pauli forbidden" transitions in the conductivity reproduces the counting definition, suggesting such transitions are important to evaluate the ionization state.
Thomas Gawne, Sam M. Vinko, Justin S. Wark
2023-09-18T09:33:41Z
http://arxiv.org/abs/2309.09612v3
# Quantifying ionization in hot dense plasmas ###### Abstract Ionization is a problematic quantity in that it does not have a well-defined thermodynamic definition, yet it is a key parameter within plasma modelling. One still therefore aims to find a consistent and unambiguous definition for the ionization state. Within this context we present finite-temperature density functional theory calculations of the ionization state of carbon in CH plasmas using two potential definitions: one based on counting the number of continuum electrons, and another based on the optical conductivity. Differences of up to 10% are observed between the two methods. However, including "Pauli forbidden" transitions in the conductivity reproduces the counting definition, suggesting such transitions are important to evaluate the ionization state. ## I Introduction Dense plasmas comprise complex, inherently quantum states of matter, covering a wide variety of different temperatures and densities. An exact treatment of a plasma would involve solving a many-body Schrodinger or Dirac equation that includes the full interactions between the particles. Given the extraordinarily large number of electrons and ions in realistic plasmas, this is impossible. In lie of directly solving the many-body equation, simpler models that only deal with the relevant physics required to make a set of predictions are used. Well-known examples include finite-temperature density functional theory (FT-DFT) [1; 2], collisional-radiative atomic-kinetics models [3], and hydrodynamic modelling [4]. Of course, it is important to be sure that plasma models are able to make good predictions. This is important for the general understanding of plasmas [5; 6; 7], and is especially pertinent in light of recent successes towards ignition in inertial confinement fusion [8; 9], where strong plasma modelling continues to be critical to improving gain [10]. Within plasma modelling there are a number of parameters that are used to describe a plasma and make predictions about experimental observables; often the temperature \(T\), density \(\rho\), and the ionization state of the ions \(Z\). However, unlike the first two parameters, \(Z\) does not have a well-defined thermodynamic definition [11]. This is not a trivial issue. The ionization state is an input parameter to many equations, so the choice of definition will have a cascading effect on the evolution of a plasma. It is also important to understand how the choice of definition used in plasma modelling relates to experimental observations. As plasma models deal with time-dependent ionization dynamics, they must have an unambiguous definition of ionization. To that end, the ionization state is often defined by the number of electrons bound to ions, with the remaining electrons considered purely free. The number of electrons free from an ion is its charge state (i.e. \(Z=0\) for a neutral atom), and the mean charge state of the system is used to represent its mean ionization state (MIS). This definition is then fed directly into equations governing the physical properties of the plasma. For example, in this bound-or-free electron picture, continuum lowering is thought to be treatable using models of ionization potential depression (IPD) such as the Stewart-Pyatt (SP) model [12]. However, in the past decade, a substantial body of research has emerged indicating a lack of consensus between widely-employed IPD models and experimental observations in dense plasma systems [13; 14; 15; 16; 17], raising concerns over our predictive capabilities in this challenging regime. An experiment at the National Ignition Facility (NIF) [16] found that the MIS of hot dense CH plasmas inferred from x-ray Thomson scattering (XRTS) measurements was substantially higher than predicted by plasma modelling. The discrepancy was attributed to problems with the IPD models used in evaluating the ionization, as simply increasing the MIS seemed to reproduce the XRTS data. However, more recent experimental measurements in hot dense Be [17] found that XRTS spectra could not be reproduced by artificially increasing the MIS, suggesting the reason for the discrepancies between these experiments and plasma models is more complex. The approach of assuming bound-or-free electrons provides a conceptually simple and intuitive definition of ionization. Therefore, it is often used in plasma models where the effect of delocalized electrons needs to be considered, such as in collisional-radiative atomic-kinetics simulations [18], IPD models, in the Chihara decomposition [19; 20] for XRTS modelling, and in atomic cascade calculations [21]. In the limit that electronic states can be distinguished as strongly localized and around ions or fully delocalized, the bound-or-free approach should be adequate to represent the MIS of a system. However, recent first principles calculations have shown that such a simple separation of electronic states is generally not possible in hot dense systems [22]. At the same time, it is not clear that the mean charge state should always be representative of the ionization state accessed in spectroscopy experiments involving high energy density systems. The motivation of this work is in exploring whether a definition of ionization based on bound or free electrons can be a suitable definition. For the present investigation, the equivalent definition of the mean charge state in a first-principles DFT calculation is to count the number of electrons in the continuum (\(c\)) bands \(N_{\text{cond}}\): \[\left\langle Z\right\rangle_{\text{count}}=\frac{1}{N_{i}}\sum_{\mathbf{k}}w_{\mathbf{k} }\sum_{n\in c}f(\epsilon_{\mathbf{k},n})=\frac{N_{\text{cond}}}{N_{i}}\,, \tag{1}\] where \(N_{i}\) is the number of ions, \(\epsilon_{\mathbf{k},n}\) is the eigenenergy of the Bloch state \(\ket{\psi_{\mathbf{k},n}}\), \(n\) is the band number, \(\mathbf{k}\) is the crystal momentum, \(w_{\mathbf{k}}\) is the \(k\)-point weight, and \(f(\epsilon_{\mathbf{k},n})\) is the Fermi-Dirac occupation number of the state and includes the state's degeneracy. Recently, attempts have been made to define ionization based on other physical properties of systems [23; 24]. One such definition [23] is based on the real component of the frequency-dependent optical conductivity \(\sigma(\omega)\). As the conductivity can be measured experimentally, it is proposed that this definition, outlined below, would provide direct access to the MIS. To derive the MIS, the optical conductivity is assumed to be described by the Kubo-Greenwood (KG) formula [25; 26]: \[\begin{split}\sigma(\omega)&=\frac{\pi\hbar e^{2}}{ m_{e}V}\sum_{\mathbf{k}}w_{\mathbf{k}}\sum_{n,m}\left[f(\epsilon_{\mathbf{k},m})-f( \epsilon_{\mathbf{k},n})\right]g_{nm}^{\mathbf{k}}\\ &\times\delta\left(\epsilon_{\mathbf{k},n}-\epsilon_{\mathbf{k},m}-\hbar \omega\right)\,,\end{split} \tag{2}\] where \(V\) is the volume of the system, and \(g_{nm}^{\mathbf{k}}\) are the dipole transition matrix elements: \[g_{nm}^{\mathbf{k}}=\frac{\hbar^{2}}{3m_{e}}\frac{\left|\left\langle\psi_{\mathbf{k},m}\right|\nabla\left|\psi_{\mathbf{k},n}\right\rangle\right|^{2}}{\epsilon_{\bm {k},n}-\epsilon_{\mathbf{k},m}}\,. \tag{3}\] For Bloch states \(\ket{\psi_{\mathbf{k},n}}=e^{i\mathbf{k}\cdot\mathbf{r}}\ket{u_{\mathbf{k},n}}\), it can be shown that [27; 28]: \[\begin{split}\frac{\hbar^{2}}{im_{e}}\bra{\psi_{\mathbf{k},m}}\nabla \ket{\psi_{\mathbf{k},n}}&=\delta_{m,n}\nabla_{\mathbf{k}}\epsilon_{\bm {k},m}\\ &+\left(\epsilon_{\mathbf{k},n}-\epsilon_{\mathbf{k},m}\right)\bra{u_{\bm {k},m}}\nabla_{\mathbf{k}}u_{\mathbf{k},n}\,.\end{split} \tag{4}\] The Thomas-Reiche-Kuhn (TRK) sum rule [29; 30; 31] states that, for a complete basis set, the sum of the dipole matrix elements is unity. In momentum form, this is: \[2\sum_{\begin{subarray}{c}n\\ \epsilon_{\mathbf{k},n}\neq\epsilon_{\mathbf{k},m}\end{subarray}}g_{nm}^{\mathbf{k}}=2 \sum_{n}g_{nm}^{\mathbf{k}}\tau(\epsilon_{\mathbf{k},n}-\epsilon_{\mathbf{k},m})=1\,, \tag{5}\] where \(\tau(\epsilon_{\mathbf{k},n}-\epsilon_{\mathbf{k},m})=1-\delta(\epsilon_{\mathbf{k},n}- \epsilon_{\mathbf{k},m})\) ensures the \(\epsilon_{\mathbf{k},n}=\epsilon_{\mathbf{k},m}\) terms are excluded. The number of electrons in the system \(N_{e}\) can therefore be recovered by including a sum over the occupation numbers: \[\begin{split} N_{e}&=\sum_{\mathbf{k}}w_{\mathbf{k}}\sum _{m}f(\epsilon_{\mathbf{k},m})\\ &=2\sum_{\mathbf{k}}w_{\mathbf{k}}\sum_{m}f(\epsilon_{\mathbf{k},m})\sum_{n} g_{nm}^{\mathbf{k}}\tau(\epsilon_{\mathbf{k},n}-\epsilon_{\mathbf{k},m})\\ &=\sum_{\mathbf{k}}w_{\mathbf{k}}\sum_{n,m}\left[f(\epsilon_{\mathbf{k},n})- f(\epsilon_{\mathbf{k},m})\right]g_{mn}^{\mathbf{k}}\tau(\epsilon_{\mathbf{k},n}-\epsilon_{\mathbf{k},m}) \,,\end{split} \tag{6}\] where the relationship \(g_{nm}^{\mathbf{k}}=-g_{mn}^{\mathbf{k}}\) for \(n\neq m\) and index swapping are used to derive the last line. A similar-looking sum can be constructed by integrating the KG conductivity over all frequencies: \[\begin{split} S&=\frac{2m_{e}V}{\pi e^{2}}\int_{0}^ {\infty}\sigma(\omega)d\omega\\ &=\sum_{\mathbf{k}}w_{\mathbf{k}}\sum_{n,m}\left[f(\epsilon_{\mathbf{k},m})- f(\epsilon_{\mathbf{k},n})\right]g_{nm}^{\mathbf{k}}\,,\end{split} \tag{7}\] where the delta function in Eq. (2) is used to derive the second line. Ref. [23] has proposed that by splitting the conductivity into transitions between conduction (\(c\)) and valence (\(v\)) states, so that \(\sigma=\sigma^{c\to c}+\sigma^{v\to c}+\sigma^{v\to v}\), the MIS can be found by only considering the \(c\to c\) conductivity: \[\left\langle Z\right\rangle_{\text{TRK}}=\frac{2m_{e}V}{\pi e^{2}N_{i}}\int_{0} ^{\infty}\sigma^{c\to c}(\omega)d\omega=\frac{N_{\text{eff}}}{N_{i}}\,, \tag{8}\] where \(N_{\text{eff}}\) is the number of conduction electrons calculated in this scheme. Note that this is akin to deciding which states should be considered bound or free. However, care is needed: the sums in \(S\) include the \(\epsilon_{\mathbf{k},n}=\epsilon_{\mathbf{k},m}\) terms, whereas the TRK sum rule excludes them. When \(n=m\), Eq. (4) shows that \(\bra{\psi_{\mathbf{k},n}}\nabla\ket{\psi_{\mathbf{k},n}}\) is not always zero, but is proportional to the curvature of the band at \(\mathbf{k}\). Meanwhile, in the limit of zero energy difference, the difference in the eigenenergies in \(g_{nn}^{\mathbf{k}}\) and the occupation numbers in Eq. (2) resolve to the gradient of the Fermi-Dirac distribution at \(\epsilon_{\mathbf{k},n}\), so the \(n=m\) terms are finite and not necessarily zero [32]. Therefore, if the \(g_{nn}^{\mathbf{k}}\) terms are included, \(S\geq N_{e}\)[32]. In practice, converging the sums \(N_{e}\) and \(S\) requires a huge number of bands, even at moderate temperatures, so it can appear that \(S<N_{e}\). If the \(n=m\) terms are included, the number of electrons extracted from \(\sigma^{c\to c}\) (which includes \(g_{nn}^{\mathbf{k}}\) terms) may appear larger than it should when compared with the equivalent TRK sum. The conductivity-based definition in Eq. (8) has recently been applied to predict the MIS found via XRTS measurements of hot dense Be plasmas generated in implosions at the NIF [17]. The conductivity measure was found to agree well with the XRTS-inferred MIS at the highest temperatures and densities produced (\(T=150\) eV, \(\rho\geq 30\) g cm\({}^{-3}\)), while the counting measure using the SP model under-predicts the MIS. However, at lower temperatures and densities (\(T=100\) eV, \(\rho\sim 10\) g cm\({}^{-3}\)) the counting measure agrees well with the experimental data, while the conductivity definition appears to over-predict the MIS. Furthermore, at mass densities \(\rho\lesssim 10\) g cm\({}^{-3}\), differences in the predictions between the counting and conductivity definitions grow further as the temperature decreases, even in conditions where the counting definition is expected to be applicable [17]. Calculating the effective ionization state of a system using the \(c\to c\) conductivity has been explored previously in condensed matter physics. Notably, when applied to experimental measurements of the conductivity in alkali metals in the early 70's, it was found that \(N_{\rm eff}>1\)[33], causing consternation since this implies that the ionization state exceeds the number of electrons in the conduction band, \(N_{\rm cond}\). Kobayashi and Watabe [34] showed how this problem could be resolved by completing the TRK sum by including Pauli forbidden transitions in the calculation of the ionization state. An equivalent term does not appear to have been included in the ionization calculations of Refs. [17; 23]. In this letter, we calculate the MIS of hot dense CH plasmas from first-principles using FT-DFT, using both the counting definition in Eq. (1) and the conduction-based definition of Eq. (8). Differences of up to 10% are seen in the MIS between these two methods, with the differences becoming larger at low temperatures. We extend Kobayashi and Watabe's arguments to finite temperatures, and examine the effect on the ionization state predicted by the conductivity. Like Kobayashi and Watabe, we find that including \(v\to c\) transitions recovers the counting result. ## II Counting the conduction electrons To calculate the effective number of conduction electrons, the TRK sum over states is split into a sum over conduction states (\(n\in c\)) and valence states (\(n\in v\)): \[\begin{split} N_{e}&=\sum_{\mathbf{k}}w_{\mathbf{k}}\sum_{n,m}\left[f(\epsilon_{\mathbf{k},n})-f(\epsilon_{\mathbf{k},m})\right]g^{\mathbf{k}}_{mn} \tau(\epsilon_{\mathbf{k},n}-\epsilon_{\mathbf{k},m})\\ &=\sum_{\mathbf{k}}w_{\mathbf{k}}\left(\sum_{n\in c}+\sum_{n\in v}\right) \left(\sum_{m\in c}+\sum_{m\in v}\right)g^{\mathbf{k}}_{mn}\\ &\quad\times\left[f(\epsilon_{\mathbf{k},n})-f(\epsilon_{\mathbf{k},m}) \right]\tau(\epsilon_{\mathbf{k},n}-\epsilon_{\mathbf{k},m})\,,\end{split} \tag{9}\] It is proposed that only the \(c\to c\) transitions need to be used to calculate the number of conduction electrons [34; 23]: \[\begin{split} N_{\rm eff}&=\sum_{\mathbf{k}}w_{\mathbf{k}} \sum_{n\in c}\sum_{m\in c}\left[f(\epsilon_{\mathbf{k},n})-f(\epsilon_{\mathbf{k},m}) \right]g^{\mathbf{k}}_{mn}\\ &\quad\times\tau(\epsilon_{\mathbf{k},n}-\epsilon_{\mathbf{k},m})\\ &=2\sum_{\mathbf{k}}w_{\mathbf{k}}\sum_{n\in c}f(\epsilon_{\mathbf{k},n}) \sum_{m\in c}g^{\mathbf{k}}_{mn}\tau(\epsilon_{\mathbf{k},n}-\epsilon_{\mathbf{k},m})\,. \end{split} \tag{10}\] However, Eq. (10) does not include a complete sum rule [34]. To do so, the valence states need to be included again using: \[\begin{split} N_{\rm eff}&=2\sum_{\mathbf{k}}w_{\mathbf{k}} \sum_{n\in c}f(\epsilon_{\mathbf{k},n})\left(\sum_{m}-\sum_{m\in v}\right)g^{\mathbf{k }}_{mn}\\ &\quad\times\tau(\epsilon_{\mathbf{k},n}-\epsilon_{\mathbf{k},m})\\ &=\sum_{\mathbf{k}}w_{\mathbf{k}}\sum_{n\in c}f(\epsilon_{\mathbf{k},n})\\ &\quad-2\sum_{\mathbf{k}}w_{\mathbf{k}}\sum_{n\in c}f(\epsilon_{\mathbf{k},n} )\sum_{m\in v}g^{\mathbf{k}}_{mn}\tau(\epsilon_{\mathbf{k},n}-\epsilon_{\mathbf{k},m})\\ &\equiv N_{\rm cond}-\Delta N_{\rm eff}\,.\end{split} \tag{11}\] The \(\Delta N_{\rm eff}\) term relates the number of ionized electrons from the conduction definition to the counting definition. \(\Delta N_{\rm eff}<0\), so it represents an increase in the number of conduction electrons from the optical conductivity as compared with just counting them. In the cold limit, this term only includes transitions between the valence states (which are fully occupied) and the occupied conduction states. In other words, this term only includes forbidden transitions due to Pauli blocking [34]. In the finite-temperature limit, this term does not quite represent transitions between Pauli blocked states as there is no statistical weighting accounting for the thermal occupation of the valence states at high temperatures. This derivation reveals the link between the conductivity-based definition of ionization proposed in Ref. [23] and the simple electron counting method. ## III Results and discussion We proceed to examine the degree to which the \(c\to c\) conductivity over-predicts the ionization state via FT-DFT simulations of CH plasmas at different temperatures, performed using the Abinit v8.10.3 package [35; 36; 37]. We choose CH so that we can compare our calculations with experimental measurements of the MIS from Ref. [16]. We simulate supercells containing 32 atoms (C\({}_{16}\)H\({}_{16}\)), with the lattice parameters chosen to give a mass density of 6.74 g cm\({}^{-3}\). The electron temperature was varied from 1-140 eV. The density and temperatures were chosen to match the experimental conditions of Ref. [16]. In order to have a well-converged TRK sum rule the vast majority of occupied bands need to be calculated explicitly. For each calculation the number of bands used ensured the sum rule was at least 97% complete (4500-7000 bands). To ensure there are no high-level symmetries in the system the ions were randomly placed in the cell. Although this effects the specific shapes of the density of states and the conductivity, the MIS was found to be largely independent of the exact ion positions. The _Atompaw_ code [38] was used to generate projector augmented-wave (PAW) [39] potentials for the C and H atoms. Due to the high temperatures involved, all of the electrons are treated as valence. The PAW potentials were generated for atoms at zero-temperature, and previous studies have shown that such potentials can be applied to investigate thermal ionization effects [40]. A PAW core radius of 0.5 Bohr is used for both atoms. For the ion positions that were sampled, there was no overlap of the cores. The PBE form of the generalized gradient approximation [41] was used for the exchange-correlation functional. The mean ionization state of CH versus the electron temperature using the different definitions is plotted in Fig. (1). To avoid over-counting the electrons when integrating the conductivity, the TRK definitions were calculated using the sums over states, as in Eqs. (10) and (11). The error bars on the TRK calculations represent the uncertainty in the MIS due to the sum rule being incomplete. The red circles denote the MIS calculated using \(N_{\text{eff}}\), and the blue squares from the counting definition, Eq. (1). Across the entire temperature range the \(c\to c\) conductivity definition predicts a higher MIS than the counting definition, up to 10% at the lowest temperatures. This appears to be in agreement with previous observations [33; 34; 17]. Additionally, the red curve has an interesting behaviour whereby the MIS increases as the temperature decreases below 40 eV. This is in contrast to the blue curve which monotonically increases with temperature, and is approximately constant at \(\left\langle Z\right\rangle_{\text{count}}=5.0\) below 40 eV. When the correction term \(\Delta N_{\text{eff}}\) is included (green triangles), the MIS from the conductivity reproduces the counting definition. There is a small deviation still at 1 eV, though this is likely due to the sum rule still being incomplete. The reproduction of the counting MIS should not be surprising as the \(\Delta N_{\text{eff}}\) term directly connects \(N_{\text{eff}}\) to \(N_{\text{cond}}\). These results suggest that the \(c\to c\) conductivity-based definition, as proposed in Ref. [23], does not provide the correct ionization state as it does not use a complete sum rule. We note that the correction term \(\Delta N_{\text{eff}}\) can be substantial, even at relatively high temperatures. We can compare our results with experimental measurements of the CH ionization state from Ref. [16]. Their hydrodynamic simulations suggested a mean mass density of 6.74 g cm\({}^{-3}\), at which their XRTS spectrum could be fitted with a mean temperature of \(86\pm 20\) eV and a MIS of \(5.92\pm 0.15\). This ionization state is substantially larger than is predicted by our calculations using the counting definition. The MIS calculated from \(N_{\text{eff}}\) is closer to the experimental data, though it is is still lower across the inferred temperature range. Once the correction \(\Delta N_{\text{eff}}\) is applied, the conductivity-derived ionization is further from the experimental data. In Ref. [17], the MIS from \(N_{\text{eff}}\) appears to agree well with the XRTS spectrum of Be at very high densities and temperatures. We ascribe this apparent improvement in the conductivity definition over the electron counting definition in these extreme conditions precisely to the fact that \(N_{\text{eff}}\) overestimates the number of continuum electrons as compared with \(N_{\text{count}}\). At lower temperatures it becomes apparent that \(N_{\text{eff}}\) overestimates the MIS. Including a term involving transitions between valence and occupied conduction states - transitions which at low temperatures are Pauli forbidden [34] - allows for the conductivity to reproduce the number of electrons in the continuum. Our results suggest these "Pauli forbidden" transitions are important in evaluating the MIS from the conductivity at finite temperatures. Of course, there remains the question of what are the additional electrons that XRTS measurements seem to suggest there should be compared with the counting definition. Ref. [17] showed that simply increasing the MIS would still not reproduce their XRTS data as the elastic feature would need to be larger than was measured. Instead, they found that the discrepancies in the XRTS spectrum can be explained by the delocalization of the K-shell wavefunctions, caused by the continuum electron density screening the atomic nuclei. As the screening electron density increases with temperature and compression, so does the delocalization, resulting in greater deviations of the XRTS spectrum from modelling. Using a simple self-consistent screening model, Ref. [17] were able reproduce their XRTS spectra. In IPD models, the bound electrons are considered to be strongly localized around atomic sites, hence they cannot account for their gradual delocalization. Among other effects, the delocalization of the K-shell wavefunctions results in them overlapping, allowing their electrons to move between atomic sites. For completeness, we note that electrons moving within the K-shell states would be represented by the \(v\to v\) transitions in the optical conductivity, which are not included in \(N_{\text{eff}}\). In the present calculations, the contribution of these transitions is extremely small, but Figure 1: Mean ionization state of CH (\(N/N_{\text{CH}}\)) calculated using different methods. The red circles represent the MIS using \(N=N_{\text{eff}}\) as suggested in Ref. [23]. The green triangles use \(N=N_{\text{eff}}+\Delta N_{\text{eff}}\). The blue squares denote the counting definition, \(N=N_{\text{cond}}\). non-zero at temperatures above 40 eV. So while these contributions are not large enough to meaningfully increase the MIS directly, they still indicate the K-shell electrons are mobile at high temperatures. We would therefore agree that more detailed accounting for the effect of atomic electrons may explain the higher ionization seen in XRTS experiments compared with counting the continuum electrons. To conclude, we have examined the applicability of the bound-or-free electron model to describing the MIS in high energy-density systems by considering two potential definitions of ionization: one based on counting the number of electrons in the continuum, and another recently proposed definition based on the optical conductivity [23]. For the latter definition, it is shown that unless transitions between the valence and conduction states are included - transitions that at low temperatures are Pauli forbidden [34] - then this definition will over-count the number of electrons in the continuum. It is also shown that only counting the continuum electrons predicts a lower MIS compared to that inferred from XRTS measurements at high temperatures and densities. However, in such extreme conditions, electrons in states that may be typically classed as bound gradually delocalize [17]. In the calculations presented here, at sufficiently high temperatures, the valence-valence conductivity is non-zero, which implies the electrons in the valence bands are mobile. We therefore conclude that more detailed accounting of the contribution of atomic electrons to the MIS may explain the higher ionization states in XRTS experiments compared with counting the continuum electrons. While this work focuses on CH to allow us to link to experiments, the results are general and are readily applicable to other materials across the full range of plasma conditions. ## Acknowledgements T.G., J.S.W. and S.M.V. acknowledge support from AWE via the Oxford Centre for High Energy Density Science (OxCHEDS). S.M.V. acknowledges support from the Royal Society. J.S.W. and S.M.V. acknowledge support from the UK EPSRC under grants EP/P015794/1 and EP/W010097/1. S.M.V. is a Royal Society University Research Fellow.
2309.16732
Kaniadakis entropy-based characterization of IceCube PeV neutrino signals
Kaniadakis $\kappa$-thermostatistics is by now recognized as an effective paradigm to describe relativistic complex systems obeying power-law tailed distributions, as opposed to the classical (exponential-type) decay. It is founded on a non-extensive one-parameter generalization of the Bekenstein-Hawking entropy, which, in the cosmological framework, gives rise to modified Friedmann equations on the basis of the gravity-thermodynamic conjecture. Assuming the entropy associated with the apparent horizon of the Friedmann-Robertson-Walker (FRW) Universe follows Kaniadakis prescription, in this work we analyze the observed discrepancy between the present bound on the Dark Matter relic abundance and the IceCube high-energy ($\sim 1\,\mathrm{PeV}$) neutrinos. We show that this tension can be alleviated in the minimal model of Dark Matter decay with Kaniadakis-governed Universe evolution, while still considering the 4-dimensional Yukawa coupling between Standard Model and Dark Matter particles. This argument phenomenologically supports the need for a Kaniadakis-like generalization of the Boltzmann-Gibbs-Shannon entropy in the relativistic realm, opening new potential scenarios in high-energy astroparticle physics.
Massimo Blasone, Gaetano Lambiase, Giuseppe Gaetano Luciano
2023-09-27T14:28:41Z
http://arxiv.org/abs/2309.16732v1
# Kaniadakis entropy-based characterization of IceCube PeV neutrino signals ###### Abstract Kaniadakis \(\kappa\)-thermostatistics is by now recognized as an effective paradigm to describe relativistic complex systems obeying power-law tailed distributions, as opposed to the classical (exponential-type) decay. It is founded on a non-extensive one-parameter generalization of the Bekenstein-Hawking entropy, which, in the cosmological framework, gives rise to modified Friedmann equations on the basis of the gravity-thermodynamic conjecture. Assuming the entropy associated with the apparent horizon of the Friedmann-Robertson-Walker (FRW) Universe follows Kaniadakis prescription, in this work we analyze the observed discrepancy between the present bound on the Dark Matter relic abundance and the IceCube high-energy (\(\sim 1\,\)PeV) neutrinos. We show that this tension can be alleviated in the minimal model of Dark Matter decay with Kaniadakis-governed Universe evolution, while still considering the 4-dimensional Yukawa coupling between Standard Model and Dark Matter particles. This argument phenomenologically supports the need for a Kaniadakis-like generalization of the Boltzmann-Gibbs-Shannon entropy in the relativistic realm, opening new potential scenarios in high-energy astroparticle physics. ## I Introduction The IceCube Neutrino Observatory, a neutrino telescope within the glacial ice of the Geographic South Pole, extends over \(1\,\)km\({}^{3}\) of ice from roughly \(10^{3}\,\)m under the surface [1]. It was originally designed to search for neutrino sources in the TeV regime to explore the highest-energy astrophysical processes [1; 2]. Interestingly enough, some unexpected neutrino-initiated cascade events were also collected with PeV energies. While being initially attributed to astrophysical objects [3; 4], these exotic signals were later understood to be likely unrelated to known hot-spots, like supernova remnants or active galactic nuclei [5]. Although gamma-ray bursts remain potential source candidates [6; 7], the most credited assumption is that these neutrinos may have been produced by the heavy decaying Dark Matter (DM) [8; 9; 10; 11; 12; 13; 14; 15]. In [16] Chianese and Merle speculated on the decay of a hypothetical thermal relic density of PeV scale DM via the minimal (4-dimensional) DM-neutrino Yukawa-like coupling \(\mathcal{L}_{d=4}=y\,\bar{L}\cdot H\,\chi\), where \(\bar{L}\), \(H\) and \(\chi\) are the left-handed lepton doublet, Higgs doublet and the DM particle, respectively, while \(y\) quantifies the interaction. Here, we have dropped for simplicity the index for the mass eigenstates of the three active neutrinos1. However, considerations on the optimal lifetime \(\tau\sim 10^{28}\,\)sec of PeV DM [26; 27] reveal that such a coupling fails to account for both the PeV DM relic abundance and the decay rate needed for IceCube [16; 15; 28]. Though alternative mechanisms have been subsequently invoked, including the existence of a secluded DM sector [29], freeze-out with resonantly enhanced annihilation [30] or freeze-in [31; 32; 33], no definitive solution to the problem of IceCube PeV neutrinos has yet emerged. Footnote 1: For a recent discussion on whether to consider mass or flavor states as active part of neutrino interactions, see [17; 18; 19; 20; 21; 22; 23; 24; 25]. The DM model of [16] is framed in the standard General Relativity (GR). Nevertheless, empirical evidences from Type Ia Supernovae, CMB radiation and large-scale structures indicate that Einstein's theory and the ensuing cosmological Friedmann equations are to be properly corrected to comply with phenomenology and, in particular, to explain the late-time accelerating expansion of the Universe and the inflationary scenario [34]. In the light of the gravity-thermodynamic conjecture [35], it is known that the Friedmann equations ruling the Universe evolution can be derived from the first law of thermodynamics on the apparent horizon of the Universe [36; 37; 38; 39; 40; 41; 42; 43; 44] along with the holographic principle and the Bekenstein-Hawking entropy [45; 46]. Recently, arguments from different perspectives have converged on the idea that the conventional entropy-area law should be somehow generalized due to quantum gravitational [47; 48; 49; 50; 51; 52] and/or non-extensive [53; 54; 55; 56; 57; 58] corrections. Among the most prominent examples of the latter class, Kaniadakis entropy arises from the effort to extend the classical Boltzmann-Gibbs statistics to the special relativistic context [59; 60; 61; 62]. In turn, the associated distribution computed through the maximum entropy principle is a one-parameter continuous deformation of the Maxwell-Boltzmann function, exhibiting power-law tails instead of the canonical exponential behavior. Kaniadakis framework has so far been tested successfully for many high-energy systems, such as cosmic rays [60], plasma [63] and open stellar clusters [64]. In parallel, one advantage of Kaniadakis entropy in Cosmology is the non-trivial impact it has on the predicted history of the Universe, which gets modified toward improving the \(\Lambda\)CDM model phenomenologically [65; 66; 67; 68; 69; 70; 71; 72] (see also [73] for a recent review of Kaniadakis entropy applications in Gravity and Cosmology). In particular, it is found that the Hubble expansion rate acquires the form \(H(T)=H_{GR}(T)Z_{\kappa}(T)\), where \(H_{GR}\) is the rate obtained in the standard Cosmology based on GR, while \(Z_{\kappa}\) contains Kaniadakis induced effects. Typically, relativistic corrections are expected to be modulated over time, in such a way that \(Z_{\kappa}(T)\neq 1\) in the earliest stages of the Universe existence (at the pre-BBN epoch, which is not directly constrained by cosmological observations), while it tends to unity at late-time, recovering classical GR. Although not considered in the original Kaniadakis framework, this behavior can be taken into account by assuming a running \(\kappa\)-parameter decreasing over time, as speculated in [70]. Starting from the above premises, in this work we focus on the study of the observed discrepancy between the current bound on the Dark Matter relic abundance and the IceCube high-energy neutrino events in Kaniadakis Cosmology. In this respect, we would like to remark that alternative modified cosmologies based on deformed entropic scenarios have been proposed in [74; 75; 76; 77] based on information theory or quantum gravitational considerations. It is important to stress that the specific rationale behind the present analysis is that the PeV neutrinos revealed by IceCube are highly relativistic and, hence, more suited to be described in a picture that involves relativistic (Kaniadakis-like) statistical laws too. In this context, we assert that the IceCube tension can be alleviated in a Universe governed by Kaniadakis-Cosmology implied Friedmann equations, while still employing the minimal 4-dimensional interaction \(\mathcal{L}_{d=4}\) defined above. We stress that this is a virtue of Kaniadakis formalism, which cannot be accounted for by the usual Cosmology. The remainder of the work is structured as follows: in the next Section we briefly review Kaniadakis statistics. Sec. III is devoted to discuss the modified Cosmology based on Kaniadakis horizon entropy, while in Sec. IV we apply Kaniadakis paradigm to the IceCube high-energy neutrino tension. Conclusions and outlook are finally summarized in Sec. V. Throughout the work, we use natural units \(\hbar=c=k_{B}=1\), while we keep the gravitational constant \(G\) explicit. In this way, we have \(G=1/M_{p}^{2}\), with \(M_{p}\) being the Planck mass. ## II Kaniadakis statistics: a review In this Section we discuss mathematical and physical basics of Kaniadakis statistics. For more details on the subject, see [59; 60; 61]. It is known that the Maxwell-Boltzmann (MB) distribution is taken as a foundation of the classical statistical mechanics, rather than stemming from it. In fact, such a distribution emerges within the Newtonian mechanics, as suggested by numerical simulations of classical molecular dynamics [78]. The question naturally arises as to whether MB distribution is also obtained within a framework governed by the special relativity laws at microscopic dynamical level. This problem has been addressed in [59], based on the evidence that the relativistic cosmic rays exhibit a power-law tailed spectrum, in contrast to the MB exponential behavior [60]. The same feature has also been observed in other high-energy systems, such as the plasma in a superthermal radiation field [63], nuclear collisions [79] and open stellar clusters [64]. These evidences advise on the need to suitably generalize the classical Boltzmann-Gibbs-Shannon (BGS) entropic functional in the relativistic realm. In [60; 61] it has been shown the Lorentz transformations of the special relativity arguably impose the following one-parameter deformation of Boltzmann-Gibbs entropy \[S_{\kappa}=-\sum_{i}n_{i}\ln_{\kappa}n_{i}\,, \tag{1}\] where the \(\kappa\)-logarithm is defined by \[\ln_{\kappa}x\equiv\frac{x^{\kappa}-x^{-\kappa}}{2\kappa}\,. \tag{2}\] The generalized Boltzmann factor for the \(i\)-th level of the system of energy \(E_{i}\) takes the form \[n_{i}=\alpha\exp_{\kappa}\left[-\beta\left(E_{i}-\mu\right)\right], \tag{3}\] where \[\exp_{\kappa}(x) \equiv \left(\sqrt{1+\kappa^{2}\,x^{2}}\,+\,\kappa\,x\right)^{1/\kappa}\,, \tag{4}\] and \[\alpha = \left[(1-\kappa)/(1+\kappa)\right]^{1/2\kappa}\,, \tag{5}\] \[1/\beta = \sqrt{1-\kappa^{2}}\,T\,. \tag{6}\] The deformed entropy in Eq. (1) is known as _Kaniadakis entropy_. Deviations from the classical framework are quantified by the dimensionless parameter \(-1<\kappa<1\), in such a way that the standard statistics is recovered in the Galilean \(\kappa\to 0\) limit. For later convenience, we remind that Kaniadakis entropy can be equivalently expressed as [80; 81] \[S_{\kappa}=-\sum_{i=1}^{W}\frac{P_{i}^{1+\kappa}-P_{i}^{1-\kappa}}{2\kappa}\,, \tag{7}\] where \(P_{i}\) denotes the probability of the system to be in the \(i\)-th microstate and \(W\) is the total number of configurations. To have more contact with the physical language, it is now convenient to introduce the following functions \[u(q) = \frac{q}{\sqrt{1+\kappa^{2}q^{2}}}\,, \tag{8}\] \[\mathcal{W}(q) = \frac{1}{\kappa^{2}}\sqrt{1+\kappa^{2}q^{2}}-\frac{1}{\kappa^{2}}\,,\] (9) \[\varepsilon(\mathcal{W}) = \mathcal{W}+\frac{1}{\kappa^{2}}\,, \tag{10}\] which correspond to the (auxiliary) dimensionless velocity, kinetic and total energy of a given one-particle system, respectively. Here, we have denoted the dimensionless momentum by \(q\). The above relations can be easily inverted to give \[q(u) = \frac{u}{\sqrt{1-\kappa^{2}u^{2}}}\,, \tag{11}\] \[\mathcal{W}(\varepsilon) = \varepsilon-\frac{1}{\kappa^{2}}\,,\] (12) \[\varepsilon(q) = \frac{1}{\kappa^{2}}\sqrt{1+\kappa^{2}q^{2}}\,. \tag{13}\] At this stage we can define the physical velocity \(v\), momentum \(p\) and total energy \(E\) through [61] \[\frac{v}{u}=\frac{p}{mq}=\sqrt{\frac{E}{m\epsilon}}=\kappa c\equiv v_{*}\,. \tag{14}\] Similarly, the kinetic energy is given by \[W=E-mc^{2}\,, \tag{15}\] with \(m\) being the rest mass of the system. In order for these variables to be consistently defined in the Galilean limit too, we have to require \[\lim_{c\to\infty,\kappa\to 0}v_{*}<\infty\,. \tag{16}\] In so doing, insertion of the physical variables into Eqs. (8)-(10) allows us to recover the standard momentum/energy formulas of a particle in the special-relativistic regime, i.e. \[p=\gamma mv\,,\quad E=\gamma mc^{2}\,, \tag{17}\] where \(\gamma=1/\sqrt{1-v^{2}/c^{2}}\) is the relativistic Lorentzian factor. A comment is in order here: besides Kaniadakis formulation, other relativistic generalizations of the MB distribution have been proposed in the literature. Among these, Maxwell-Juttner velocity distribution [82] represents the first attempt toward the construction of a relativistic statistical theory. Such a model, however, is developed by naively replacing the relativistic energy-velocity relation into the Maxwell-Boltzmann factor. In turn, this gives rise to a hybrid distribution, which still maximizes the classical BGS entropy. On the other hand, Kaniadakis distribution (4) is derived ab initio from an entropic functional compatible with the special relativity, thus providing a self-consistent relativistic statistical framework. ## III Modified cosmology through Kaniadakis horizon entropy Let us now export Kaniadakis paradigm to the black-hole framework. This step will then be useful for the holographic (and, hence, cosmological) application of Kaniadakis entropy. Toward this end, we assume equiprobable states \(P_{i}=1/W\) in Eq. (7) and use the property that the Boltzmann-Gibbs-Shannon entropy is \(S\propto\log W\). Since the Bekenstein-Hawking entropy is \(S_{BH}=A/(4G)\), we have \(W=\exp\left[A/(4G)\right]\). By plugging into Eq. (7), we find \[S_{\kappa}=\frac{1}{\kappa}\sinh\left(\kappa\,\frac{A}{4G}\right), \tag{18}\] which indeed recovers the standard Bekenstein-Hawking entropy \(S_{BH}\) for \(\kappa\to 0\). Notice that, since the above function is even, i.e. \(S_{\kappa}=S_{-\kappa}\), we can safely restrict to the \(\kappa>0\) domain for our next considerations. In addition, given that deviations from the Bekenstein-Hawking formula are expected to be small, it is reasonable to approximate Eq. (18) for \(\kappa\ll 1\) as \[S_{\kappa}=S_{BH}+\frac{\kappa^{2}}{6}S_{BH}^{3}+\mathcal{O}(\kappa^{4})\,, \tag{19}\] where the first term is the usual entropy, while the second one provides the leading-order Kaniadakis correction. We can now proceed with the derivation of the \(\kappa\)-modified Friedmann equations. For this purpose, we follow [67] and describe the 4-dim. background by a homogeneous and isotropic (Friedmann-Robertson-Walker) flat geometry with metric \[ds^{2}=-dt^{2}+a^{2}(t)\left(dr^{2}+r^{2}d\Omega^{2}\right), \tag{20}\] where \(a(t)\) denotes the time-dependent scale factor and \(\mathrm{d}\Omega^{2}=\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\varphi^{2}\) is the angular part of the metric on the two sphere. Moreover, we assume that the Universe is filled with a matter perfect fluid of mass density \(\rho_{m}\) and pressure \(p_{m}=w\rho_{m}\) at equilibrium, where \(-1\leq w\leq 1/3\) is the equation-of-state parameter. As a next step, we apply the gravitational thermodynamics conjecture to the Universe apparent horizon of radius \(r_{a}=1/H=a/\dot{a}\) and effective temperature \(T=1/(2\pi r_{a})\). Practically, this consists in using the first law of thermodynamics \[dU=TdS-WdV\,, \tag{21}\] on the horizon of the Universe, which is conceived as a (spherical) expanding thermodynamic system. Here, \(W=(\rho_{m}-p_{m})/2\) is the work density due to the change in the apparent horizon radius of the Universe, while \(dU\) and \(dV\) are the corresponding increase in internal energy and volume, respectively. Observing that \(dU=-dE\), where \(E=\rho_{m}V\) is the total energy content inside the Universe of volume \(V=4\pi r_{a}^{3}/3\), Eq. (21) can be equivalently cast as \[dE=-TdS+WdV\,. \tag{22}\] We now follow [83], but with the generalized Kani-adakis entropy (18) instead of the Bekenstein-Hawking one. Omitting standard textbook calculations, we get from Eq. (22) [67] \[-4\pi G\left(\rho_{m}+p_{m}\right)=\cosh\left(\kappa\frac{\pi}{GH^{2}}\right) \dot{H}\,, \tag{23}\] where the overdot indicates derivative respect to the cosmic time \(t\). Furthermore, by imposing the conservation equation \[\nabla_{\mu}T^{\mu\nu}=0\,, \tag{24}\] for the matter fluid of stress-energy tensor \[T_{\mu\nu}=(\rho_{m}+p_{m})u_{\mu}u_{\nu}+p_{m}\,g_{\mu\nu}\,, \tag{25}\] and four-velocity \(u_{\mu}\), we are led to \[\dot{\rho}_{m}=-3H(\rho_{m}+p_{m})\,. \tag{26}\] After substitution into Eq. (23), integration of both sides gives [67] \[\frac{8\pi G}{3}\rho_{m}=\cosh\left(\kappa\frac{\pi}{GH^{2}}\right)H^{2}- \frac{\kappa\pi}{G}\text{shi}\left(\kappa\frac{\pi}{GH^{2}}\right), \tag{27}\] where we have set the integration (i.e. cosmological) constant to zero and we have defined \[\text{shi}(x)\equiv\int_{0}^{x}\frac{\sinh(x^{\prime})}{x^{\prime}}\,dx^{ \prime}\,. \tag{28}\] The relations (23) and (27) are the modified Friedmann equations underlying Kaniadakis Cosmology. They represent the central ingredient for the investigation of the evolution of the Universe. We emphasize that the extra \(\kappa\)-dependent corrections give rise to fascinating cosmic scenarios with a richer phenomenology comparing to the standard \(\Lambda\)CDM model. For instance, in [65] a holographic dark energy description based on Eqs. (23) and (27) has served to explain the current accelerated expansion of the Universe, while in [70] the baryogenesis and primordial Lithium abundance problems have been successfully addressed. It is easy to check that the General Relativity framework is correctly recovered in the Bekenstein-Hawking entropy \(\kappa\to 0\) limit. In passing, we mention that modified Friedmann equations in alternative entropic scenarios have also been studied in Tsallis [84; 85; 86; 74; 87] and Barrow [88; 89; 90; 91; 92; 93] Cosmologies, motivated by non-extensive and quantum gravitational considerations, respectively [94]. Along this line, the IceCube PeV neutrino discrepancy has been examined in [95] in Tsallis Cosmology to constrain the related entropic parameter. In this sense, our next analysis resembles that of [95] and, more general, of [96; 97] in extended theories of gravity. Here, however, we stress that corrections brought about in the Friedmann equations arise from a genuinely relativistic deformation of the entropy-area law, rather than a modification of the gravitational interaction. ## IV High-Energy Neutrino Signals from Icecube In this Section we present the useful features related to DM relic abundance and IceCube data. To describe the interaction between Standard Model and Dark Matter particles, we use the minimal (4-dimensional) Yukawa-like coupling \[\mathcal{L}_{d=4}=y_{\sigma\chi}\bar{L}_{\sigma}\cdot H\chi\,, \tag{29}\] where \(\sigma=e,\mu,\tau\) labels the eigenstates of the three active neutrinos, \(L_{\sigma}\) and \(H\) are the left-handed lepton and Higgs doublets, respectively, \(\chi\) the DM particle and \(y_{\sigma\chi}\) the (dimensionless) Yukawa coupling constants. Computations are first developed in the conventional Cosmology, showing that it is unable to reconcile the current bound on DM relic abundance and IceCube high-energy events of neutrinos. We then argue that this controversy can be avoided, provided that the background evolution is described by Kaniadakis entropy-based Cosmology. ### Standard Cosmology Following [98; 99; 95], we consider the so called DM _freeze-in_ production, which means that DM particles are never in thermal equilibrium due to their weak interactions and are produced from the hot thermal bath. If we define the DM abundance by \(Y_{\chi}=n_{\chi}/s\), where \(n_{\chi}\) is the number density of DM particles, \(s=2\pi^{2}g_{*}(T)T^{3}/45\) the entropy density and \(g_{*}(T)\simeq 106.75\) the effective number of degrees of freedom, the evolution equation for DM particles in the traditional Cosmology reads [98] \[\frac{dY_{\chi}}{dT}=-\frac{1}{H_{GR}(T)Ts}\frac{g_{\chi}}{\left(2\pi\right) ^{3}}\int C\frac{d^{3}p_{\chi}}{E_{\chi}}\,, \tag{30}\] where \(H_{GR}\) is the standard Hubble rate of General Relativity, \(g_{\chi}=2\) the two helicity projections of DM and \(\bar{C}\) the general collision term. The momentum and energy scale of DM have been denoted by \(p_{\chi}\) and \(E_{\chi}\), respectively. For constant \(g_{*}\), the DM relic abundance can be written as [98] \[\Omega_{DM}h^{2}=\left|\frac{2m_{\chi}^{2}s_{0}h^{2}}{\rho_{c}}\int_{0}^{\infty }\frac{dx}{x^{2}}\left(-\frac{dY_{\chi}}{dT}\Big{|}_{T=\frac{m_{\chi}}{x}} \right)\right|\,, \tag{31}\] where \(x\equiv m_{\chi}/T\), \(m_{\chi}\) is the DM mass scale and \(h\) the dimensionless Hubble constant. Furthermore, the present value of the entropy density and the critical density have been indicated by \[s_{0} = 2\pi^{2}g_{*}T_{0}^{3}/45\simeq 2891.2/\text{cm}^{3}\,, \tag{32}\] \[\rho_{c} = 1.054\times 10^{-5}h^{2}\,\text{GeV}/\text{cm}^{3}\,, \tag{33}\] respectively. For the observed DM abundance, Eq. (31) gives the value [100] \[\left.\Omega_{DM}h^{2}\right|_{obs}=0.1188\pm 0.0010\,. \tag{34}\] Now, the most relevant processes that are induced by the interaction (29) and contribute to the DM production are the _inverse decays_ \[i)\quad\nu_{\sigma}+H^{0}\to\chi\,,\quad l_{\sigma}+H^{+}\to\chi\,, \tag{35}\] and the _Yukawa production_ processes \[i)\ \ t+\bar{t}\to\bar{\nu}_{\sigma}+\chi\,. \tag{36}\] While the former are kinematically allowed, provided that \(m_{\chi}>m_{H}+m_{\nu,l}\) and have probabilities proportional to \(|y_{\sigma\chi}|^{2}\), the latter depend on the factor \(|y_{\sigma\chi}y_{t}|^{2}\), where \(t\) is the top quark and \(y_{t}\) the Yukawa coupling constant between the top quark and Higgs boson. Thus, the evolution of DM particles induced by the interaction (29) becomes [98] \[\frac{dY_{\chi}}{dT}=\frac{dY_{\chi}}{dT}\Big{|}_{ij}+\frac{dY_{ \chi}}{dT}\Big{|}_{ii)}\,, \tag{37}\] where \[\frac{dY_{\chi}}{dT}\Big{|}_{ij} = -\frac{m_{\chi}^{2}\Gamma_{\chi}}{\pi^{2}H_{GR}(T)s}K_{1}\left( \frac{m_{\chi}}{T}\right)\,, \tag{38}\] \[\frac{dY_{\chi}}{dT}\Big{|}_{ii)} = -\frac{1}{512\pi^{6}H_{GR}(T)s}\int d\bar{s}d\Omega\] (39) \[\sum_{\sigma}\frac{W_{t\bar{t}\to\bar{\nu}_{\sigma}\chi}+2W_{t \nu_{\sigma}\to t\chi}}{\sqrt{\bar{s}}}K_{1}\left(\frac{\sqrt{\bar{s}}}{T} \right).\] Here, \(\bar{s}\) represents the centre-of-mass energy, \(W_{ij\to kl}\) are the scattering probabilities of the related processes, \(K_{1}(x)\) is the modified Bessel function of the second kind and \[\Gamma_{\chi}=\sum_{\sigma}\frac{|y_{\sigma\chi}|^{2}}{8\pi}m_{\chi} \tag{40}\] the interaction rate. As argued in [98], the very dominant processes in the DM production are the inverse decays (35). Accordingly, the DM relic abundance is approximately \[\left.\Omega_{DM}h^{2}\right|_{ij}\simeq 0.1188\frac{\sum_{\sigma}|y_{ \sigma\chi}|^{2}}{7.5\times 10^{-25}}\,. \tag{41}\] Therefore, the observed value (34) is reproduced, provided that \(\sum_{\sigma}|y_{\sigma\chi}|^{2}\simeq 7.5\times 10^{-25}\). This is, however, at odds with the condition required to fit the IceCube high-energy neutrino events. Indeed, let us notice that the stability of DM particles imposes that the lifetime \(\tau_{\chi}=\Gamma_{\chi}^{-1}\) has to be longer than the age of the Universe, i.e. \(\tau_{\chi}>t_{U}\simeq 4.35\times 10^{17}\,\mathrm{sec}\). Furthermore, the IceCube spectrum sets the (nearly model-independent) more stringent lower bound \(\tau_{\chi}\gtrsim\tau_{\chi}^{b}\simeq 10^{28}\,\mathrm{sec}\)[16]. By plugging the aforementioned estimate \(\sum_{\sigma}|y_{\sigma\chi}|^{2}\simeq 7.5\times 10^{-25}\) into Eq. (40), one obtains \(\Gamma_{\chi}\simeq 4.5\times 10^{4}\frac{m_{\chi}}{\mathrm{PeV}}\mathrm{sec}^{-1}\), which in turn implies \(\tau_{\chi}\simeq 2.2\times 10^{-5}\frac{\mathrm{PeV}}{m_{\chi}}\mathrm{sec}\). For \(m_{\chi}\simeq 1\mathrm{PeV}\), we then have \(\tau_{\chi}\simeq 2.2\times 10^{-5}\,\mathrm{sec}\), in contrast with what stated above. On the other hand, in order to be compatible with the DM decay lifetime \(\tau_{\chi}\simeq 10^{28}\,\mathrm{sec}\) required by IceCube, we should have \[\sum_{\sigma}|y_{\sigma\chi}|^{2}_{IceCube}\simeq 1.6\times 10^{-57}\,, \tag{42}\] which is by far (roughly 33 orders of magnitude) lower than the value needed to explain the DM relic abundance. The above considerations make it clear that the IceCube high energy events and the DM relic abundance are inconsistent with the DM production as far as the latter is ascribed to the interaction (29) and the cosmological background evolves according to the Einstein field equations. ### Kaniadakis entropy-based Cosmology Let us now explore how the above picture is modified in Kaniadakis Cosmology. To extract analytical solution, it proves convenient to perform Taylor expansion of the Friedmann equation (27) for small \(\kappa\), which is indeed the case according to the discussion below Eq. (18). Observing that \[\cosh(x) = 1+\frac{x^{2}}{2}+\frac{x^{4}}{24}+\mathcal{O}(x^{6})\,, \tag{43}\] \[\mathrm{shi}(x) = x+\frac{x^{3}}{18}+\frac{x^{5}}{600}+\mathcal{O}(x^{7})\,, \tag{44}\] we get to the leading order \[\frac{8\pi G}{3}\rho_{m}\simeq H^{2}-\kappa^{2}\frac{\pi^{2}}{ 2\left(GH\right)^{2}}\,. \tag{45}\] This equation can be solved with respect to \(H\) to obtain \[H \simeq \left[\frac{4\pi G\rho_{m}}{3}+\frac{\pi\left(64G^{6}\rho_{m}^{2} +18G^{2}\kappa^{2}\right)^{\frac{1}{2}}}{6G^{2}}\right]^{\frac{1}{2}} \tag{46}\] \[\simeq H_{GR}+\sqrt{\frac{27\pi}{2}}\frac{\kappa^{2}}{64\left(G^{7}\rho_ {m}^{3}\right)^{\frac{1}{2}}}\,,\] where we have only considered the solution that, for \(\kappa\to 0\), recovers the correct limit \[H_{GR}=\sqrt{\frac{8\pi G}{3}\rho_{m}}\,. \tag{47}\] As explained in Sec. I, in order to isolate corrections arising from modified gravity, it is useful to factorize the Hubble rate (46) as \[H(T)=H_{GR}(T)Z_{\kappa}(T)\,, \tag{48}\] where the information on the modified Kaniadakis entropy is contained in the extra factor \[Z_{\kappa}(T)\simeq 1+\frac{9\kappa^{2}}{256\left(G^{2}\rho_{m}\right)^{2}}\,. \tag{49}\] Some comments are in order: first, we notice that the \(\kappa\to 0\) limit of Eq. (49) gives \(Z_{\kappa}=1\), as expected. Though being derived in a different way, Eq. (49) is consistent with the result of [70]. Moreover, we can relate the matter density and temperature as \[\rho_{m}=\frac{\pi^{2}g_{*}(T)}{30}T^{4}\,, \tag{50}\] where \(g_{*}(T)\simeq 106.75\) as defined in the previous Section. The usage of the modified Hubble rate (48) allows us to recast the evolution equation (38) of DM particles produced by the inverse decays as \[\frac{dY_{\chi}}{dT}\Big{|}_{ij}=-\frac{m_{\chi}^{2}\Gamma_{\chi}}{\pi^{2}H(T )s}K_{1}\left(\frac{m_{\chi}}{T}\right)\,, \tag{51}\] where now \[H(T)s\simeq\frac{64\pi^{4}g_{*}^{2}T^{8}+2025\,T_{*}^{8}\kappa^{2}}{2160\sqrt{ 5\pi g_{*}\,T^{3}\,T_{*}}}\,,\quad T_{*}=M_{p}=\frac{1}{\sqrt{G}}\,, \tag{52}\] to the leading order in \(\kappa\). Here, \(s\) is the entropy density defined at the beginning of Sec. IV.1. Employing Eqs. (51) and (52) and following the same computations as in Sec. IV.1, the \(\kappa\)-modified DM relic abundance (31) becomes \[\Omega_{DM}h^{2}\ =\ \left|\frac{2m_{\chi}^{2}s_{0}h^{2}}{\rho_{c} }\int_{0}^{\infty}\frac{dx}{x^{2}}\left(-\frac{dY_{\chi}}{dT}\Big{|}_{T=\frac{ m_{\chi}}{x}}\right)\right| \tag{53}\] \[\simeq\frac{3.5\,h^{2}s_{0}\Gamma_{X}T_{*}}{\pi^{\frac{17}{2}}\,g _{*}^{\frac{1}{2}}\,m_{\chi}^{3}\rho_{c}}\left|64\pi^{4}g_{*}^{2}m_{\chi}^{8}- 6.6\times 10^{9}\,T_{*}^{8}\kappa^{2}\right|\,,\] where we have used [101] \[\int_{0}^{\infty}x^{n}K_{1}(x)\,dx=2^{n-1}\Gamma\left(1+\frac{n}{2}\right) \Gamma\left(\frac{n}{2}\right),\quad\Re[n]>0\,. \tag{54}\] By further substituting Eq. (40), we get \[\Omega_{DM}h^{2} \simeq \frac{0.4\,h^{2}s_{0}T_{*}}{\pi^{\frac{19}{2}}g_{*}^{\frac{1}{2} }m_{\chi}^{8}\rho_{c}}\sum_{\sigma}|y_{\sigma\chi}|^{2} \tag{55}\] \[\times\left|64\pi^{4}g_{*}^{2}m_{\chi}^{8}-6.6\times 10^{9}\,T_{*} ^{8}\kappa^{2}\right|\,.\] For comparison with observational data, it is useful to cast the above expression as \[\Omega_{DM}h^{2}\simeq 0.1188\left(\frac{106.75}{g_{*}}\right)^{\frac{3}{2}}\frac{ \sum_{\sigma}|y_{\sigma\chi}|^{2}}{1.6\times 10^{-57}}\,\Pi_{\kappa}\,, \tag{56}\] where we have defined \[\Pi_{\kappa} \simeq 6.3\times 10^{-61}\,\frac{h^{2}s_{0}T_{*}}{\rho_{c}}\left|1- \frac{10^{6}T_{*}^{8}\kappa^{2}}{g_{*}^{2}m_{\chi}^{8}}\right| \tag{57}\] \[\simeq 1.7\times 10^{-52}\frac{T_{*}}{1\,\mathrm{GeV}}\left|1-\frac{10^{6 }T_{*}^{8}\kappa^{2}}{g_{*}^{2}m_{\chi}^{8}}\right|\,.\] In the second step we have used Eqs. (32) and (33) for \(s_{0}\) and \(\rho_{c}\), respectively. From Eq. (56) and (57), it follows that the DM relic abundance (34) and the IceCube data (42) are successfully and simultaneously explained in Kaniadakis Cosmology, provided that \[\Pi_{\kappa}\simeq 1\,. \tag{58}\] The behavior of Eq. (57) versus the Kaniadakis parameter \(\kappa\) is plotted in Fig. 1 for \(m_{\chi}\simeq 1\,\mathrm{PeV}=10^{6}\,\mathrm{GeV}\) and the energy scale \(T_{*}=M_{p}\simeq 10^{19}\,\mathrm{GeV}\). We observe that the condition (58) is satisfied, provided that \[\kappa\simeq 2.5\times 10^{-37}\,, \tag{59}\] which substantiates a posteriori our working assumption \(\kappa\ll 1\). It should be noted that, for the considered values of \(m_{\chi}\) and \(T_{*}\), a resolution of the problem going beyond the leading order approximation would be advisable. This, however, does not undermine the conceptual validity of our assertion, that is the need for a relativistic generalization of the statistical framework (and, in particular, of the entropy-area law) to explain the IceCube PeV neutrino spectrum and DM relic abundance. It is worth discussing the estimate (59) in connection with other cosmological bounds on \(\kappa\) from recent literature2 (see Tab. 1). While being lower than the value \(\kappa\simeq 0.2\) needed to fit the cosmic rays spectrum [60], the obtained \(\kappa\) is appreciably non-vanishing if compared, for example, with constraints from Baryon Acoustic Oscillations [68], cosmological constant and Type Ia Supernova measurements [68], Hubble, strong lensing systems and HII galaxies data [69], and \({}^{7}Li\)-abundance observations [70]. This suggests that, in principle, the IceCube PeV neutrinos could be more sensitive to the effects of the Kaniadakis entropy (18) than other systems/cosmic scenarios, providing a valuable playground to test Kaniadakis prescription in perspective. Footnote 2: Notice that the estimates in [68; 69] are exhibited in terms of the re-scaled Kaniadakis parameter \(\beta=\kappa\frac{M_{*}^{2}}{H_{0}^{2}}\), where \(H_{0}\) is the present Hubble rate. Although not contemplated in the original Kaniadakis formalism, the gap between our result and other cosmological bounds on \(\kappa\) could be explained by allowing the entropic parameter to be running. This assumption can be understood in the following picture: in the same way as the energy content (that is, the matter degrees of the freedom) of the Universe is described by a dynamic fluid evolving from an initially relativistic to a semi- or non-relativistic system as the temperature cools down, we can think of the holographic entropy (i.e. the horizon degrees of freedom) as undergoing a transition from a relativistic (Kaniadakis-type, \(|\kappa|>0\)) to a classical (Boltzmann-Gibbs-type, \(\kappa=0\)) description for decreasing redshift. In this framework, the departure (18) from the classical entropy would be quantified by a decreasing function of the time (or, equivalently, by an increasing function of the energy scale). This dynamical behavior would also be necessary to satisfy the requirement that \(Z_{\kappa}(T)\) can in principle depart from unity at the pre-BBN epoch, where we still do not have direct constraints by cosmological observations, but it must recover GR (i.e. \(Z_{\kappa}(T)=1\)) in the late stages of the Universe evolution for phenomenological consistency (see also the discussion in the Introduction). We recall that a similar scenario with a varying deformation entropic parameter has been conjectured in [102; 57; 103] in the context of non-extensive Tsallis entropy and in [104] for the case of Barrow entropy. In particular, in [102] it is observed that the renormalization of a quantum theory entails a scale-dependence of the degrees of freedom. In the standard theory of fields, massive modes decouple and the degrees of freedom decrease in the low energy regime. On the other hand, in gravity theory the situation is more cumbersome, as the degrees of freedom could increase if the space-time fluctuations become large in the ultraviolet regime, while they decrease if gravity is topological, which may be compatible with holography. Either way, presuming that a deformation of the standard entropy-area law is needed, it would be reasonable to assume a dynamic deformation parameter to account for these features both at high-energy (inflation) and low-energy (late-time Universe) scale. Clearly, more work to consolidate this picture is required, especially in view of formulating a new relativistic thermodynamics that incorporates a running non-extensive entropic parameter in a self-consistent way. ## V Conclusion and discussion It is a fact that the standard Boltzmann-Gibbs theory cannot be applied to systems where the partition function diverges, and (large-scale) gravitational systems are known to belong to this class. In these footsteps, recent works proposed a generalization of the holographic dark energy scenario and the cosmological Friedmann equations equipped with the Kaniadakis entropy, which is a one-parameter deformation of Boltzmann-Gibbs entropy incorporating special relativity. Motivated by these insights, in the present work we addressed the observed discrepancy between the present bound on the Dark Matter relic abundance and the IceCube high-energy neutrino data in Kaniadakis entropy-based Cosmology. Our strategy was to keep the canonical (4-dimensional) Yukawa-like coupling unchanged, while modifying the description of the Universe evolution by using the \(\kappa\)-deformed entropy in Eq. (1) (or, equivalently, Eq. (18)). By resorting to the generalized Friedmann equations (23)-(27) and solving the evolution equation of DM particles, we proved that the IceCube neutrino tension can be alleviated in this framework, provided one properly constrains the scaling exponent \(\kappa\). This is line with other results in recent literature, which show that Kaniadakis entropy works better than the classical Boltzmann-Gibbs one for a vast class of relativistic and/or complex systems, such as cosmic rays, plasma, open stellar clusters, nuclear collisions processes, etc. Since PeV neutrinos fall within this class of systems, the use of a relativistically motivated statistics appears natural and all the more necessary. \begin{table} \begin{tabular}{|c|c c|} \hline Estimate (\(|\kappa|\)) & Physical framework & Ref. \\ \hline \(6\times 10^{-129}\) & Baryon Acoustic Oscillation (BAO) & [68] \\ \hline \(3\times 10^{-125}\) & CC+SNIa+BAO & [68] \\ \hline \(1.2\times 10^{-124}\) & Cosmological constant (CC) & [68] \\ \hline \(1.3\times 10^{-124}\) & Type Ia supernova (SNIa) & [68] \\ \hline \(3.6\times 10^{-123}\) & Hubble data & [69] \\ \hline \(4.4\times 10^{-123}\) & Strong lensing systems & [69] \\ \hline \(3.7\times 10^{-123}\) & HII galaxies & [69] \\ \hline \(8.1\times 10^{-84}\) & \({}^{7}Li\)-abundance & [70] \\ \hline \end{tabular} \end{table} Table 1: Some bounds on Kaniadakis entropic parameter from Cosmology and Astroparticle physics. Further aspects remain to be investigated: first, our analysis was performed in the approximation of small departure from the Boltzmann-Gibbs statistics. Although this assumption does not undermine the conceptual basis of our study - since \(\kappa\ll 1\) is the expected scenario - a more reliable estimation of Kaniadakis parameter should be inferred by exact calculations. This is also requested by the fact that relativistic symmetries are exactly preserved only by the full Kaniadakis entropy. Due to the peculiar form of Eq. (18), such a task involves more computational effort, which will be conducted in a future extension of this work. As additional perspectives, it would be suggestive to compare our approach (and possibly find a connection) with other studies that adopt a different modus operandi to explain the IceCube PeV neutrino spectrum. For instance, in [105] and [106] exotic types of interactions are used. In particular, in [105] the authors take into account secret interactions of neutrinos with the cosmic background, while in [106] photohadronic coupling of the Fermi accelerated high energy protons are considered with the synchrotron background photons in the nuclear region of high energy blazars and Active Galactic Nuclei. Finally, a challenging goal is to further explore the possibility to allow for a running \(\kappa\). In this sense, it could be helpful to search for signatures of Kaniadakis entropy in the very early Universe, where the effects of a potential departure from Boltzmann-Gibbs entropy might be amplified. Preliminary clues can be offered by the study of imprints of the inflationary tensor perturbations [107] propagated during the hypothetical Kaniadakis cosmic era in experiments on primordial gravitational waves. These lines of research are under active investigation and will be presented elsewhere. **Data Availability Statement** All data that have been used in our analysis have already been freely released and have been published by the corresponding research teams. In our text we properly give all necessary References to these works, and hence no further data deposit is needed. ###### Acknowledgements. GGL acknowledges the Spanish "Ministerio de Universidades" for the awarded Maria Zambrano fellowship and funding received from the European Union - NextGenerationEU. He is also grateful for participation to the LISA Cosmology Working group. GL thanks MUR and INFN for support. GL and GGL acknowledge the participation to the COST Action CA18108 "Quantum Gravity Phenomenology in the Multimessenger Approach".
2309.10343
Endpoint theory for the compactness of commutators
In this paper, we establish a Minkowski-type inequality for weak Lebesgue space, which allows us to obtain a characterization of relative compactness in these spaces. Furthermore, we are the first to investigate the compactness results of commutators at the endpoint. The paper provides a comprehensive study of the compactness properties of commutators of Calder\'{o}n-Zygmund operators in Hardy and $L^{1}(\mathbb{R}^n)$ type spaces. Additionally, we provide factorization theorems for Hardy spaces in terms of singular integral operators in the $L^1(\mathbb{R}^n)$ space.
Dinghuai Wang, Xi Hu, Shuai Qi
2023-09-19T06:06:07Z
http://arxiv.org/abs/2309.10343v1
# Endpoint theory for the compactness of commutators ###### Abstract. In this paper, we establish a Minkowski-type inequality for weak Lebesgue space, which allows us to obtain a characterization of relative compactness in these spaces. Furthermore, we are the first to investigate the compactness results of commutators at the endpoint. The paper provides a comprehensive study of the compactness properties of commutators of Calderon-Zygmund operators in Hardy and \(L^{1}(\mathbb{R}^{n})\) type spaces. Additionally, we provide factorization theorems for Hardy spaces in terms of singular integral operators in the \(L^{1}(\mathbb{R}^{n})\) space. Key words and phrases:Characterization; Commutator; Endpoint theory; Minkowski-type inequality; Relative compactness 2010 Mathematics Subject Classification: Primary 46B50, 46E30; Secondary 42B20 This work was supported by National Natural Science Foundation of China(Nos. 12101010,11771023) and Natural Science Foundation of China of Anhui Province (No. 2108085QA19). ###### Contents * 1 Introduction * 2 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.2 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.3 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.4 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.5 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.6 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.7 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.8 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.9 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.2 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.3 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.4 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.5 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.6 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.7 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.8 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.9 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.2 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.3 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.4 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.5 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.6 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.7 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.8 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.9 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.2 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.3 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.4 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.5 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.6 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.7 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.8 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.9 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.2 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.3 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.4 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.5 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.6 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.7 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.8 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.9 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.2 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.3 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.4 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.5 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.6 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.7 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.8 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.9 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.2 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.3 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.4 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.5 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.6 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.7 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.8 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.9 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.2 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.3 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.4 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.5 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.6 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.7 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.8 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.9 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.2 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.3 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.4 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.5 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.6 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.7 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.8 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.9 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.2 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.3 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.4 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.5 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.6 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.7 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.8 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.9 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.10 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces * 2.1 The \(L^{1}(\mathbb{R}^{n})\) type and Hardy spaces **Theorem 1.1**.: _Let \(0<p<\infty\). A subset \(\mathcal{F}\) of \(L^{p,\infty}(\mathbb{R}^{n})\) is relatively compact if and only if the following three conditions hold:_ 1. _norm boundedness uniformly_ (1.1) \[\sup_{f\in\mathcal{F}}\|f\|_{L^{p,\infty}(\mathbb{R}^{n})}<\infty;\] 2. _translation continuty uniformly_ (1.2) \[\lim_{a\to 0}\sup_{y\in B(O,a)}\|f(\cdot+y)-f(\cdot)\|_{L^{p,\infty}(\mathbb{ R}^{n})}=0\text{ uniformly inf}\in\mathcal{F};\] 3. _control uniformly away from the origin_ (1.3) \[\lim_{A\to\infty}\|f\chi_{E_{A}}\|_{L^{p,\infty}(\mathbb{R}^{n})}=0\text{ uniformly inf}\in\mathcal{F},\] _where_ \(E_{A}=\{x\in\mathbb{R}^{n}:|x|>A\}\)_._ Also, Perez in [34] considered endpoint estimates related to Hardy-types spaces and introduced a subspace of \(H^{1}(\mathbb{R}^{n})\) for which \([b,T]\) is a bounded operator. Now, we give the corresponding compactness result as follows. **Theorem 1.2**.: _Assuming that \(b\in\mathrm{CMO}(\mathbb{R}^{n})\) and \(T\) is a Calderon-Zygmund operator, then the commutator \([b,T]\) is a compact operator mapping from \(H^{1}_{b}(\mathbb{R}^{n})\) to \(L^{1}(\mathbb{R}^{n})\)._ Alternatively, Janson [21] in 1978 employed a Fourier expansion technique to establish that for \(0<\alpha<1\), \(b\in Lip_{\alpha}(\mathbb{R}^{n})\) if and only if the commutator \([b,T]\) with a smooth kernel is bounded from \(L^{p}(\mathbb{R}^{n})\) to \(L^{q}(\mathbb{R}^{n})\) for \(1<p<q<\infty\) with \(1/q=1/p-\alpha/n\). In a recent study, Nogayamaand and Sawano in [33], Guo, He, Wu, and Yang [18] explored a \(\mathrm{CMO}\) type space \(\mathrm{CMO}_{\alpha}(\mathbb{R}^{n})\) and the compactness of commutators of singular or fractional integral operators, where \(1<p<q<\infty\) and \(1/q=1/p-\alpha/n\). We extend their work to establish the corresponding compactness results for the commutator in \(L^{1}(\mathbb{R}^{n})\) and Hardy spaces. **Theorem 1.3**.: _Assuming that \(0<\alpha<1\), \(b\in\mathrm{CMO}_{\alpha}(\mathbb{R}^{n})\), and \(T\) is a Calderon-Zygmund operator, then the commutator \([b,T]\) is a compact operator mapping from \(H^{1}(\mathbb{R}^{n})\) to \(L^{\frac{n}{n-\alpha}}(\mathbb{R}^{n})\)._ **Theorem 1.4**.: _Suppose that \(0<\alpha<1\) and \(T\) is a Calderon-Zygmund operator that is homogeneous. Then the commutator_ \[[b,T]:L^{1}(\mathbb{R}^{n})\to L^{\frac{n}{n-\alpha},\infty}(\mathbb{R}^{n})\] _is compact if and only if \(b\in\mathrm{CMO}_{\alpha}(\mathbb{R}^{n})\)._ This paper is organized as follows. This paper discusses the compactness of commutators of Calderon-Zygmund operators in various function spaces. The main results are the compactness criteria for commutators in weak Lebesgue spaces, Hardy spaces, and \(L^{1}(\mathbb{R}^{n})\). In Section 2, the paper introduces the necessary definitions and notation for the framework being used. This section can be skipped by readers familiar with the subject. Section 3 presents the proof of the compactness criteria for commutators in weak Lebesgue spaces. Section 4 focuses on the compactness results for commutators in Hardy spaces. In Section 5, the paper establishes the characterization of compactness of commutators in \(L^{1}(\mathbb{R}^{n})\) space. This section also includes some basic lemmas on Hardy factorization. The Appendix contains counterexamples and an Minkowski-type inequality for weak Lebesgue space. ## 2. Preliminaries Let \(|E|\) denote the Lebesgue measure of a measurable set \(E\subset\mathbb{R}^{n}\). Throughout this paper, the letter \(C\) denotes constants which are independent of main variables and may change from one occurrence to another. By \(A\lesssim B\) we mean that \(A\lesssim CB\) with a positive constant \(C\) independent of the appropriate quantities. If \(A\lesssim B\) and \(B\lesssim A\), we write \(A\approx B\) and say that \(A\) and \(B\) are equivalent. ### Calderon-Zygmund operators Let \(K(x,y),x,y\in\mathbb{R}^{n}\), be a locally integrable function, defined away from the diagonal \(\{x=y\}\). Then, we say that \(K\) is a Calderon-Zygmund kernel if it satisfies the following size and smoothness conditions: \[|K(x,y)|\leq\frac{C_{0}}{|x-y|^{n}},\] for some constant \(C_{0}>0\) and for all \((x,y)\in\mathbb{R}^{2n}\) away from the diagonal; \[|K(x,y)-K(x^{\prime},y)|+|K(y,x)-K(y,x^{\prime})|\leq\frac{C_{0}|x-x^{\prime}| ^{\gamma}}{|x-y|^{n+\gamma}}\] for some \(\gamma>0\) and \(|x-x^{\prime}|\leq\frac{1}{2}|x-y|\). Suppose \(T\) is a operator mapping from \(\mathcal{S}(\mathbb{R}^{n})\) to \(\mathcal{S}^{\prime}(\mathbb{R}^{n})\), where we denote by \(\mathcal{S}(\mathbb{R}^{n})\) the spaces of all Schwartz function on \(\mathbb{R}^{n}\) and by \(\mathcal{S}^{\prime}(\mathbb{R}^{n})\) its dual space. We further assume \(T\) is associated with the Calderon-Zygmund kernel, \[T(f)(x)=\int_{\mathbb{R}^{n}}K(x,y)f(y)dy, \tag{2.1}\] whenever \(f\in\mathbb{R}^{n}\) and \(x\notin supp(f)\). If \(T\) is bounded from \(L^{2}(\mathbb{R}^{n})\) to \(L^{2}(\mathbb{R}^{n})\), then \(T\) is called a Calderon-Zygmund operator. We state our main results as follows. In addition, we say that \(T\) is homogeneous if the kernel \(K\) satisfies \[K(x,y)\geq\frac{C}{M^{n}r^{n}}\qquad\text{or}\qquad K(x,y)\leq-\frac{C}{M^{n }r^{n}} \tag{2.2}\] for all \(x\in B_{1},y\in B_{2}\), where \(B_{1}=(x_{1},r),B_{2}=(x_{2},r)\) are the disjoint balls satisfying the condition that \(|x_{1}-x_{2}|\approx Mr\) with \(r>0\) and \(M>10\). ### The BMO type spaces Let \(0\leq\alpha<1\). A locally integrable function \(f\) is said to belong to Campanato space \(\mathrm{BMO}_{\alpha}(\mathbb{R}^{n})\) if there exists a constant \(C>0\) such that for any cube \(Q\subset\mathbb{R}^{n}\), \[\frac{1}{|Q|}\int_{Q}|f(x)-f_{Q}|dx\leq C|Q|^{\alpha/n},\] where \(f_{Q}=\frac{1}{|Q|}\int_{Q}f(x)dx\) and the minimal constant \(C\) is defined by \(\|f\|_{\mathrm{BMO}_{\alpha}(\mathbb{R}^{n})}\). The Campanato spaces extend the notion of functions of bounded mean oscillation and allow a full characterization \(Lip_{\alpha}(\mathbb{R}^{n})\). The Lipschitz (Holder) spaces and Campanato spaces are related by the following equivalences: \[\|f\|_{Lip_{\alpha}(\mathbb{R}^{n})}:=\sup_{x,h\in\mathbb{R}^{n},h\neq 0} \frac{|f(x+h)-f(x)|}{|h|^{\alpha}}\approx\|f\|_{\mathcal{C}_{\alpha,q}},\quad 0 <\alpha<1.\] The equivalence can be found in [12], or [38] for the general case. In particular, \(\mathrm{BMO}_{0}(\mathbb{R}^{n})=\mathrm{BMO}(\mathbb{R}^{n})\), the spaces of bounded mean oscillation. The crucial property of BMO functions is the John-Nirenberg inequality [22], \[|\{x\in Q:|f(x)-f_{Q}|>\lambda\}|\leq c_{1}|Q|e^{-\frac{c_{2}\lambda}{\|f\|_{ \mathrm{BMO}(\mathbb{R}^{n})}}},\] where \(c_{1}\) and \(c_{2}\) depend only on the dimension. A well-known immediate corollary of the John-Nirenberg inequality is as follows: \[\|f\|_{\mathrm{BMO}(\mathbb{R}^{n})}\approx\sup_{Q}\frac{1}{|Q|}\Big{(}\int_{ Q}|f(x)-f_{Q}|^{p}dx\Big{)}^{1/p},\] for each \(1<p<\infty\). In fact, the equivalence also holds for \(0<p<1\). See, for example, the work of Stromberg [23](or [20] and [39] for the general case). ### The Hardy spaces The theory of Hardy spaces is vast and complicated, it has been systematically developed and plays an important role in harmonic analysis and PDEs, see [9, 14, 28]. A bounded tempered distribution \(f\) is in the Hardy space \(H^{\rho}(\mathbb{R}^{n})\) if the Poisson maximal function \[M(f;P)=\sup_{t>0}|(P_{t}*f)(x)|\] lies in \(L^{\rho}(\mathbb{R}^{n})\). We first recall the atomic decomposition of Hardy spaces. Let \(0<\rho\leq 1\leq q\leq\infty,\rho\neq q\) and the integer \(l=[n(\frac{1}{\rho}-1)]\) (\([x]\) indicates the integer part of \(x\)). Then \(l=0\) if \(\frac{n}{n+1}<\rho\leq 1\). **Definition 2.1**.: A function \(a\in L^{q}(\mathbb{R}^{n})\) is called a \((\rho,q,l)\) atom for \(H^{\rho}(\mathbb{R}^{n})\) if there exists a cube \(Q\) such that 1. \(a\) is supported in \(Q\); 2. \(\|a\|_{L^{q}(\mathbb{R}^{n})}\leq|Q|^{\frac{1}{q}-\frac{1}{\rho}}\); 3. \(\int_{\mathbb{R}^{n}}a(x)x^{\nu}dx=0\) for all multi-indices \(\alpha\) with \(0\leq|\nu|\leq l\). Here, (_i_) means that an atom must be a function with compact support, (_ii_) is the size condition of atoms, and (_iii_) is called the cancellation moment condition. The atomic Hardy space \(H^{\rho,q,l}(\mathbb{R}^{n})\) is defined by \[H^{\rho,q,l}(\mathbb{R}^{n})=\Big{\{}f\in\mathcal{S}^{\prime}:f=^ {S^{\prime}}\sum_{k}\lambda_{k}a_{k}(x),\text{each }a_{k}\text{ is a }(\rho,q,l)-\text{atom},\] \[\text{and }\sum_{k}|\lambda_{k}|^{\rho}<\infty\Big{\}}.\] Setting \(H^{\rho,q,l}(\mathbb{R}^{n})\) norm of \(f\) by \[\|f\|_{H^{\rho,q,l}(\mathbb{R}^{n})}=\inf\big{(}\sum_{k}|\lambda_{k}|^{\rho} \big{)}^{1/\rho},\] where the infimum is taken over all decompositions of \(f=\sum_{k}\lambda_{k}a_{k}\) above. Note that \(H^{\rho,q,l}(\mathbb{R}^{n})=H^{\rho}(\mathbb{R}^{n})\) was proved by Coifman [9] for \(n=1\) and Latter [28] for \(n>1\). This indicates that each element in \(H^{\rho}(\mathbb{R}^{n})\) can be decomposed into a sum of atoms in a certain way. Note that in [13], the authors show that the dual of \(H^{\rho}(\mathbb{R}^{n})\) is \(Lip_{\alpha}(\mathbb{R}^{n})\); a key fact that will be used later on in this paper. **Definition 2.2**.: A function \(a\) is a \(b-\)atom if there is a cube \(Q\) for which 1. \(\text{supp }(a)\subset Q\); 2. \(\|a\|_{L^{\infty}}\leq\frac{1}{|Q|}\); 3. \(\int_{Q}a(y)dy=0\); 4. \(\int_{Q}a(y)b(y)dy=0\). The space \(H^{1}_{b}(\mathbb{R}^{n})\) consists of the subspace of \(L^{1}(\mathbb{R}^{n})\) of function \(f\) which can be written as \(f=\sum_{j}\lambda_{j}a_{j}\) where \(a_{j}\) are \(b-\)atoms and \(\lambda_{j}\) are complex numbers with \(\sum_{j}|\lambda_{j}|<\infty\). ### Relatively compact sets in quasi-Banach function spaces We first recall some basic definitions of function spaces. In this paper, we only consider the class of Lebesgue measurable functions, denoted by \(L^{m}\), where \(m\) means the Lebesgue measure on \(\mathbb{R}^{n}\). **Definition 2.3**.: A (quasi-)normed space \((E,\|\cdot\|_{E})\) with \(E\subset L(m)\) is called a (quasi-)Banach function space (Q-BFS) if it satisfies the following conditions: 1. if \(\|f\|_{E}=0\Longleftrightarrow f=0\)\(a.e.\); 2. if \(f\in E\), then \(\||f|\|_{E}=\|f\|_{E}\); 3. if \(0\leq g\leq f\), then \(\|g\|_{E}\leq\|f\|_{E}\); 4. if \(0\leq f_{n}\uparrow f\), then \(\|f_{n}\|_{E}\uparrow\|f\|_{E}\); 5. if \(A\subset\mathbb{R}^{n}\) is bounded, then \(\chi_{A}\in E\). Moreover, we recall the following two definitions. **Definition 2.4**.: (Absolutely continuous quasi-norm). Let \(E\) be a \(Q\)-BFS. A function \(f\) in \(E\) is said to have absolutely continuous quasi-norm in \(E\) if \(\|f\chi_{A_{n}}\|_{E}\to 0\) as \(A_{n}\to 0\). The set of all functions in \(E\) with absolutely continuous quasi-norm is denoted by \(E_{a}\). if \(E=E_{a}\), then the space \(E\) is said to have absolutely continuous quasi-norm. We point out that the dominated convergence theorem holds in \(Q\)-BFS with absolutely continuous quasi-norm; see [2, Proposition 3.9]. **Definition 2.5**.: (Uniformly absolutely continuous quasi-norm (UAC)). Let \(K\) be a \(Q\)-BFS and let \(K\subset E_{a}\). Then \(K\) is said to have uniformly absolutely continuous quasi-norm (\(K\subset UAC(E)\)) if for every sequence \(\{A_{k}\}_{k=1}^{\infty}\) with \(A_{k}\to\emptyset\), \(\|f\chi_{A_{k}}\|_{E}\to 0\) holds uniformly for all \(f\in K\). ### Morrey space and its predual Morrey spaces describe local regularity more precisely than \(L^{q}(\mathbb{R}^{n})\) spaces and can be seen as a complement of \(L^{q}(\mathbb{R}^{n})\). **Definition 2.6**.: Let \(0\leq\alpha<n\) and \(1<q<\infty\), The Morrey space \(L^{q,\alpha}(\mathbb{R}^{n})\) is defined by \[L^{q,\alpha}(\mathbb{R}^{n})=\{f\in L^{q}_{loc}(\mathbb{R}^{n}):\|f\|_{L^{q, \alpha}(\mathbb{R}^{n})}<\infty\},\] with \[\|f\|_{L^{q,\alpha}(\mathbb{R}^{n})}:=\sup_{x\in\mathbb{R}^{n},r>0}\bigg{(}r^{ -\alpha}\int_{B(x,r)}|f(y)|^{q}dy\bigg{)}^{\frac{1}{q}}<\infty.\] where the supremum is taken over all balls \(B(x,r)\) in \(\mathbb{R}^{n}\). Following Blasco, Ruiz and Vega [1], we define the function called a _block_. **Definition 2.7**.: Let \(\alpha\in[0,n)\), \(1<q<\infty\), and \(\frac{1}{q}+\frac{1}{q^{\prime}}=1\). A function \(b(x)\) is called a \((q,\alpha)\)-block, if there there exists a ball \(B(x_{0},r)\) such that \[supp(b)\subset B(x_{0},r),\qquad\|b\|_{L^{q}}\leq r^{-\frac{\alpha}{q^{\prime }}}.\] We further recall the definition of \(\mathcal{B}^{q,\alpha}(\mathbb{R}^{n})\) via \((q,\alpha)\)-blocks from [1]. It was shown in [1] that \(\mathcal{B}^{q,\alpha}(\mathbb{R}^{n})\) is a Banach space, and the dual space of \(\mathcal{B}^{q,\alpha}\) is \(L^{q^{\prime},\alpha}(\mathbb{R}^{n})\). **Definition 2.8**.: Let \(\alpha\in[0,n)\), \(1<q<\infty\). The space \(\mathcal{B}^{q,\alpha}(\mathbb{R}^{n})\) is defined by setting \[\mathcal{B}^{q,\alpha}(\mathbb{R}^{n})=\bigg{\{}g\in L^{1}_{c}(\mathbb{R}^{n} ):g=\sum_{j=1}^{\infty}m_{j}b_{j},\sum_{j=1}^{\infty}|m_{j}|<\infty\bigg{\}},\] where \(\{b_{j}\}_{j\geq 1}\) are \((q,\alpha)\)-block. Furthermore, for every \(g\in\mathcal{B}^{q,\alpha}(\mathbb{R}^{n})\), let \[\|g\|_{\mathcal{B}^{q,\alpha}(\mathbb{R}^{n})}=\inf\biggr{\{}\sum_{j=1}^{ \infty}|m_{j}|\biggr{\}},\] where the infimum is taken over all possible decompositions of \(g\) as above. ## 3. Characterization of relative compactness in the weak Lebesgue spaces Let \(L^{0}(m)\) denote the class of functions in \(L(m)\) that are finite almost everywhere, with the topology of convergence in measure on sets of finite measure. We recall that \(Q\)-BFS is continuously embedded in \(L^{0}(m)\). **Lemma 3.1**.: _(Lemma 3.3 in [2]). Let \(E\) be a \(Q\)-BFS. Then \(E\) is continuously embedded in \(L^{0}(m)\). In particular, if \(f_{k}\) tends to \(f\) in \(E\), then \(f_{k}\) tends to \(f\) in measure on sets of finite measure and hence some sequence convergence pointwise to \(f\) almost everywhere._ **Lemma 3.2**.: _(Theorem 3.17 in [2]). Let \(E\) be a Q-BFS and let \(K\subset E_{a}\). Then \(K\) is relatively compact in \(E\) if and only if it is locally relatively compact in measure and \(K\subset UAC(E)\)._ Now, we give the proof of characterization that a subset in \(L^{p,\infty}(\mathbb{R}^{n})\) is a strongly pre-compact set, which is in itself interesting. **Proof of Theorem 1.1.** We will initially present the proof for the sufficiency. For the case \(1<p<\infty\), we define the mean value of \(f\) in \(\mathcal{F}\) by \[S_{a}(f)(x)=\frac{1}{|B(0,a)|}\int_{|y|\leq a}f(x+y)dy,\] where \(a>0\). By the Minkowski-type inequality for \(L^{p,\infty}(\mathbb{R}^{n})\) with \(1<p<\infty\) (see Proposition 6.3 in Appendix), we have \[\|S_{a}f-f\|_{L^{p,\infty}(\mathbb{R}^{n})} \leq\left\|\frac{1}{|B(0,a)|}\right|\int_{|y|\leq a}f(\cdot+y)-f( \cdot)dy\Big{|}\right\|_{L^{p,\infty}(\mathbb{R}^{n})} \tag{3.1}\] \[\lesssim\frac{1}{|B(0,a)|}\int_{|y|\leq a}\|f(\cdot+y)-f(\cdot)\| _{L^{p,\infty}(\mathbb{R}^{n})}dy\] \[\lesssim\sup_{|y|\leq a}\|f(\cdot+y)-f(\cdot)\|_{L^{p,\infty}( \mathbb{R}^{n})}.\] It follows from (1.1), (1.2) and (3.1) that \[\lim_{a\to 0}\|S_{a}f-f\|_{L^{p,\infty}(\mathbb{R}^{n})}=0,\text{ uniformly in }f\in\mathcal{F} \tag{3.2}\] and the set \(\{S_{a}f:f\in\mathcal{F}\}\subset L^{p,\infty}(\mathbb{R}^{n})\) satisfies \[\sup_{f\in\mathcal{F}}\|S_{a}f\|_{L^{p,\infty}(\mathbb{R}^{n})}\lesssim 1.\] By (1.3), for any \(0<\epsilon<1\), there exist \(N>0\) and \(A\) such that \[1<\epsilon^{-N}/4<A^{n/p}<\epsilon^{-N/2}, \tag{3.3}\] and for every \(f\in\mathcal{F}\), \[\|f_{E_{A}}\|_{L^{p,\infty}(\mathbb{R}^{n})}<\epsilon/8. \tag{3.4}\] Now we prove that for each fixed \(a\), the set \(\{S_{a}f:f\in\mathcal{F}\}\) is a strongly precompact set in \(\mathfrak{C}(E_{A}^{c})\), where \(E_{A}^{c}=\{x\in\mathbb{R}^{n}:|x|\leq A\}\) and \(\mathfrak{C}(E_{a}^{c})\) denotes the continuous function space on \(E_{A}^{c}\) with uniform norm. By Ascoli-Arzela theorem, it suffices to show that \(\{S_{a}f:f\in\mathcal{F}\}\) is bounded and equicontinuous in \(\mathfrak{C}(E_{A}^{c})\). In fact, from Kolmogorov's inequality (see [15, Lemma 2.8, p. 485]), we have \[\|f\|_{L^{q}(Q,\frac{dx}{|Q|})}\leq C\|f\|_{L^{p,\infty}(Q,\frac{dx}{|Q|})} \tag{3.5}\] for any cube \(Q\) and \(0<q<p<\infty\). Applying Holder's inequality and (3.5) for \(f\in\mathcal{F}\) and \(x\in E_{A}^{c}\), we have \[|S_{a}f(x)| \leq\left\{\frac{1}{|B(0,a)|}\int_{|y|\leq a}|f(x+y)|^{q}dy\right\} ^{1/q}\] \[=\left\{\frac{1}{|B(0,a)|}\int_{|y-x|\leq a}|f(y)|^{q}dy\right\} ^{1/q}\] \[\leq C\|f\|_{L^{p,\infty}(\mathbb{R}^{n})},\] where \(1<q<p<\infty\) and the constant \(C\) is independent of \(f\) and \(x\) here. On the other hand, for any \(x_{1},x_{2}\in E_{A}^{c}\), by a direct computation, we obtain \[\begin{split}|S_{a}f(x_{1})-S_{a}f(x_{2})|&\leq\frac {1}{|B(0,a)|}\int_{|y|\leq a}|f(x_{1}+y)-f(x_{2}+y)|dy\\ &\leq\left\{\frac{1}{|B(0,a)|}\int_{|y|\leq a}|f(x_{1}+y)-f(x_{2} +y)|^{q}dy\right\}^{1/q}\\ &\leq C\|f(\cdot+x_{2}-x_{1})-f(\cdot)\|_{L^{p,\infty}(\mathbb{R }^{n})}.\end{split} \tag{3.6}\] Thus, (1.2) and (3.6) show the equicontinuity of \(\{S_{a}f:f\in\mathcal{F}\}\). Next we show that for small enough \(a\), the set \(\{S_{a}f:f\in\mathcal{F}\}\) is also a strongly pre-compact set in \(L^{p,\infty}(\mathbb{R}^{n})\). To do this, we need only to prove that the set \(\{S_{a}f:f\in\mathcal{F}\}\) is a totally bounded set in \(L^{p,\infty}(\mathbb{R}^{n})\). Because the set \(\{S_{a}f:f\in\mathcal{F}\}\) is a totally bounded set in \(\mathfrak{C}(E_{A}^{c})\), hence for the above \(\epsilon\) and \(N\), there exist \(\{f_{1},f_{2},\cdots,f_{m}\}\subset\mathcal{F}\), such that \(\{S_{a}f_{1},S_{a}f_{2},\cdots,S_{a}f_{m}\}\) is a finite \(\epsilon^{N+1}\)-net in \(\{S_{a}f:f\in\mathcal{F}\}\) in the norm of \(\mathfrak{C}(E_{A}^{c})\). We then know that for any \(f\in\mathcal{F}\), there is \(1\leq j\leq m\) such that \[\sup_{y\in E_{A}^{c}}|S_{a}f(y)-S_{a}f_{j}(y)|<\epsilon^{N+1}. \tag{3.7}\] Below we show that \(\{S_{a}f_{1},S_{a}f_{2},\cdots,S_{a}f_{m}\}\) is also a finite \(\epsilon-\)net of \(\{S_{a}f:f\in\mathcal{F}\}\) in the norm of \(L^{p,\infty}(\mathbb{R}^{n})\) if \(a\) is small enough. For any \(f\in\mathcal{F}\), there exists \(f_{j}(1\leq j\leq m)\) such that \[\|S_{a}f-S_{a}f_{j}\|_{L^{p,\infty}(\mathbb{R}^{n})} =\|\big{(}S_{a}f-S_{a}f_{j}\big{)}\chi_{E_{A}}\|_{L^{p,\infty}( \mathbb{R}^{n})}+\|\big{(}S_{a}f-S_{a}f_{j}\big{)}\chi_{E_{A}^{c}}\|_{L^{p, \infty}(\mathbb{R}^{n})}\] \[=:I_{1}+I_{2}.\] We first give the estimate for \(I_{1}\). By (3.2) and the above \(\epsilon\), there exists a constant \(\delta>0\) such that if \(a<\delta\), then \[\|\big{(}S_{a}f-f\big{)}\chi_{E_{A}}\|_{L^{p,\infty}(\mathbb{R}^{n})}\leq \epsilon/8,\qquad\|\big{(}S_{a}f_{j}-f_{j}\big{)}\chi_{E_{A}}\|_{L^{p,\infty}( \mathbb{R}^{n})}\leq\epsilon/8.\] Applying (3.3) and (3.4) and the estimates above, we have \[I_{1} \leq\|\big{(}S_{a}f-f\big{)}\chi_{E_{A}}\|_{L^{p,\infty}( \mathbb{R}^{n})}+\|f\chi_{E_{A}}\|_{L^{p,\infty}(\mathbb{R}^{n})}\] \[\qquad+\|f_{j}\chi_{E_{A}}\|_{L^{p,\infty}(\mathbb{R}^{n})}+\| \big{(}S_{a}f_{j}-f_{j}\big{)}\chi_{E_{A}}\|_{L^{p,\infty}(\mathbb{R}^{n})}\] \[\leq\epsilon/2.\] For \(I_{2}\), the inequalities (3.3) and (3.7) give us that \[I_{2}\leq A^{n/p}\sup_{y\in E_{A}^{c}}|S_{a}f(y)-S_{a}f_{j}(y)|\leq\epsilon/2.\] Therefore, we show that \(\{S_{a}f_{1},S_{a}f_{2}\cdots,S_{a}f_{m}\}\) is also a finite \(\epsilon-\)net of \(\{S_{a}f:f\in\mathcal{F}\}\) in the norm of \(L^{p,\infty}(\mathbb{R}^{n})\) if \(a\) is small enough. Finally, let us show that the set \(\mathcal{F}\) is a relative compact set in \(L^{p,\infty}(\mathbb{R}^{n})\). Taking any sequence \(\{f_{j}\}_{j=1}^{\infty}\) in \(\mathcal{F}\), by the relative compactness of \(\{S_{a}f:f\in\mathcal{F}\}\) in \(L^{p,\infty}(\mathbb{R}^{n})\), there exists a subsequence \(\{S_{a}f_{j_{k}}\}_{k=1}^{\infty}\) of \(\{S_{a}f_{j}:f_{j}\}\) that is convergent in \(L^{p,\infty}(\mathbb{R}^{n})\). So, for any \(\epsilon>0\) there exists \(K\in\mathbb{N}\) such that for any \(k>K\) and \(m\in\mathbb{N}\), \[\|S_{a}f_{j_{k}}-S_{a}f_{j_{k+m}}\|_{L^{p,\infty}(\mathbb{R}^{n})}<\epsilon.\] By (3.2), we have \[\|f_{j_{k}}-f_{j_{k+m}}\|_{L^{p,\infty}(\mathbb{R}^{n})}\] \[\leq\|S_{a}f_{j_{k}}-f_{j_{k}}\|_{L^{p,\infty}(\mathbb{R}^{n})}+ \|S_{a}f_{j_{k}}-S_{a}f_{j_{k+m}}\|_{L^{p,\infty}(\mathbb{R}^{n})}+\|S_{a}f_{j _{k+m}}-f_{j_{k+m}}\|_{L^{p,\infty}(\mathbb{R}^{n})}\] \[\leq 3\epsilon.\] This shows that the subsequence \(\{f_{j_{k}}\}_{k=1}^{\infty}\) converges in \(L^{p,\infty}(\mathbb{R}^{n})\), since \(L^{p,\infty}(\mathbb{R}^{n})\) is a qusi-Banach space. Therefore, the set \(\mathcal{F}\) is a relative compact set in \(L^{p,\infty}(\mathbb{R}^{n})\). For \(0<p\leq 1\), we only need to consider the case that \(\mathcal{F}\) consists of only nonnegative functions. Denote \(\mathcal{F}^{p/2}:=\{f^{p/2}:f\in\mathcal{F}\}\). We claim that \(\mathcal{F}^{p/2}\) is relatively compact in \(L^{2,\infty}(\mathbb{R}^{n})\). We only check condition \((ii)\) by \[\|f^{p/2}(\cdot+y)-f^{p/2}(\cdot)\|_{L^{2,\infty}(\mathbb{R}^{n})} \leq\||f(\cdot+y)-f(\cdot)|^{p/2}\|_{L^{2,\infty}(\mathbb{R}^{n})}\] \[=\|f(\cdot+y)-f(\cdot)\|_{L^{p,\infty}(\mathbb{R}^{n})}^{p/2},\] Next, we prove that the relative compactness of \(\mathcal{F}^{p/2}\) in \(L^{2,\infty}(\mathbb{R}^{n})\) implies the relative compactness of \(\mathcal{F}\) in \(L^{p,\infty}(\mathbb{R}^{n})\). For any sequence \(\{f_{k}\}_{k=1}^{\infty}\) of \(\mathcal{F}\), there exists a subsequence of \(\{f_{k}^{p/2}\}_{k=1}^{\infty}\), denote by \(\{f_{k_{i}}^{p/2}\}_{i=1}^{\infty}\), which tends to \(f^{p/2}\) in \(L^{2,\infty}(\mathbb{R}^{n})\). Using Lemma 3.1, \(f_{k_{i}}^{p/2}\) tends to \(f^{p/2}\) locally in measure, we further choosing the diagonal subsequence of \(\{f_{k_{i}}^{p/2}\}_{i=1}^{\infty}\), still denoted by \(\{f_{k_{i}}^{p/2}\}_{i=1}^{\infty}\), pointwise tends to \(f^{p/2}\) a.e. From this, \(f_{k_{i}}\to f\) pointwise a.e., which further implies that \(f_{k_{i}}\to f\) locally in measure. On the other hand, since \(\mathcal{F}^{p/2}\) is relatively compact in \(L^{2,\infty}(\mathbb{R}^{n})\), then \(\mathcal{F}^{p/2}\subset UAC(L^{2,\infty}(\mathbb{R}^{n}))\) by Lemma 3.2. One can easily verify that \(\mathcal{F}^{p/2}\subset UAC(L^{2,\infty}(\mathbb{R}^{n}))\) implies \(\mathcal{F}\subset UAC(L^{p,\infty}(\mathbb{R}^{n}))\). Now, we have verified that \(\mathcal{F}\subset UAC(L^{p,\infty}(\mathbb{R}^{n}))\) and \(F\) is locally relative compact in measure. The relative compactness of \(\mathcal{F}\) follows by Lemma 3.2. To prove necessity, we first assume that condition \((i)\) in Theorem 1.1 is violated. Then there exists a sequence \(\{f_{m}\}\) of functions belonging to \(\mathcal{F}\) such that the quasi-distance \[\rho(f_{m},0)=\|f_{m}\|_{L^{p,\infty}(\mathbb{R}^{n})}\] tends to \(+\infty\), By \[\rho(f_{m},0)\leq C\big{(}\rho(f_{m},f)+\rho(f,0)\big{)},\] we have \(\rho(f_{m},f)\to+\infty\) as \(m\to+\infty\). Hence, the set \(\mathcal{F}\) is not compact. We now assume that condition \((ii)\) does not hold. Then there exist \(\delta>0\), a sequence \[f_{1},f_{2},\cdots,f_{m},\cdots\] of functions belonging to \(\mathcal{F}\), and a sequence \(a_{m}>0\), \[\lim_{m\to+\infty}a_{m}=0\] such that \[\sup_{y\in B(O,a_{m})}\|f_{m}(\cdot+y)-f_{m}(\cdot)\|_{L^{p,\infty}(\mathbb{ R}^{n})}\geq\delta\] for any \(m\). Clearly \[\delta\leq \sup_{y\in B(O,a_{m})}\|f_{m}(\cdot+y)-f_{m}(\cdot)\|_{L^{p, \infty}(\mathbb{R}^{n})}\] \[\leq C\sup_{y\in B(O,a_{m})}\|f_{m}(\cdot+y)-f(\cdot+y)\|_{L^{p, \infty}(\mathbb{R}^{n})}\] \[+C\sup_{y\in B(O,a_{m})}\|f(\cdot+y)-f(\cdot)\|_{L^{p,\infty}( \mathbb{R}^{n})}\] \[+C\|f_{m}(\cdot)-f(\cdot)\|_{L^{p,\infty}(\mathbb{R}^{n})}\] \[\leq 2C\|f_{m}(\cdot)-f(\cdot)\|_{L^{p,\infty}(\mathbb{R}^{n})}+C\sup _{y\in B(O,a_{m})}\|f(\cdot+y)-f(\cdot)\|_{L^{p,\infty}(\mathbb{R}^{n})}.\] Since \[\lim_{m\to+\infty}\sup_{y\in B(O,a_{m})}\|f(\cdot+y)-f(\cdot)\|_{L^{p,\infty}( \mathbb{R}^{n})}=0,\] it follows that \[\varliminf_{m\to+\infty}\|f_{m}(\cdot)-f(\cdot)\|_{L^{p,\infty}(\mathbb{R}^{n})} \geq\frac{\delta}{2C}.\] Consequently, the sequence \(\{f_{m}\}\) and the set \(\mathcal{F}\) are not compact. Finally, if the condition \((iii)\) is not satisfied, there exist \(\delta>0\), a sequence \[f_{1},f_{2},\cdots,f_{m},\cdots\] of functions belonging to \(\mathcal{F}\), and a sequence \(A_{m}>0\), \[\lim_{m\to+\infty}A_{m}=+\infty,\] such that \[\|f_{m}\chi_{E_{A_{m}}}\|_{L^{p,\infty}(\mathbb{R}^{n})}\geq\delta\] for any \(m\). Therefore, \[\delta \leq\|f_{m}\chi_{E_{A_{m}}}\|_{L^{p,\infty}(\mathbb{R}^{n})}\] \[\leq C\|(f_{m}-f)\chi_{E_{A_{m}}}\|_{L^{p,\infty}(\mathbb{R}^{n} )}+C\|f\chi_{E_{A_{m}}}\|_{L^{p,\infty}(\mathbb{R}^{n})}.\] Note that \[\lim_{m\to+\infty}\|f\chi_{E_{A_{m}}}\|_{L^{p,\infty}(\mathbb{R}^{n})}=0,\] then \[\|(f_{m}-f)\chi_{E_{A_{m}}}\|_{L^{p,\infty}(\mathbb{R}^{n})}\geq\frac{\delta} {C}\] and the set \(\mathcal{F}\) is not compact. So the condition \((iii)\) must be hold. We finish the proof of Theorem 1.1. ## 4. Compactness of commutators in Hardy type spaces The characterization of relative compactness in the classical \(L^{p}(\mathbb{R}^{n})\) Lebesgue spaces was discovered by Kolmogorov (see [26, 36]) under some restrictive conditions. Then it was extended by Riesz [35]. The complete version of the classical Riesz-Kolmogorov theorem can be stated as follows. **Lemma 4.1**.: _(Classical Riesz-Kolmogorov theorem.) Let \(1\leq p<\infty\). A subset \(\mathcal{F}\) of \(L^{p}(\mathbb{R}^{n})\) is relatively compact if and only if the following three conditions hold:_ * _norm boundedness uniformly_ (4.1) \[\sup_{f\in\mathcal{F}}\|f\|_{L^{p}(\mathbb{R}^{n})}<\infty;\] * _translation continuity uniformly_ (4.2) \[\lim_{r\to 0}\sup_{y\in B(O,r)}\|f(\cdot+y)-f(\cdot)\|_{L^{p}(\mathbb{R}^{n})}=0 \text{ uniformly inf}\in\mathcal{F};\] _ * _control uniformly away from the origin_ (4.3) \[\lim_{\alpha\to\infty}\|f\chi_{E_{A}}\|_{L^{p}(\mathbb{R}^{n})}=0\text{ uniformly inf}\in\mathcal{F},\] _where_ \(E_{A}=\{x\in\mathbb{R}^{n}:|x|>A\}\)_._ As mentioned in the introduction, CMO is the closure in BMO of the space of \(C^{\infty}\) functions with compact support. In [43], it was shown that CMO can be characterized in the following way. **Lemma 4.2**.: _Let \(f\in\operatorname{BMO}(\mathbb{R}^{n}).\) Then \(f\in\operatorname{CMO}(\mathbb{R}^{n})\) if and only if the following conditions hold:_ 1. \(\lim_{\delta\to 0}\sup_{|Q|=\delta}\mathcal{O}(f;Q)=0\)_;_ 2. \(\lim_{R\to\infty}\sup_{|Q|=R}\mathcal{O}(f;Q)=0\)_;_ 3. \(\lim_{R\to\infty}\sup_{Q\cap[-d,d]^{n}=}\mathcal{O}(f;Q)=0,\)__ _where_ \[\mathcal{Q}(f;Q)=\frac{1}{|Q|}\int_{Q}|f(x)-f_{Q}|dx\qquad\text{and}\qquad f_ {Q}=\frac{1}{|Q|}\int_{Q}f(x)dx.\] Since then, there have been a lot of articles concerning the boundedness and the compactness of commutators on function spaces as well as their applications in PDEs, see [3, 4, 6, 5, 11, 19, 40]. Krantz and Li in [24] and [25] have applied commutator theory to give a compactness characterization of Hankel operators on holomorphic Hardy spaces \(H^{2}(D)\), where \(D\) is a bounded, strictly pseudoconvex domain in \(\mathbb{C}^{n}\). It is perhaps for this important reason that the boundedness of \([b,T]\) attracted one's attention among researchers in harmonic analysis and PDEs. The compactness criteria were studied by many authors in various settings. Meanwhile, it has played an important role in the compactness results of certain bounded operators in the field of harmonic analysis. Let \(\alpha\in[0,1]\). For the locally integral function \(f\) and cube \(Q\), we write \[\mathcal{O}_{\alpha}(f;Q)=\frac{1}{|Q|^{1+\alpha/n}}\int_{Q}|f(x)-f_{Q}|dx.\] Define by \(\operatorname{CMO}_{\alpha}(\mathbb{R}^{n})\) the \(C_{c}^{\infty}(\mathbb{R}^{n})\) closure in \(Lip_{\alpha}(\mathbb{R}^{n})\). The authors in [18] showed that **Lemma 4.3**.: _Let \(\alpha\in(0,1)\). A \(Lip_{\alpha}(\mathbb{R}^{n})\) function \(f\) belongs to \(\operatorname{CMO}_{\alpha}(\mathbb{R}^{n})\) if it satisfies the following three conditions the following three conditions:_ 1. \(\lim_{\delta\to 0}\sup_{|Q|=\delta}\mathcal{O}_{\alpha}(f;Q)=0\)_;_ 2. \(\lim_{R\to\infty}\sup_{|Q|=R}\mathcal{O}_{\alpha}(f;Q)=0\)_;_ 3. \(\lim_{R\to\infty}\sup_{Q\cap[-d,d]^{n}=}\mathcal{O}_{\alpha}(f;Q)=0.\)__ Now, we give the proofs of the compactness results in Hardy type spaces. **Proof of Theorem 1.2.** Assume that \(b\in\operatorname{CMO}(\mathbb{R}^{n})\) and \(E\) be a bounded set in \(H^{1}_{b}(\mathbb{R}^{n}).\) It is enough to show that \([b,T](E)\) is relatively compact in \(L^{1}(\mathbb{R}^{n})\). By the results in [34], the commutator \([b,T]\) maps form \(H^{1}_{b}(\mathbb{R}^{n})\) into \(L^{1}(\mathbb{R}^{n})\) with the estimates of the form \[\|[b,T](f)\|_{L^{1}(\mathbb{R}^{n})}\lesssim\|b\|_{\operatorname{BMO}(\mathbb{ R}^{n})}\|f\|_{H^{1}_{b}(\mathbb{R}^{n})}. \tag{4.4}\] On the other hand, since \(b\in\operatorname{CMO}(\mathbb{R}^{n})\), then for any \(\epsilon>0\) there exists a function \(b_{\epsilon}\in C_{c}^{\infty}(\mathbb{R}^{n})\) such that \[\|b-b_{\epsilon}\|_{\operatorname{BMO}(\mathbb{R}^{n})}<\epsilon.\] Perez in [34] proved that the commutator \([b,T]\) of the function \(b\in\operatorname{BMO}(\mathbb{R}^{n})\) is bounded from \(H^{1}_{b}(\mathbb{R}^{n})\) to \(L^{1}(\mathbb{R}^{n})\), which shows that \[\|[b,T](f)\|_{L^{1}(\mathbb{R}^{n})} \lesssim\|[b-b_{\epsilon},T](f)\|_{L^{1}(\mathbb{R}^{n})}+\|[b_{ \epsilon},T](f)\|_{L^{1}(\mathbb{R}^{n})}\] \[\lesssim\|b-b_{\epsilon}\|_{\operatorname{BMO}(\mathbb{R}^{n})} \|f\|_{H^{1}_{b}(\mathbb{R}^{n})}+\|[b_{\epsilon},T](f)\|_{L^{1}(\mathbb{R}^{n })}\] \[\lesssim\epsilon+\|[b_{\epsilon},T](f)\|_{L^{1}(\mathbb{R}^{n})}\] for all \(f\in E\). Moreover, for any \(b\)-atom \(a\), we have \[\Big{|}\int_{\mathbb{R}^{n}}a(y)b_{\epsilon}(y)dy\Big{|} =\Big{|}\int_{\mathbb{R}^{n}}a(y)\big{(}b_{\epsilon}(y)-b(y)\big{)} dy\Big{|}\] \[\leq\|a\|_{H^{1}(\mathbb{R}^{n})}\|b-b_{\epsilon}\|_{\operatorname {BMO}(\mathbb{R}^{n})}\leq\epsilon.\] Consequently, our task is to show that \(b_{\epsilon},T\) is relatively compact in \(L^{1}(\mathbb{R}^{n})\). Given the definition of the space \(H^{1}_{b}(\mathbb{R}^{n})\), our proof hinges on demonstrating that for a \(b-\)atom \(a\), the function \(b_{\epsilon},T\) fulfills the conditions (2)-(3) as stipulated in Lemma 4.1. Next, we show that \([b_{\epsilon},T](a)\) also satisfies (2). Indeed, suppose that \(\operatorname{supp}(b_{\epsilon})\subset B_{R_{\epsilon}}\) for some \(R_{\epsilon}>1\). Then, for any \(f\in E\) and \(x\in B^{c}_{R}\) with \(R>2R_{\epsilon}\), we get that \[b_{\epsilon}(x)T(a)(x)=0\] and for some \(y_{0}\in B(0,R_{\epsilon})\), we have \[|x-y_{0}|\approx|x-y|\approx|x|\] for any \(y\in B_{R_{\epsilon}}\), then we obtain \[\big{|}[b_{\epsilon},T](a)(x)\big{|}=\big{|}T(b_{\epsilon}a)(x)\big{|}\] \[=\Big{|}\int_{\mathbb{R}^{n}}K(x,y)b_{\epsilon}(y)a(y)dy\Big{|}\] \[=\Big{|}\int_{\mathbb{R}^{n}}K(x,y)b_{\epsilon}(y)a(y)dy-\int_{ \mathbb{R}^{n}}K(x,y_{0})b_{\epsilon}(y)a(y)dy\Big{|}\] \[\lesssim\int_{\mathbb{R}^{n}}\big{|}K(x,y)-K(x,y_{0})\big{|}|b_{ \epsilon}(y)||a(y)|dy\] \[\lesssim\frac{|y-y_{0}|^{\gamma}}{|x-y_{0}|^{n+\gamma}}\|a\|_{L^ {\infty}(\mathbb{R}^{n})}\|b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}|Q|\] \[\lesssim|x|^{-n-\gamma}.\] It follows that \[\|[b_{\epsilon},T](a)\chi_{B_{R}^{c}}\|_{L^{1}(\mathbb{R}^{n})}\lesssim\int_{ |x|>R}|x|^{-n-\gamma}dx\lesssim R^{-\gamma}.\] This implies that \[\|[b_{\epsilon},T](a)(x)\chi_{B_{R}^{c}}(x)\|_{L^{1}(\mathbb{R}^{n})}\to 0, \text{ as }R\to\infty. \tag{4.5}\] Finally, we give the estimate for the condition (3). To do this, we prove that for any \(\epsilon>0\), there exists a sufficiently small \(|t|\) (independent of \(a\)), then \[\|[b_{\epsilon},T](a)(\cdot+t)-[b_{\epsilon},T](a)(\cdot)\|_{L^{1}(\mathbb{R}^ {n})}\lesssim\epsilon. \tag{4.6}\] We write \[[b_{\epsilon},T](a)(x+t)-[b_{\epsilon},T](a)(x)\] \[=\int_{\mathbb{R}^{n}}(b_{\epsilon}(x+t)-b_{\epsilon}(y))K(x+t,y )a(y)dy-\int_{\mathbb{R}^{n}}(b_{\epsilon}(x)-b_{\epsilon}(y))K(x,y)a(y)dy\] \[=\int_{|x-y|>\delta}(b_{\epsilon}(x+t)-b_{\epsilon}(x))K(x,y)a(y)dy\] \[\quad+\int_{|x-y|>\delta}(b_{\epsilon}(x+t)-b_{\epsilon}(x))\big{(} K(x+t,y)-K(x,y)\big{)}a(y)dy\] \[\quad+\int_{|x-y|\leq\delta}(b_{\epsilon}(y)-b_{\epsilon}(x))K(x, y)a(y)dy\] \[\quad+\int_{|x-y|\leq\delta}(b_{\epsilon}(x+t)-b_{\epsilon}(y))K( x+t,y)\big{)}a(y)dy\] \[=:I_{1}+I_{2}+I_{3}+I_{4},\] where, for a convenient choice of \(\delta>0\) to be specified later. If we now let \(T^{*}\) denote the maximal truncated bilinear singular integral operator \[T^{*}(f)(x)=\sup_{\delta>0}\Big{|}\int_{|x-y|>\delta}K(x,y)f(y)dy\Big{|},\] then \[|I_{1}| \leq|b_{\epsilon}(x+t)-b_{\epsilon}(x)|\Big{|}\int_{|x-y|>\delta}K (x,y)a(y)dy\Big{|}\] \[\lesssim\|\nabla b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}|t|T^ {*}(a)(x).\] In [17], Grafakos and Torres proved that \(T^{*}\) maps from \(H^{1}(\mathbb{R}^{n})\) into \(L^{1}(\mathbb{R}^{n})\). Then \[\|I_{1}\|_{L^{1}(\mathbb{R}^{n})}\lesssim|t|. \tag{4.7}\] In order to estimate \(I_{2}\), thanks to the smoothness of the kernel \(K\) and the change of variables, we obtain \[|I_{2}| \lesssim\|b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}|t|^{\gamma} \int_{|x-y|>\delta}\frac{|a(y)|}{|x-y|^{n+\gamma}}dy\] \[\lesssim\frac{|t|^{\gamma}}{\delta^{\gamma}}\|b_{\epsilon}\|_{L^ {\infty}(\mathbb{R}^{n})}M(a)(x)\] and \[\|I_{2}\|_{L^{1}(\mathbb{R}^{n})}\lesssim\frac{|t|^{\gamma}}{\delta^{\gamma}}. \tag{4.8}\] To estimate the third term, we use the size estimate of the Calderon-Zygmund kernel \(K\). We have \[|I_{3}| \lesssim\|\nabla b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}\int _{|x-y|\leq\delta}\frac{|a(y)|}{|x-y|^{n-1}}dy\] \[\lesssim\|\nabla b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})} \delta M(a)(x)\] and \[\|I_{3}\|_{L^{1}(\mathbb{R}^{n})}\lesssim\delta. \tag{4.9}\] Similarly, we also obtain \[\|I_{4}\|_{L^{1}(\mathbb{R}^{n})}\lesssim\delta. \tag{4.10}\] Let us now define \(t_{0}=\frac{\epsilon^{2}}{1+\|b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}+\| \nabla b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}}\). For each \(0<|t|<t_{0}\) and select \(\delta=|t|/\epsilon\). Inequalities (4.7), (4.8), (4.9) and (4.10) imply (4.6). Combining this with the inequalities (4.4) and (4.5), we conclude that \([b,T]\) is a compact. **Proof of Theorem 1.3.** Assume that \(b\in\mathrm{CMO}_{\alpha}(\mathbb{R}^{n})\) and \(E\) be a bounded set in \(H^{1}(\mathbb{R}^{n}).\) It is enough to show that \([b,T](E)\) is relatively compact in \(L^{q}(\mathbb{R}^{n})\) with \(q=\frac{n}{n-q}>1.\) Since \(b\in\text{CMO}_{\alpha}(\mathbb{R}^{n})\), then for any \(\epsilon>0\) there exists a function \(b_{\epsilon}\in C^{\infty}_{c}(\mathbb{R}^{n})\) such that \[\|b-b_{\epsilon}\|_{Lip_{\alpha}(\mathbb{R}^{n})}<\epsilon.\] In [30], Lu, Wu and Yang proved that the commutator \([b,T]\) of the function \(b\in Lip_{\alpha}(\mathbb{R}^{n})\) is bounded from \(H^{1}(\mathbb{R}^{n})\) to \(L^{\frac{n}{n-\alpha}}(\mathbb{R}^{n})\), which shows that \[\|[b,T](f)\|_{L^{q}(\mathbb{R}^{n})} \lesssim\|[b-b_{\epsilon},T](f)\|_{L^{q}(\mathbb{R}^{n})}+\|[b_{ \epsilon},T](f)\|_{L^{q}(\mathbb{R}^{n})}\] \[\lesssim\|b-b_{\epsilon}\|_{Lip_{\alpha}(\mathbb{R}^{n})}\|f\|_{ H^{1}(\mathbb{R}^{n})}+\|[b_{\epsilon},T](f)\|_{L^{q}(\mathbb{R}^{n})}\] \[\lesssim\epsilon+\|[b_{\epsilon},T](f)\|_{L^{q}(\mathbb{R}^{n})}\] for all \(f\in E\). Then, it suffices to demonstrate that \([b_{\epsilon},T](E)\) is relatively compact in \(L^{q}(\mathbb{R}^{n})\). In addition, we obtain that \([b_{\epsilon},T](E)\) satisfies (1) in Lemma 4.1. Next, we show that \([b_{\epsilon},T](E)\) also satisfies (2). Indeed, suppose that \(\text{supp}(b_{\epsilon})\subset B_{R_{\epsilon}}\) for some \(R_{\epsilon}>1\). Then, for any \(f\in E\) and \(x\in B^{c}_{R}\) with \(R>2R_{\epsilon}\), we get that \[b_{\epsilon}(x)T(f)(x)=0\qquad\text{and}\qquad|[b_{\epsilon},T](f)(x)|=|T(b_{ \epsilon}f)(x)|.\] For \(x\in B^{c}_{R}\) and \(y\in B_{R_{\epsilon}}\), we get \(|x-y|\approx|x|\) and \[|[b_{\epsilon},T](f)(x)\chi_{B^{c}_{R}}(x)|\lesssim|x|^{-n}\chi_{B^{c}_{R}}(x )\|b_{\epsilon}f\|_{L^{1}(\mathbb{R}^{n})}\lesssim|x|^{-n}\chi_{B^{c}_{R}}(x )\|b_{\epsilon}\|_{\text{BMO}(\mathbb{R}^{n})}\|f\|_{H^{1}(\mathbb{R}^{n})}.\] It follows that \[\|[b_{\epsilon},T](f)(x)\chi_{B^{c}_{R}}(x)\|_{L^{q}(\mathbb{R}^{ n})} \leq\|[b_{\epsilon},T](f)(x)\chi_{B^{c}_{R}}(x)\|_{L^{q}(\mathbb{ R}^{n})}\] \[\lesssim\|f\|_{H^{1}(\mathbb{R}^{n})}\bigg{(}\int_{|x|>R}|x|^{-nq }dx\bigg{)}^{1/q}\] \[\lesssim R^{-n+n/q}\|f\|_{H^{1}(\mathbb{R}^{n})}.\] This implies that \(\|[b_{\epsilon},T](f)(x)\chi_{B^{c}_{R}}(x)\|_{L^{q}(\mathbb{R}^{n})}\to 0\), as \(R\to\infty\). To prove the condition (3), we prove that for every \(\delta>0\), if \(|t|\) is sufficiently small(merely depending on \(\delta\)), then for every \(f\in E\), \[\|[b_{\epsilon},T](f)(\cdot+t)-[b_{\epsilon},T](f)(\cdot)\|_{L^{q}(\mathbb{R}^ {n})}\lesssim\delta^{\eta}, \tag{4.11}\] where \(\eta=\min\{1-\alpha,\alpha+3\gamma\}\). We write \[[b_{\epsilon},T](f)(x+t)-[b_{\epsilon},T](f)(x)\] \[=\int_{\mathbb{R}^{n}}(b_{\epsilon}(x+t)-b_{\epsilon}(y))K(x+t,y) f(y)dy-\int_{\mathbb{R}^{n}}(b(x)-b(y))K(x,y)f(y)dy\] \[=\int_{|x-y|>\delta^{-1}|t|}(b_{\epsilon}(x+t)-b_{\epsilon}(x))K( x,y)f(y)dy\] \[\quad+\int_{|x-y|>\delta^{-1}|t|}(b_{\epsilon}(x+t)-b_{\epsilon}( x))\big{(}K(x+t,y)-K(x,y)\big{)}f(y)dy\] \[\quad+\int_{|x-y|\leq\delta^{-1}|t|}(b_{\epsilon}(y)-b_{\epsilon}( x))K(x,y)f(y)dy\] \[\quad+\int_{|x-y|\leq\delta^{-1}|t|}(b_{\epsilon}(x+t)-b_{ \epsilon}(y))K(x+t,y)\big{)}f(y)dy\] \[=:J_{1}+J_{2}+J_{3}+J_{4}.\] We first consider \(J_{1}\). Thanks to the size of the kernel \(K\), we get \[\begin{split}|J_{1}|&\leq|b_{\epsilon}(x+t)-b_{ \epsilon}(x)|\Big{|}\int_{|x-y|>\delta^{-1}|t|}K(x,y)f(y)dy\Big{|}\\ &\lesssim\|\nabla b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}|t| \int_{|x-y|>\delta^{-1}|t|}\frac{|f(y)|}{|x-y|^{n}}dy\\ &\lesssim\|\nabla b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}|t| \int_{|x-y|>\delta^{-1}|t|}\frac{|f(y)|}{|x-y|^{n-\alpha}}\cdot\frac{1}{|x-y| ^{\alpha}}dy\\ &\lesssim\|\nabla b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}|t| (\delta^{-1}|t|)^{\alpha}I_{\alpha}(|f|)(x),\end{split} \tag{4.12}\] where \(I_{\alpha}\) stands for the fractional operator, \[I_{\alpha}(f)(x)=\int_{\mathbb{R}^{n}}\frac{f(y)}{|x-y|^{n-\alpha}}dy.\] By the \((H^{1}(\mathbb{R}^{n}),L^{q}(\mathbb{R}^{n}))\) boundedness of fractional integral operator \(I_{\alpha}\), we obtain \[\begin{split}\|J_{1}\|_{L^{q}(\mathbb{R}^{n})}& \lesssim\|\nabla b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}|t|( \delta^{-1}|t|)^{\alpha}\|I_{\alpha}(f)\|_{L^{q}(\mathbb{R}^{n})}\\ &\lesssim\|\nabla b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}|t| (\delta^{-1}|t|)^{\alpha}\|f\|_{H^{1}(\mathbb{R}^{n})}\lesssim\delta^{-\alpha }|t|^{1+\alpha}.\end{split} \tag{4.13}\] As for \(J_{2}\), applying the smoothness of the kernel \(K\), we deduce that \[\begin{split}|J_{2}|&\lesssim\|b_{\epsilon}\|_{L^{ \infty}(\mathbb{R}^{n})}|t|^{\gamma}\int_{|x-y|>\delta^{-1}t}\frac{|f(y)|}{|x-y| ^{n+\gamma}}dy\\ &\lesssim\|b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}|t|^{\gamma }(\delta^{-1}|t|)^{\alpha+\gamma}I_{\alpha}(|f|)(x)\end{split} \tag{4.14}\] and \[\|J_{2}\|_{L^{q}(\mathbb{R}^{n})}\lesssim\delta^{-\alpha-\gamma}|t|^{\alpha+2 \gamma}. \tag{4.15}\] Next, we consider \(J_{3}\). The Holder inequality gives us that \[|J_{3}| \lesssim\|\nabla b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}\int_{|x- y|\leq\delta^{-1}|t|}\frac{|f(y)|}{|x-y|^{n-1}}dy\] \[\lesssim\|\nabla b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}( \delta^{-1}|t|)^{1-\alpha}I_{\alpha}(|f|)(x) \tag{4.16}\] and \[\|J_{3}\|_{L^{q}(\mathbb{R}^{n})}\lesssim\delta^{-1+\alpha}|t|^{1-\alpha}. \tag{4.17}\] Similarly, we also obtain \[|J_{4}| \lesssim\|\nabla b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}\int_ {|x-y|\leq\delta^{-1}|t|}\frac{|f(y)|}{|x+t-y|^{n-1}}dy\] \[\lesssim\|\nabla b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}( \delta^{-1}|t|)^{1-\alpha}I_{\alpha}(|f|)(x+t) \tag{4.18}\] and \[\|J_{4}\|_{L^{q}(\mathbb{R}^{n})}\lesssim\delta^{-1+\alpha}|t|^{1-\alpha}. \tag{4.19}\] A combination of the inequalities (4.13), (4.15), (4.17) and (4.19) provides us \[\|[b_{\epsilon},T](f)(\cdot+t)-[b_{\epsilon},T](f)(\cdot)\|_{L^{q}(\mathbb{R }^{n})}\lesssim\delta^{-\alpha}|t|^{1+\alpha}+\delta^{-\alpha-\gamma}|t|^{ \alpha+2\gamma}+\delta^{-1+\alpha}|t|^{1-\alpha}.\] We assume that \(|t|\leq\delta^{2}<1\), then \[\|[b_{\epsilon},T](f)(\cdot+t)-[b_{\epsilon},T](f)(\cdot)\|_{L^{q}(\mathbb{R }^{n})}\lesssim\delta^{\eta},\] where \(\eta=\min\{1-\alpha,\alpha+3\gamma\}<1\). Thus we prove the inequality (4.11) and \([b,T]\) is compact from \(H^{1}(\mathbb{R}^{n})\) to \(L^{q}(\mathbb{R}^{n})\). ## 5. Characterization of compactness of commutator in the endpoint space In this section, we will proceed with the proof of the following auxiliary lemmas, which we need to prove our main results. We first recall a technical lemma about certain \(H^{\rho}(\mathbb{R}^{n})\) (see, [41] for \(\rho=1\) and [42] for \(\rho<1\)). **Lemma 5.1**.: _Let \(\frac{n}{n+1}<\rho\leq 1\) and \(f\) be a function satisfying the following estimates:_ 1. \(\int_{\mathbb{R}^{n}}f(x)dx=0;\)__ 2. _there exist balls_ \(B_{1}=B(x_{1},r)\) _and_ \(B_{2}=B(x_{2},r)\) _for some_ \(x_{1},x_{2}\in\mathbb{R}^{n}\) _and_ \(r>0\) _such that_ \[|f(x)|\leq h_{1}(x)\chi_{B_{1}}(x)+h_{2}(x)\chi_{B_{2}}(x),\] _where_ \(\|h_{i}\|_{L^{q}(\mathbb{R}^{n})}\leq C|B_{i}|^{1/q-1/\rho}\) _with_ \(1<q\leq\infty\)_;_ 3. \(|x_{1}-x_{2}|=Nr\) _Then, \(f\in H^{\rho}(\mathbb{R}^{n})\) and there exists a positive constant \(C\) independent of \(x_{1},x_{2},r\) such that_ \[\|f\|_{H^{1}(\mathbb{R}^{n})}\leq C\log N\text{ for }\rho=1\] _and_ \[\|f\|_{H^{\rho}(\mathbb{R}^{n})}\leq CN^{n(\frac{1}{\rho}-1)}\text{ for }0<\rho<1.\] **Lemma 5.2**.: _Let \(0<\alpha<1\), \(1<q_{0}<q=\frac{n}{n-\alpha}\) and \(\rho=\frac{n}{n+\alpha}\). For any \(g\in L^{\infty}_{c}(\mathbb{R}^{n}),h\in L^{\infty}_{c}(\mathbb{R}^{n})\), we have_ \[\|\Pi(g,h)\|_{H^{\rho}(\mathbb{R}^{n})}\lesssim\|g\|_{B^{q_{0}q^{\prime},(1- \frac{q_{0}}{q})n}(\mathbb{R}^{n})}\|h\|_{L^{1}(\mathbb{R}^{n})}, \tag{5.1}\] _where \(\Pi(g,h):=gT^{*}(h)-hT(g)\)._ Proof.: For any \(g,h\in L^{\infty}_{c}(\mathbb{R}^{n})\), to show that \(\Pi(g,h)\in H^{\rho}(\mathbb{R}^{n})\) with the norm (5.1), we need to consider the properties of \(\Pi(g,h)\). Since \(g,h\) are all in \(L^{\infty}_{c}(\mathbb{R}^{n})\), from the boundedness of Calderon-Zygmund operators, it is direct to see that \(\Pi(g,h)\in L^{\rho}(\mathbb{R}^{n})\bigcap L^{2}(\mathbb{R}^{n})\) with compact support. Moreover, note that from the definition of \(\Pi\), we have \[\int_{\mathbb{R}^{n}}\Pi(g,h)(x)dx=0.\] Hence, we immediately have that \(\Pi(g,h)\) is a multiple of an \(H^{\rho}(\mathbb{R}^{n})\). Then it suffices to verify that \(H^{\rho}(\mathbb{R}^{n})\) norm of \(\Pi(g,h)\) satifies (5.1). We first show that the inner product \[\langle b,\Pi(g,h)\rangle_{L^{2}(\mathbb{R}^{n})}:=\int_{\mathbb{R}^{n}}b(x) \Pi(g,h)(x)dx \tag{5.2}\] is well defined for \(b\in Lip_{\alpha}(\mathbb{R}^{n})\). Without loss of generality we assume that \(\Pi(g,h)\) is supported in a cube \(Q_{\Pi}\). We also note that for \(b\in Lip_{\alpha}(\mathbb{R}^{n})\cap L^{2}_{loc}(\mathbb{R}^{n})\). As a consequence, we obtain \[\bigg{|}\int_{\mathbb{R}^{n}}b(x)\Pi(g,h)(x)dx\bigg{|}\] \[=|Q_{\Pi}|^{1+\alpha/n}\bigg{|}\frac{1}{|Q_{\Pi}|^{1+\alpha/n}} \int_{Q_{\Pi}}(b(x)-b_{Q_{\Pi}})\Pi(g,h)(x)dx\bigg{|}\] \[\leq|Q_{\Pi}|^{1+\alpha/n}\|b\|_{Lip_{\alpha}(\mathbb{R}^{n})}\| \Pi(g,h)\|_{L^{2}(\mathbb{R}^{n})}<\infty,\] where the equality above follows from the cancellation condition of \(\Pi(g,h)\) and the first inequality above follows from Holder inequality. This shows that the inner product in (5.2) is well defined. From Kolmogorov's inequality, we have \(L^{q,\infty}(\mathbb{R}^{n})\subset L^{q_{0},\beta}(\mathbb{R}^{n})\) with \(\beta=n(1-q_{0}/q)\). Since \([b,T]\) is bounded from \(L^{1}(\mathbb{R}^{n})\) to \(L^{q,\infty}(\mathbb{R}^{n})\) when \(b\in Lip_{\alpha}(\mathbb{R}^{n})\), from the duality results of the block and Morrey spaces, we have \[\Big{|}\int_{\mathbb{R}^{n}}b(x)\Pi(g,h)dx\Big{|} =\Big{|}\int_{\mathbb{R}^{n}}g(x)[b,T](h)(x)dx\Big{|}\] \[\leq\|g\|_{\mathcal{B}^{q_{0},\beta}}\|[b,T](h)\|_{L^{q_{0},\beta} (\mathbb{R}^{n})}\] \[\leq C\|g\|_{\mathcal{B}^{q_{0},\beta}}\|[b,T](h)\|_{L^{q,\infty} (\mathbb{R}^{n})}\] \[\lesssim\|b\|_{Lip_{\alpha}(\mathbb{R}^{n})}\|g\|_{\mathcal{B}^{ q_{0},\beta}}\|h\|_{L^{1}(\mathbb{R}^{n})}.\] We point out that from the fundamental fact as in [16, Exercise 1.4.12(b)], we have \(\Pi(g,h)(x)\) is in \(H^{\rho}(\mathbb{R}^{n})\) with \[\|\Pi(g,h)\|_{H^{\rho}(\mathbb{R}^{n})} \approx\sup_{b:\|b\|_{Lip_{\alpha}(\mathbb{R}^{n})\leq 1}}\big{|} \langle b,\Pi(g,h)\rangle\big{|}\] \[\lesssim\|g\|_{\mathcal{B}^{q_{0}^{\prime},(1-\frac{q_{0}}{q})_{n }}}\|h\|_{L^{1}(\mathbb{R}^{n})},\] which implies that (5.1) holds. The proof of Lemma 5.2 is completed. **Lemma 5.3**.: _Let \(0<\alpha<1\), \(1<q_{0}<q=\frac{n}{n-\alpha}\) and \(\rho=\frac{n}{n+\alpha}\). If \(f\in H^{\rho}(\mathbb{R}^{n})\) can be written as_ \[f=\sum_{k\geq 1}\lambda_{k}a_{k}.\] _Then, for any \(\varepsilon>0\), there exists \(\{g^{k}\}_{k\geq 1},\{h^{k}\}_{k\geq 1}\subset L_{c}^{\infty}(\mathbb{R}^{n})\), and a large positive number \(N\) (depending only on \(\varepsilon\)) such that_ \[\|a_{k}-\Pi(g^{k},h^{k})\|_{H^{\rho}(\mathbb{R}^{n})}<\varepsilon \tag{5.3}\] _and_ \[\sum_{k\geq 1}|\lambda_{k}|\|g^{k}\|_{\mathcal{B}^{q_{0}^{\prime},n(1-\frac{q_ {0}}{q})}(\mathbb{R}^{n})}\|h^{k}\|_{L^{1}(\mathbb{R}^{n})}\leq CN^{n}\|f\|_{ H^{\rho}(\mathbb{R}^{n})}.\] _Furthermore, we have_ \[\Big{\|}f-\sum_{k\geq 1}\lambda_{k}\Pi(g^{k},h^{k})\Big{\|}_{H^{\rho}( \mathbb{R}^{n})}\leq C\varepsilon\|f\|_{H^{\rho}(\mathbb{R}^{n})}.\] Proof.: Let \(a\) be an \(H^{\rho}(\mathbb{R}^{n})\)-atom, supported in \(B(x_{0},r)=:B_{0}\), for some \(x_{0}\in\mathbb{R}^{n}\) and for \(r>0\), such that \[\int_{\mathbb{R}^{n}}a(x)dx=0\qquad and\qquad\|a\|_{L^{\infty}(\mathbb{R}^{n}) }\leq r^{-n+\alpha}.\] We select \(y_{0}\in\mathbb{R}^{n}\) such that \(|x_{0}-y_{0}|=Nr\). Apply the homogeneity of \(T\), we get that for any \(x\in B(y_{0},r)\), \[K(x_{0}-x)\geq\frac{C}{(Nr)^{n}}\] and \[|T^{*}(\chi_{B_{(y_{0},r)}})(x_{0})| =\left|\int_{\mathbb{R}^{n}}\frac{\chi_{B_{(y_{0},r)}}(z)}{|x_{0}-z |^{n}}dz\right|\] \[=\left|\int_{B(y_{0},r)}\frac{1}{|x_{0}-z|^{n}}dz\right|\] \[\geq C(Nr)^{-n}|B(y_{0},r)|\] \[\geq CN^{-n}.\] Now, let us set \[g(x):=\chi_{B_{(y_{0},r)}(x)}\quad\text{and}\quad h(x):=\frac{a(x)}{T^{*}(g)( x_{0})}.\] From the definitions of the functions \(g(x)\) and \(h(x)\), we arrive at \[\begin{cases}\|g\|_{\mathcal{B}^{q0^{\prime},(1-\frac{q_{0}}{q})n}(\mathbb{R} ^{n})}\leq Cr^{n-n/q};\\ \|h\|_{L^{1}(\mathbb{R}^{n})}\leq\frac{Cr^{n}\|a\|_{L^{\infty}(\mathbb{R}^{n}) }}{|T^{*}(g)(x_{0})|}\leq CN^{n}r^{\alpha}.\end{cases} \tag{5.4}\] By a direct computation, (5.4) shows that \[\|g\|_{\mathcal{B}^{q0^{\prime},(1-\frac{q_{0}}{q})n}(\mathbb{R}^{n})}\|h\|_{ L^{1}(\mathbb{R}^{n})}\leq CN^{n}r^{n-\frac{n}{q}-\alpha}\leq CN^{n}. \tag{5.5}\] Next, we have \[a(x)-\Pi(g,h)(x) =a(x)-hT^{*}(g)(x)+gT(h)(x)\] \[=a(x)\frac{T^{*}(g)(x_{0})-T^{*}(g)(x)}{T^{*}(g)(x_{0})}+gT(h)(x)\] \[=:\mathrm{I}_{1}(x)+\mathrm{I}_{2}(x).\] Obviously, \(\mathrm{I}_{1}(x)\) is supported on \(B_{0}\) and \(\mathrm{I}_{2}(x)\) is supported on \(B(y_{0},r)\). We first estimate \(\mathrm{I}_{1}(x)\). For \(x\in B_{0}\), we have \[|\mathrm{I}_{1}(x)| =\left|a(x)\frac{T^{*}(g)(x_{0})-T^{*}(g)(x)}{T^{*}(g)(x_{0})}\right|\] \[\leq C\|a\|_{L^{\infty}}N^{n}\Big{|}\int_{B(y_{0},r)}\frac{|x_{0} -x|^{\gamma}}{|z-x_{0}|^{n-\alpha+\gamma}}dz\Big{|}\] \[\leq CN^{n}r^{-n+\alpha}\frac{r^{n+\gamma}}{(Nr)^{n+\gamma}}\] \[\leq\frac{C}{N^{\gamma}r^{n-\alpha}}.\] For the term \(\mathrm{I}_{2}(x)\), it follows from the cancellation property of the atom \(a\) that \[|T(h)(x)| \leq\frac{1}{T^{*}(g)(x_{0})}\Big{|}\int_{B_{0}}\frac{|x_{0}-x|^{ \gamma}}{|z-x_{0}|^{n-\alpha+\gamma}}a(z)dz\Big{|}\] \[\leq CN^{n}\frac{r^{\gamma}}{(Nr)^{n-\alpha+\gamma}}\int_{B_{0}}| a(z)|dz\] \[\leq C\frac{\|a\|_{L^{\infty}}}{Nr^{n}}r^{n}\] \[\leq\frac{C}{N^{\gamma}r^{n-\alpha}},\] As a consequence, we have \[|\mathrm{I}_{2}(x)|\lesssim\frac{1}{Nr^{n-\alpha}}\chi_{B(y_{0},r)}.\] Combining the estimates of \(\mathrm{I}_{1}(x)\) and \(\mathrm{I}_{2}(x)\), we obtain that \[|a-\Pi(g,h)(x)|\lesssim\frac{1}{Nr^{n-\alpha}}\chi_{B(x_{0},r)}+\frac{1}{Nr^{n -\alpha}}\chi_{B(y_{0},r)}. \tag{5.6}\] In addition, we point out that \[\int_{\mathbb{R}^{n}}(a-\Pi(g,h))dx=0, \tag{5.7}\] because the atom \(a\) has cancellation property and the second integral equals \(0\) just by the definitions of \(\Pi\). Then the inequality (5.6) and the cancellation (5.7), together with Lemma 5.1 to the function \(F(x)=a-\Pi(g,h)(x)\), we obtain \[\|a-\Pi(g,h)\|_{H^{\rho}(\mathbb{R}^{n})}\leq C\frac{\log N}{N^{\gamma}}. \tag{5.8}\] Let \(N\) sufficiently large such that \[\frac{\log N}{N^{\gamma}}<\varepsilon. \tag{5.9}\] Therefore, we obtain (5.3). By applying (5.8) and (5.9) to \(a=a_{k}\) with \(k\geq 1\), we obtain that there exist \(\{g^{k}\}_{k\geq 1},\{h^{k}\}_{k\geq 1}\subset L^{\infty}_{c}(\mathbb{R}^{n})\) such that \[\|a_{k}-\Pi(g^{k},h^{k})\|_{H^{\rho}(\mathbb{R}^{n})}\leq C\varepsilon.\] It follows from (5.5) that \[\|g^{k}\|_{B^{q_{0}^{\prime},(1-\frac{q_{0}}{q})n}(\mathbb{R}^{n})}\|h^{k}\|_ {L^{1}(\mathbb{R}^{n})}\leq CN^{n}.\] Thus, \[\sum_{k\geq 1}|\lambda_{k}|\|g^{k}\|_{B^{q_{0}^{\prime},(1-\frac{q_{0}}{q})n} }\|h^{k}\|_{L^{1}(\mathbb{R}^{n})}\leq CN^{n}\|f\|_{H^{\rho}(\mathbb{R}^{n})}.\] This implies that \[\left\|f-\sum_{k\geq 1}\lambda_{k}\Pi(g^{k},h^{k})\right\|_{H^{ \rho}(\mathbb{R}^{n})} \leq\sum_{k\geq 1}|\lambda_{k}|\left\|a_{k}-\Pi(g^{k},h^{k}) \right\|_{H^{\rho}(\mathbb{R}^{n})}\] \[\leq C\varepsilon\sum_{k\geq 1}|\lambda_{k}|\leq C\varepsilon\|f \|_{H^{\rho}(\mathbb{R}^{n})}.\] This ends the proof of Lemma 5.3. **Theorem 5.4**.: _Suppose \(1<q_{0}<q<\infty\), \(\frac{n}{n+\gamma}<\rho<1\) with \(1-\frac{1}{q}-\frac{1}{\rho}+1=0\) and suppose that \(T\) ia a Calderon-Zygmund operator that is homogeneous. Then for any \(f\in H^{\rho}(\mathbb{R}^{n})\) there exist sequence \(\{\lambda_{s}^{k}\}\in\ell^{\rho}\) and functions \(h_{s}^{k}\in L_{c}^{\infty}(\mathbb{R}^{n})\), \(g_{s}^{k}\in L_{c}^{\infty}(\mathbb{R}^{n})\) such that_ \[f=\sum_{k=1}^{\infty}\sum_{s=1}^{\infty}\lambda_{s}^{k}\Pi(g_{s}^{k},h_{s}^{k}) \tag{5.10}\] _in the sense of \(H^{\rho}(\mathbb{R}^{n})\). Moreover_ \[\|f\|_{H^{\rho}(\mathbb{R}^{n})}\approx C\inf\bigg{\{}(\sum_{s=1}^{\infty}\sum _{k=1}^{\infty}|\lambda_{s}^{k}|^{\rho}\|g_{s}^{k}\|_{\mathcal{B}^{q_{0}^{ \prime},(1-\frac{q_{0}}{q})n}(\mathbb{R}^{n})}^{\rho}\|h_{s}^{k}\|_{L^{1}( \mathbb{R}^{n})}^{\rho}\bigg{)}^{\frac{1}{\rho}}\bigg{\}},\] _where the infimum above is taken over all possible representations of \(f\) that satisfy (5.10)._ Proof.: By Lemma 5.2, it is obvious that \[\|\Pi(g,h)\|_{H^{\rho}(\mathbb{R}^{n})}\lesssim\|g\|_{B^{q_{0}^{\prime},(1- \frac{q_{0}}{q})n}(\mathbb{R}^{n})}\|h\|_{L^{1}(\mathbb{R}^{n})}.\] It is immediate that for any representation of \(f\) as in (5.10), i.e., \[f=\sum_{k=1}^{\infty}\sum_{s=1}^{\infty}\lambda_{s}^{k}\Pi(g_{s}^{k},h_{s}^{k })(x),\] with \[\|f\|_{H^{\rho}(\mathbb{R}^{n})}\leq C\inf\{\sum_{s=1}^{\infty}\sum_{k=1}^{ \infty}|\lambda_{s}^{k}|^{\rho}\|g_{s}^{k}\|_{B^{q_{0}^{\prime},(1-\frac{q_{0} }{q})n}(\mathbb{R}^{n})}^{\rho}\|h_{s}^{k}\|_{L^{1}(\mathbb{R}^{n})}^{\rho}\}^ {\frac{1}{\rho}},\] where the infimum above is taken over all possible representations of \(f\) that satisfy (5.10). Next, utilizing the atomic decomposition, for any \(f\in H^{\rho}(\mathbb{R}^{n})\) we can find a sequence \(\{\lambda_{1}^{k}\}_{k\geq 1}\in\ell^{\rho}\) and sequence of \(H^{\rho}(\mathbb{R}^{n})\)-atom \(\{a_{1}^{k}\}_{k\geq 1}\) so that \(f=\sum_{k=1}^{\infty}\lambda_{1}^{k}a_{1}^{k}\) and \((\sum_{k=1}^{\infty}|\lambda_{1}^{k}|^{\rho})^{\frac{1}{\rho}}\leq C\|f\|_{H^ {\rho}(\mathbb{R}^{n})}\). Fix \(\varepsilon>0\) small enough such that \(C\varepsilon<1\). We apply Lemma 5.3 to each atom \(a_{1}^{k}\), then there exists \(\{g_{1}^{k}\}_{k\geq 1}\in L_{c}^{\infty}(\mathbb{R}^{n}),\{h_{1}^{k}\}_{k \geq 1}\in L_{c}^{\infty}(\mathbb{R}^{n})\) such that \[\begin{cases}(\sum_{k\geq 1}|\lambda_{1}^{k}|^{\rho}\|g_{1}^{k}\|_{B^{q_{0}^{ \prime},(1-\frac{q_{0}}{q})n}_{q}}^{\rho}\|h_{1}^{k}\|_{L^{1}}^{\rho})^{\frac{1 }{\rho}}\leq CN^{n}\|f\|_{H^{\rho}(\mathbb{R}^{n})};\\ \|f-\sum_{k\geq 1}\lambda_{k}\Pi(g_{1}^{k},h_{1}^{k})\|_{H^{\rho}(\mathbb{R}^{n})} \leq C\varepsilon\|f\|_{H^{\rho}(\mathbb{R}^{n})}.\end{cases}\] Let us set \[f_{1}=f-\sum_{k\geq 1}\lambda_{k}\Pi(g_{1}^{k},h_{1}^{k}).\] Since \(f_{1}\in H^{\rho}(\mathbb{R}^{n})\), then we can decompose \(f_{1}\) as follows: \[f_{1}=\sum_{k=1}^{\infty}\lambda_{2}^{k}a_{2}^{k},\] where \(\{\lambda_{2}^{k}\}_{k\geq 1}\in\ell^{\rho}\), and \(\{a_{2}^{k}\}_{k\geq 1}\) are atoms. By applying Lemma 5.3 to \(f_{1}\), there exists \(\{g_{2}^{k}\}_{k\geq 1}\in L_{c}^{\infty}(\mathbb{R}^{n}),\{h_{2}^{k}\}_{k \geq 1}\in L_{c}^{\infty}(\mathbb{R}^{n})\) such that \[\begin{cases}(\sum_{k\geq 1}|\lambda_{2}^{k}|^{\rho}\|g_{2}^{k}\|_{B^{q0^{ \prime},(1-\frac{q0}{q})_{n}}}^{\rho}\|h_{2}^{k}\|_{L^{1}}^{\rho})^{\frac{1}{ \rho}}\leq CN^{n}\|f_{1}\|_{H^{\rho}(\mathbb{R}^{n})}\leq C\varepsilon N^{n} \|f\|_{H^{\rho}(\mathbb{R}^{n})};\\ \|f_{1}-\sum_{k\geq 1}\lambda_{2}^{k}\Pi(g_{2}^{k},h_{2}^{k})\|_{H^{\rho}( \mathbb{R}^{n})}\leq C\varepsilon\|f_{1}\|_{H^{\rho}}\leq C\varepsilon^{2}\|f \|_{H^{\rho}(\mathbb{R}^{n})}.\end{cases}\] Similarly, we can repeat the above argument to \[f_{2} =f_{1}-\sum_{k\geq 1}\lambda_{2}^{k}\Pi(g_{2}^{k},h_{2}^{k})\] \[=f-\sum_{k\geq 1}\lambda_{k}\Pi(g_{1}^{k},h_{1}^{k})-\sum_{k \geq 1}\lambda_{2}^{k}\Pi(g_{2}^{k},h_{2}^{k}).\] In summary, we can construct a sequence \(\{\lambda_{s}^{k}\}_{k\geq 1}\in\ell^{\rho},\ \{g_{s}^{k}\}_{k\geq 1}\in L_{c}^{ \infty}(\mathbb{R}^{n}),\{h_{s}^{k}\}_{k\geq 1}\in L_{c}^{\infty}(\mathbb{R}^{n})\), such that \[\begin{cases}(\sum_{s=1}^{N}\sum_{k\geq 1}|\lambda_{s}^{k}|^{\rho}\|g_{s}^{k}\|_ {B^{q0^{\prime},(1-\frac{q0}{q})_{n}}}^{\rho}\|h_{s}^{k}\|_{L^{1}}^{\rho})^{ \frac{1}{\rho}}\leq CN^{n}\sum_{s=1}^{N}\varepsilon^{s-1}\|f\|_{H^{\rho}( \mathbb{R}^{n})};\\ f=\sum_{s=1}^{N}\sum_{k\geq 1}\lambda_{s}^{k}\Pi(g_{s}^{k},h_{s}^{k})+f_{N};\\ \|f_{N}\|_{H^{\rho}(\mathbb{R}^{n})}\leq C\varepsilon^{N}\|f\|_{H^{\rho}( \mathbb{R}^{n})}.\end{cases}\] Thus, the desired result follows as \(N\to\infty\). This puts an end to the proof of Theorem 5.4. As a direct application, we give the characterization of Lipschitz spaces via commutators of Calderon-Zygmund and fractional integral operator. **Theorem 5.5**.: _Let \(0<\alpha<1\). The commutator \([b,T]\) is bounded from \(L^{1}(\mathbb{R}^{n})\to L^{\frac{n}{n-\alpha},\infty}(\mathbb{R}^{n})\) if and only if \(b\in Lip_{\alpha}(\mathbb{R}^{n})\)._ Proof.: The upper bound in this theorem is obvious. For the lower bound, suppose that \(f\in H^{p}(\mathbb{R}^{n})\) with \(\frac{1}{p}-1=\frac{\alpha}{n}\), using the weak factorization in Theorem 5.4 we obtain \[\langle b,f\rangle_{L^{2}(\mathbb{R}^{n})} =\langle b,\sum_{k=1}^{\infty}\sum_{s=1}^{\infty}\lambda_{s}^{k}\Pi( g_{s}^{k},h_{s}^{k})\rangle_{L^{2}(\mathbb{R}^{n})}\] \[=\sum_{k=1}^{\infty}\sum_{s=1}^{\infty}\lambda_{s}^{k}\langle b, \Pi(g_{s}^{k},h_{s}^{k})\rangle_{L^{2}(\mathbb{R}^{n})}\] \[=\sum_{k=1}^{\infty}\sum_{s=1}^{\infty}\lambda_{s}^{k}\langle g_{s }^{k},[b,T](h_{s}^{k})\rangle_{L^{2}(\mathbb{R}^{n})}.\] By the boundedness of \([b,T]\) and the dual theorem between \(M_{q_{0}}^{q}(\mathbb{R}^{n})\) and \(B^{q_{0^{\prime}},(1-\frac{q_{0}}{q})n}(\mathbb{R}^{n})\), we get \[|\langle b,f\rangle_{L^{2}(\mathbb{R}^{n})}| \leq\sum_{k=1}^{\infty}\sum_{s=1}^{\infty}|\lambda_{s}^{k}|\|g_{ s}^{k}\|_{B^{q_{0^{\prime}},(1-\frac{q_{0}}{q})n}(\mathbb{R}^{n})}\|[b,T](h_{s}^{ k})\|_{M_{q_{0}}^{q}(\mathbb{R}^{n})}\] \[\leq C\sum_{k=1}^{\infty}\sum_{s=1}^{\infty}|\lambda_{s}^{k}|\|g _{s}^{k}\|_{B^{q_{0^{\prime}},(1-\frac{q_{0}}{q})n}(\mathbb{R}^{n})}\|[b,T](h_ {s}^{k})\|_{L^{q,\infty}(\mathbb{R}^{n})}\] \[\lesssim\|[b,T]:L^{1}(\mathbb{R}^{n})\to L^{q,\infty}(\mathbb{R}^{n})\|\] \[\qquad\times\sum_{k=1}^{\infty}\sum_{s=1}^{\infty}|\lambda_{s}^{k }|\|g_{s}^{k}\|_{B^{q_{0^{\prime}},(1-\frac{q_{0}}{q})n}(\mathbb{R}^{n})}\|h_ {s}^{k}\|_{L^{1}(\mathbb{R}^{n})}\] \[\lesssim\|[b,T]:L^{1}(\mathbb{R}^{n})\to L^{q,\infty}(\mathbb{R}^{n}) \|\|f\|_{H^{p}(\mathbb{R}^{n})}.\] By the duality between \(H^{p}(\mathbb{R}^{n})\) and \(Lip_{\alpha}(\mathbb{R}^{n})\) with \(\frac{1}{p}-1=\frac{\alpha}{n}\), we have that \[\|b\|_{Lip_{\beta}(\mathbb{R}^{n})}\approx\sup_{\|f\|_{H^{p}(\mathbb{R}^{n})} \leq 1}|\langle b,f\rangle_{L^{2}(\mathbb{R}^{n})}|\lesssim\|[b,T]:L^{1}( \mathbb{R}^{n})\to L^{q,\infty}(\mathbb{R}^{n})\|.\] Hence, we complete the proof of Theorem 5.5. Now, we turn to prove the Theorem 1.4. **Proof of Theorem 1.4. Necessity.** Assume that \(b\in\operatorname{CMO}_{\alpha}(\mathbb{R}^{n})\) and \(E\) be a bounded set in \(L^{1}(\mathbb{R}^{n}).\) It is enough to show that \([b,T](E)\) is relatively compact in \(L^{q,\infty}(\mathbb{R}^{n})\). Similar to the proof of Theorem 1.3, it suffices to demonstrate that \([b_{\epsilon},T](E)\) is relatively compact in \(L^{q,\infty}(\mathbb{R}^{n})\). Note that \([b_{\epsilon},T](f)(x)\leq\|b_{\epsilon}\|_{Lip_{\alpha}(\mathbb{R}^{n})}I_{ \alpha}(|f|)(x)\) for any \(x\in\mathbb{R}^{n}\), we obtain that \([b_{\epsilon},T](E)\) satisfies the condition (1) in Lemma 4.3. For the condition (2), suppose that \(\operatorname{supp}(b_{\epsilon})\subset B_{R_{\epsilon}}\) for some \(R_{\epsilon}>1\). Then, for any \(f\in E\) and \(x\in B_{R}^{c}\) with \(R>2R_{\epsilon}\), we get that \(b_{\epsilon}(x)T(f)(x)=0\) and \[|[b_{\epsilon},T](f)(x)|=|T(b_{\epsilon}f)(x)|\] \[\leq C_{0}\|b_{\epsilon}\|_{L^{\infty}(\mathbb{R}^{n})}\int_{|y|< R_{\epsilon}}\frac{|f(y)|}{|x-y|^{n}}dy.\] For \(x\in B_{R}^{c}\) and \(y\in B_{R_{\epsilon}},\) we get \(|x-y|\approx|x|\) and \[|[b_{\epsilon},T](f)(x)\chi_{B_{R}^{c}}(x)|\lesssim|x|^{-n}\chi_{B_{R}^{c}}(x) \|f\|_{L^{1}(\mathbb{R}^{n})}.\] It follows from \(L^{q}(\mathbb{R}^{n})\subset L^{q,\infty}(\mathbb{R}^{n})\) that \[\|[b_{\epsilon},T](f)(x)\chi_{B_{R}^{c}}(x)\|_{L^{q,\infty}( \mathbb{R}^{n})} \leq\|[b_{\epsilon},T](f)(x)\chi_{B_{R}^{c}}(x)\|_{L^{q}(\mathbb{ R}^{n})}\] \[\lesssim\|f\|_{L^{1}(\mathbb{R}^{n})}\bigg{(}\int_{|x|>R}|x|^{-qn }dx\bigg{)}^{1/q}\] \[\lesssim R^{-\frac{n}{q^{\prime}}}\|f\|_{L^{1}(\mathbb{R}^{n})}.\] This implies that \(\|[b_{\epsilon},T](f)(x)\chi_{B_{R}^{c}}(x)\|_{L^{q,\infty}(\mathbb{R}^{n})}\to 0,\) as \(R\to\infty.\) Next, we give the estimate for the condition (3). For every \(\delta>0,\) if \(|t|\) is sufficiently small(merely depending on \(\delta\)), we have \[[b_{\epsilon},T](f)(x+t)-[b_{\epsilon},T](f)(x)=J_{1}+J_{2}+J_{3}+J_{4},\] where the precise definitions of \(J_{i},i=1,2,3,4\) are given in the above section. The inequalities (4.12), (4.14), (4.16) and (4.18) show that \[\|[b_{\epsilon},T](f)(\cdot+t)-[b_{\epsilon},T](f)(\cdot)\|_{L^{q,\infty}( \mathbb{R}^{n})}\lesssim\delta^{-\alpha}|t|^{1+\alpha}+\delta^{-\alpha-\gamma }|t|^{\alpha+2\gamma}+\delta^{-1+\alpha}|t|^{1-\alpha}.\] If \(|t|\leq\delta^{2}<1,\) then \[\|[b_{\epsilon},T](f)(\cdot+t)-[b_{\epsilon},T](f)(\cdot)\|_{L^{q,\infty}( \mathbb{R}^{n})}\lesssim\delta^{\eta},\] where \(\eta=\min\{1-\alpha,\alpha+3\gamma\}\). Thus \([b,T]\) is a compact operator maps from \(L^{1}(\mathbb{R}^{n})\) inon \(L^{q,\infty}(\mathbb{R}^{n})\). Now, we are ready to demonstrate that \(b\in\operatorname{CMO}_{\alpha}(\mathbb{R}^{n})\). Seeking a contradiction, we assume that \(b\notin\operatorname{CMO}_{\alpha}(\mathbb{R}^{n})\). Therefore, \(b\) violates (1), (2) and (3) in Lemma 4.3. **Case 1.** If (1) does not hold true for the function \(b\), then there exists a sequence of balls \(\{B_{k}=B(x_{k},\delta_{k})\}_{k\geq 1}\) such that \(\delta_{k}\to 0\) as \(k\to\infty,\) and \[\frac{1}{|B_{k}|^{1+\alpha/n}}\int_{B_{k}}|b(x)-b_{B_{k}}|dx\geq c_{0}>0, \tag{5.11}\] for every \(k\geq 1.\) Without loss of generality, we assume that the sequence of \(\{\delta_{k}\}_{k\geq 1}\) such that \[C\delta_{k+1}\leq\delta_{k},\] for some \(C>1\) and all \(k\geq 1\). Define \(m_{b}(\Omega),\)the median value of function \(b\) on a bounded set \(\Omega\subset\mathbb{R}^{n},\) by \[\begin{cases}|\{x\in\Omega:b(x)>m_{b}(\Omega)\}|\leq\frac{1}{2}|\Omega|,\\ |\{x\in\Omega:b(x)<m_{b}(\Omega)\}|\leq\frac{1}{2}|\Omega|.\end{cases}\] Let \(y_{k}\in\mathbb{R}^{n}\) be such that \(|x_{k}-y_{k}|=M\delta_{k},M>10\) for any \(k\geq 1\). Set \[\tilde{B}_{k}=B(y_{k},\delta_{k}),\tilde{B}_{k,1}=\Big{\{}y\in\tilde{B}_{k}:b( y)\leq m_{b}(\tilde{B}_{k})\Big{\}},\tilde{B}_{k,2}=\Big{\{}y\in\tilde{B}_{k}:b(y) \geq m_{b}(\tilde{B}_{k})\Big{\}},\] and \[B_{k,1}=\Big{\{}x\in B_{k}:b(x)\geq m_{b}(\tilde{B}_{k})\Big{\}},B_{k,2}=\Big{\{}x \in B_{k}:b(x)\leq m_{b}(\tilde{B}_{k})\Big{\}}.\] Also we write \[F_{k,1}=\tilde{B}_{k,1}\backslash\bigcup_{j=k+1}^{\infty}\tilde{B}_{j,1},F_{k,2 }=\tilde{B}_{k,2}\backslash\bigcup_{j=k+1}^{\infty}\tilde{B}_{j,2}.\] Note that \(F_{k,1}\cap F_{j,1}=\emptyset\) for \(j\neq k\), and \[\delta_{k}^{n}\gtrsim|F_{k,1}|\geq|\tilde{B}_{k,1}|-\sum_{j=k+1}^{\infty}| \tilde{B}_{j}|\gtrsim\delta_{k}^{n}-\sum_{j=k+1}^{\infty}\delta_{j}^{n}\gtrsim (1-\frac{1}{C-1})\delta_{k}^{n}.\] Thus, \(|F_{k,1}|\thickapprox|\tilde{B}_{k}|\). By the same analogue above, we also have \[|F_{k,2}|\thickapprox|\tilde{B}_{k}|.\] From the construction, we have \[\Big{|}b(x)-m_{b}(\tilde{B})\Big{|}\leq|b(x)-b(y)|,\forall(x,y)\in B_{k,l} \times\tilde{B}_{k,l},l=1,2. \tag{5.12}\] Next, it follows from the triangle inequality and inequality (5.11) that \[\begin{split} c_{0}&\leq\frac{1}{|B_{k}|^{1+\alpha/ n}}\int_{B_{k}}|b(x)-b_{B_{k}}|dx\\ &\leq\frac{2}{|B_{k}|^{1+\alpha/n}}\int_{B_{k}}|b(x)-m_{b}( \tilde{B}_{k})|dx\\ &\leq\frac{2}{|B_{k}|^{1+\alpha/n}}\int_{B_{k,1}}|b(x)-m_{b}( \tilde{B}_{k})|dx\\ &\qquad+\frac{2}{|B_{k}|^{1+\alpha/n}}\int_{B_{k,2}}|b(x)-m_{b}( \tilde{B}_{k})|dx\\ &=:M_{1}+M_{2}.\end{split} \tag{5.13}\] Then for any \(k\geq 1\), \(M_{1}\geq\frac{c_{0}}{2}\) or \(M_{2}\geq\frac{c_{0}}{2}\). Thus, one can assume without loss of generality that \(M_{1}\geq\frac{c_{0}}{2}\). Let \(\phi_{k}(x)=|B_{k}|^{-1}\big{(}\chi_{F_{k,1}}(x)-\frac{|F_{k,1}|}{|\tilde{B}_{ k}|}\big{)}\chi_{\tilde{B}_{k}}(x)\). It is easy to check that \(\phi_{k}\) satisfies \[\text{supp}\phi_{k}\subset\tilde{B}_{k},\int_{\mathbb{R}^{n}}\phi_{k}(x)dx=0\] and \(\|\phi_{k}\|_{L^{1}}\lesssim 1\). Moreover, \(\phi_{k}\in H^{\frac{1}{2}}\) with \(\|\phi_{k}\|_{H^{1/2}}\lesssim|B_{k}|\). By the homogenous of \(T\), we know that for any \(k\geq 1\) and \(x\in B_{k}\), one has \[\frac{1}{M^{n}}\lesssim|T(\phi_{k})(x)|.\] Furthermore, \(T(\phi_{k})(x)\) is a constant sign in \(\tilde{B}_{k}\). It follows from the inequalities (5.12) and (5.13) that \[\begin{split}\frac{c_{0}}{2M}&\leq\frac{1}{M|B_{k}|^ {1+\alpha/n}}\int_{B_{k,1}}|b(x)-m_{b}(\tilde{B}_{k})|dx\\ &\lesssim\frac{1}{|B_{k}|^{\alpha/n}}\int_{B_{k,1}}|b(x)-m_{b}( \tilde{B}_{k})||T(\phi_{k})(x)|dx\\ &=\frac{1}{|B_{k}|^{\alpha/n}}\int_{B_{k,1}}\Big{|}\int_{\mathbb{ R}^{n}}(b(x)-m_{b}(\tilde{B}_{k}))K(x,y)\phi_{k}(y)dy\Big{|}dx\\ &\leq\frac{1}{|B_{k}|^{\alpha/n}}\int_{B_{k,1}}\Big{|}\int_{ \mathbb{R}^{n}}(b(x)-b(y))K(x,y)\phi_{k}(y)dy\Big{|}dx\\ &=\frac{1}{|B_{k}|^{\alpha/n}}\int_{B_{k,1}}\Big{|}[b,T](\phi_{k })(x)\Big{|}dx.\end{split} \tag{5.14}\] By Komogrove inequality and \(1=1/q+\alpha/n\), we have \[\frac{c_{0}}{2M}\lesssim\|[b,T](\chi_{F_{k,1}})\|_{L^{q,\infty}}.\] The inequality (5.14) and the boundedness of \([b,T]\) from \(L^{1}(\mathbb{R}^{n})\) to \(L^{q,\infty}(\mathbb{R}^{n})\) give us that \[\|[b,T](\phi_{k})\|_{L^{q,\infty}}\approx 1. \tag{5.15}\] On the other hand, since \(\big{|}[b,T](f)(x)\big{|}\lesssim I_{\alpha}(|f|)(x)\) and \(I_{\alpha}\) maps from \(H^{\frac{1}{2}}(\mathbb{R}^{n})\) into \(L^{\frac{n}{2n-\alpha}}(\mathbb{R}^{n})\), we have \[\begin{split}\|[b,T](\phi_{k})\|_{L^{\frac{n}{2n-\alpha}}}& \lesssim\|b\|_{Lip_{\alpha}(\mathbb{R}^{n})}\|\phi_{k}\|_{H^{1/2}} \\ &\lesssim\|b\|_{Lip_{\alpha}(\mathbb{R}^{n})}\delta_{k}^{n}. \end{split}\] Thus, \([b,T](\phi_{k})\to 0\) in \(L^{\frac{n}{2n-\alpha}}\), as \(k\to\infty\). This contradicts (5.15). In other words, \(b\) must satisfy (1). Similarly, we also obtain the desired result if \(M_{1}\geq c_{0}/2\) holds true. In conclusion, \(b\) cannot violate (1). **Case 2.** Assume that \(b\) violates (2). The same argument as in the proof of **Case 1** by considering \(R_{k}\) in place of \(\delta_{k}\), with \(R_{k}\to\infty\). By repeating the above proof for \(R_{k}\) in place of \(\delta_{k}\), we also obtain (5.15). For any \(p>1\), let \(\tilde{q}\) with \(1/\tilde{q}=1/p-\alpha/n\). Since \([b,T]\) maps \(L^{p}\) to \(L^{\tilde{q}}\), then we have \[\begin{split}\|[b,T](\phi_{k})\|_{L^{\tilde{q}}}& \lesssim\|b\|_{Lip_{\alpha}(\mathbb{R}^{n})}\|\phi_{k}\|_{L^{p}}\\ &\lesssim\|b\|_{Lip_{\alpha}(\mathbb{R}^{n})}\delta_{k}^{-\frac{n }{p^{\prime}}}.\end{split}\] Thus, \([b,T](\chi_{k})\to 0\) in \(L^{\tilde{q}}\), when \(k\to\infty\). As a result, \(b\) satisfies (2). **Case 3.** The proof of this case is similar to the one of **Case 2**. Thus, we leave it to the reader. From the above cases, we conclude that \(b\in\text{CMO}_{\alpha}(\mathbb{R}^{n})\) ## 6. Appendix Now, we show that commutators of singular integrals with \(\operatorname{CMO}(\mathbb{R}^{n})\) functions are not compact in \(L^{1,\infty}(\mathbb{R}^{n})\). **Proposition 6.1**.: _There exists a function \(b\in\operatorname{CMO}(\mathbb{R}^{n})\) such that \([b,T]\) is not a compact operator from \(L\log L(\mathbb{R}^{n})\) to \(L^{1,\infty}(\mathbb{R}^{n})\)._ Proof.: Without loss of generality, we only deal with \(n=1\) and \(T=H\), where \(H\) is the Hilbert transform \[H(f)(x)=p.v.\int_{\mathbb{R}}\frac{f(y)}{x-y}dy.\] Let \(f=-\chi_{(-1,1)}\) and \(b\in C_{c}^{\infty}(\mathbb{R})\) such that \[b(x)=\left\{\begin{array}{ll}0,&|x|>2,\\ 1,&|x|<1.\end{array}\right.\] Then \(f\in L\log L(\mathbb{R})\) and \(b\in\operatorname{CMO}(\mathbb{R})\). For any \(|x|>2\), we have \[\begin{split}[b,H][b,H](f)(x)&=\int_{\mathbb{R}}\frac{b(x)- b(y)}{x-y}f(y)dy=\int_{\mathbb{R}}\frac{0-1}{x-y}f(y)dy\\ &=\int_{-1}^{1}\frac{1}{x-y}dy=\log\big{(}1+\frac{2}{x-1}\big{)}. \end{split} \tag{6.1}\] By (6.1), we conclude that for any \(A>2\) and \(0<\lambda<\min\{\log\big{(}1+\frac{2}{A-1}\big{)},-\log\big{(}1-\frac{2}{A+1} \big{)}\}\), \[\begin{split}&\Big{|}\{x\in\mathbb{R}:|[b,H](f)(x)\chi_{E_{A}}(x )|>\lambda\}\Big{|}\\ &=\Big{|}\{x\in\mathbb{R}:|\log\big{(}1+\frac{2}{x-1}\big{)}|\chi _{E_{A}}(x)>\lambda\}\Big{|}\\ &=\Big{|}\{x>A:\log\big{(}1+\frac{2}{x-1}\big{)}>\lambda\}\Big{|} \\ &\qquad+\big{|}\{x<-A:-\log\big{(}1+\frac{2}{x-1}\big{)}>\lambda\} \Big{|}.\end{split} \tag{6.2}\] By a direct computation, we obtain \[\begin{split}&\Big{|}\{x>A:\log\big{(}1+\frac{2}{x-1}\big{)}> \lambda\}\Big{|}\\ &=\Big{|}\{x\in\mathbb{R}:A<x<\frac{2}{e^{\lambda}-1}+1\}\Big{|} \\ &=\frac{2}{e^{\lambda}-1}+1-A.\end{split} \tag{6.3}\] and \[\begin{split}&\Big{|}\{x<-A:-\log\big{(}1+\frac{2}{x-1}\big{)}> \lambda\}\Big{|}\\ &=\Big{|}\{x\in\mathbb{R}:\frac{2}{e^{-\lambda}-1}+1<x<-A\}\Big{|} \\ &=-A-\frac{2}{e^{-\lambda}-1}-1.\end{split} \tag{6.4}\] The inequalities (6.2)-(6.4) imply that \[\begin{split}\|[b,H](f)\chi_{E_{A}}\|_{L^{1,\infty}(\mathbb{R}^{ n})}&=\sup_{\lambda>0}\lambda\Big{|}\{x\in\mathbb{R}:|[b,H](f)(x) \chi_{E_{A}}(x)|>\lambda\}\Big{|}\\ &=\sup_{\lambda>0}\lambda\Big{(}\frac{2}{e^{\lambda}-1}-\frac{2}{ e^{-\lambda}-1}-2A\Big{)}\\ &=\lim_{\lambda\to 0^{+}}\lambda\Big{(}\frac{2}{e^{\lambda}-1}- \frac{2}{e^{-\lambda}-1}-2A\Big{)}\\ &=4\neq 0,\end{split}\] By the fact that the functions \(\frac{\lambda}{e^{\lambda}-1}\) and \(-\frac{\lambda}{e^{-\lambda}-1}\) are monotonically decreasing, and \[\lim_{\lambda\to 0^{+}}\frac{\lambda}{e^{\lambda}-1}=\lim_{\lambda\to 0^{+}} \frac{1}{e^{\lambda}}=1,\lim_{\lambda\to 0^{+}}\frac{\lambda}{e^{-\lambda}-1}=-1,\] we conclude that for any \(A>2\), \[\|[b,H](f)\chi_{E_{A}}\|_{L^{1,\infty}(\mathbb{R})}=4,\] then \([b,H](f)\) does not satisfy the condition (1.3) in Lemma 1.1. Thus, \([b,H](f)\) is not a compact operator from \(L\log L(\mathbb{R})\) to \(L^{1,\infty}(\mathbb{R})\). **Proposition 6.2**.: _There exists a function \(b\in\operatorname{CMO}(\mathbb{R}^{n})\) such that \([b,T]\) is not a compact operator from \(H^{1}(\mathbb{R}^{n})\) to \(L^{1,\infty}(\mathbb{R}^{n})\)._ Proof.: Without loss of generality, we only deal with \(n=1\) and \(T=H\). Let \(f(x)=-x\big{(}\chi_{(-1,-\frac{1}{2})}(x)+\chi_{(\frac{1}{2},1)}(x)\big{)}\) and \(b\in C_{c}^{\infty}(\mathbb{R})\) such that \[b(x)=\left\{\begin{array}{ll}0,&|x|>2,\\ x^{-1},&\frac{1}{2}<|x|<1,\\ 0,&|x|<\frac{1}{4}.\end{array}\right.\] Then \(f\in H^{1}(\mathbb{R})\backslash H^{1}_{b}(\mathbb{R})\) and \(b\in\operatorname{CMO}(\mathbb{R})\). For any \(|x|>2\), we have \[[b,H][b,H](f)(x) =\int_{\mathbb{R}}\frac{b(x)-b(y)}{x-y}f(y)dy=\int_{\mathbb{R}} \frac{0-1/y}{x-y}f(y)dy\] \[=\int_{-1}^{-1/2}\frac{1}{x-y}dy+\int_{1/2}^{1}\frac{1}{x-y}dy\] \[=\log\big{(}1+\frac{2x}{2x^{2}-x-1}\big{)}.\] It is easy to see that \(\frac{t}{2}<\log(1+t)\) for any \(t>1\). Therefore, \[\log\big{(}1+\frac{2x}{2x^{2}-x-1}\big{)}>\frac{x}{2x^{2}-x-1}>\frac{1}{x},\] when \(x>2\). It implies that for any \(A>2\), \[\|[b,H](f)\chi_{E_{A}}\|_{L^{1,\infty}(\mathbb{R}^{n})} >\|\frac{1}{(\cdot)}\chi_{E_{A^{+}}}\|_{L^{1,\infty}(\mathbb{R}^{ n})}\] \[=\sup_{\lambda>0}\lambda\Big{|}\{x\in\mathbb{R}:A<x<\frac{1}{ \lambda}\}\Big{|}\] \[=1,\] where \(A^{+}:=\{x\in\mathbb{R}:x>A\}\). So, the condition (1.3) in Lemma 1.1 is not satisfied. We conclude that \([b,H](f)\) is not a compact operator from \(H^{1}(\mathbb{R})\) to \(L^{1,\infty}(\mathbb{R})\). In the case of classical Banach function spaces, the Minkowski-type inequality follows by the application of associate space; see [2, Lemma 4.2]. Now, we would like to give a proof for the Minkowski-type inequality for weak Lebesgue spaces,. **Proposition 6.3**.: _(Minkowski-type inequality.) Let \(1<p<\infty\). Suppose that \(f\) is a nonnegative measurable function on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\) with \(\int_{\mathbb{R}^{n}}\|f(x,\cdot)\|_{L^{p,\infty}(\mathbb{R}^{n})}dx<\infty\). Then_ \[\Big{\|}\int_{\mathbb{R}^{n}}|f(x,\cdot)|dx\Big{\|}_{L^{p,\infty}(\mathbb{R}^ {n})}\leq C\int_{\mathbb{R}^{n}}\|f(x,\cdot)\|_{L^{p,\infty}(\mathbb{R}^{n})}dx.\] Proof.: We first verify that for any bounded set \(E\subset\mathbb{R}^{n}\) and \(1<q<p<\infty\), \[\begin{split}\bigg{(}\int_{E}\bigg{(}\int_{\mathbb{R}^{n}}|f(x,y )|dx\bigg{)}^{q}dy\bigg{)}^{1/q}&\leq\int_{\mathbb{R}^{n}}\bigg{(} \int_{E}|f(x,y)|^{q}dy\bigg{)}^{1/q}dx\\ &\lesssim|E|^{1/q-1/p}\int_{\mathbb{R}^{n}}\|f(x,\cdot)\|_{L^{p, \infty}(\mathbb{R}^{n})}dx.\end{split} \tag{6.5}\] Let \(x\in\mathbb{R}^{n}\) be a fixed point. For any \(\lambda>0\), we have \[\lambda|\{y\in\mathbb{R}^{n}:|f(x,y)|>\lambda\}|^{1/p}\leq\|f(x,\cdot)\|_{L^{ p,\infty}(\mathbb{R}^{n})}.\] Choose \[N(x)=\|f(x,\cdot)\|_{L^{p,\infty}(\mathbb{R}^{n})}\big{(}\frac{q}{p-q}\big{)}^{1/ p}|E|^{-1/p},\] For \(1<q<p<\infty\), we conclude that \[\int_{E}|f(x,y)|^{q}dy =q\int_{0}^{\infty}\lambda^{q-1}\Big{|}\big{\{}y\in E:|f(x,y)|> \lambda\big{\}}\Big{|}d\lambda\] \[\leq q\int_{0}^{N(x)}\lambda^{q-1}|E|d\lambda+q\int_{N(x)}^{ \infty}\lambda^{q-1}\frac{\|f(x,\cdot)\|_{L^{p,\infty}(\mathbb{R}^{n})}^{p}}{ \lambda^{p}}d\lambda\] \[=N(x)^{q}|E|+\frac{q}{p-q}N(x)^{q-p}\|f(x,\cdot)\|_{L^{p,\infty}( \mathbb{R}^{n})}^{p}.\] It shows that \[\bigg{(}\int_{E}|f(x,y)|^{q}dy\bigg{)}^{1/q}\leq 2\Big{(}\frac{q}{p-q}\Big{)}^ {1/p}\|f(x,\cdot)\|_{L^{p,\infty}(\mathbb{R}^{n})}^{p}|E|^{-1/p+1/q}.\] Therefore, the proof of the inequality (6.5) is completed. Now we return to our proof. For any \(\lambda>0\), take \(E=\{y\in\mathbb{R}^{n}:\int_{\mathbb{R}^{n}}|f(x,y)|dx>\lambda\}\), then \(|E|<\infty.\) In fact, if \(|E|=\infty\), there is a sequence \(\{E_{k}\}\) of measurable sets such that \(E_{k}\subset E\) and \(|E_{k}|=k\) for \(k=0,1,2,\cdots.\) Thus for every \(k\), by (6.5), we get \[\lambda^{q}k=\lambda^{q}|E_{k}| \leq\int_{E_{k}}\bigg{(}\int_{\mathbb{R}^{n}}|f(x,y)|dx\bigg{)}^{ q}dy\] \[\leq C|E_{k}|^{1-q/p}\bigg{(}\int_{\mathbb{R}^{n}}\|f(x,\cdot)\|_ {L^{p,\infty}(\mathbb{R}^{n})}dx\bigg{)}^{q}\] \[\leq Ck^{1-q/p}\bigg{(}\int_{\mathbb{R}^{n}}\|f(x,\cdot)\|_{L^{p, \infty}(\mathbb{R}^{n})}dx\bigg{)}^{q}.\] However, it is not true. It follows from the inequality (6.5) that \[\lambda^{q}|E| \leq\int_{E}\bigg{(}\int_{\mathbb{R}^{n}}|f(x,y)|dx\bigg{)}^{q}dy\] \[\leq C|E|^{1-q/p}\bigg{(}\int_{\mathbb{R}^{n}}\|f(x,\cdot)\|_{L^ {p,\infty}(\mathbb{R}^{n})}dx\bigg{)}^{q}.\] Therefore, \[\lambda|E|^{1/p}\leq C\int_{\mathbb{R}^{n}}\|f(x,\cdot)\|_{L^{p,\infty}( \mathbb{R}^{n})}dx,\] then \[\bigg{\|}\int_{\mathbb{R}^{n}}|f(x,\cdot)|dx\bigg{\|}_{L^{p,\infty}(\mathbb{ R}^{n})}\leq C\int_{\mathbb{R}^{n}}\|f(x,\cdot)\|_{L^{p,\infty}(\mathbb{R}^{n})}dx.\] Therefore, this proof has been completed. ### Acknowledgements The authors extend heartfelt gratitude to Professor C. Perez for introducing the problem pertaining to the endpoint theory for the boundedness of commutators. ## Conflict of interests The authors declare that they have no conflict of interest. ### Data availability Data sharing not applicable to this article as no data sets were generated or analysed during the current study. ### Declarations Conflict of interest Conflict of interest no potential conflict of interests was reported by the authors.
2309.04336
Continuum asymptotics for tree growth models
We classify the forward dynamics of all (plane) tree-valued Markov chains $(T_n,n \geq 1)$ with uniform backward dynamics. Every such Markov chain is classified by a decorated planar real tree. We also show that under an inhomogeneous rescaling after trimming leaves $(T_n, n\geq 1)$ converges to a random real tree in the Gromov--Prokhorov metric. This generalises and sheds some new light on work by Evans, Gr\"ubel and Wakolbinger (2017) on the binary special case.
David Geldbach
2023-09-08T14:04:08Z
http://arxiv.org/abs/2309.04336v1
# Continuum asymptotics for tree growth models ###### Abstract We classify the forward dynamics of all (plane) tree-valued Markov chains \((T_{n},n\geq 1)\) with uniform backward dynamics. Every such Markov chain is classified by a decorated planar real tree. We also show that under an inhomogeneous rescaling after trimming leaves \((T_{n},n\geq 1)\) converges to a random real tree in the Gromov-Prokhorov metric. This generalises and sheds some new light on work by Evans, Grubel and Wakolbinger (2017) on the binary special case. ## 1 Introduction We study tree-valued Markov chains \((T_{n},n\geq 1)\) where \(T_{n}\) is a plane tree with \(n\) leaves, a root \(r_{n}\) and no vertices with degree \(2\). We consider a broad class of such tree-valued Markov chains. We do not make any assumptions of the forward dynamics, we only assume that \((T_{n},n\geq 1)\) has _uniform backward dynamics_, see also Figure 1 for an illustration. **Definition 1.1**.: The Markov chain \((T_{n},n\geq 1)\) is said to have _uniform backward dynamics_ if for all \(n\geq 2\) the following procedure yields a tree with the same distribution as \(T_{n-1}\). 1. Choose a leaf uniformly of \(T_{n}\), 2. remove this leaf and its associated edge, 3. remove any vertex with degree \(2\) and replace it and its two edges by a single edge. The goal of this article is to classify all tree-valued Markov chains with uniform backward dynamics. We want to understand the different possible forward dynamics. We use the classification of forward dynamics to show a scaling limit. Another viewpoint is that we characterise the _Doob-Martin_ boundary of these Markov chains. This is the goal of Evans, Grubel and Wakolbinger [8], they characterise the Doob-Martin boundary of Remy's tree growth chain which corresponds to the special case of binary trees. We comment on the relation to their work in Remark 1.11. One well-studied one-parameter family of Markov chains that has uniform backward dynamics is Marchal's tree growth \((T_{n}^{M}(\alpha),n\geq 1)\) with \(\alpha\in(1,2]\). We construct \(T_{n+1}^{M}(\alpha)\) recursively from \(T_{n}^{M}(\alpha)\): Figure 1: An example for the uniform backward dynamics. The red circle indicates which leaf has been uniformly chosen. 1. Assign weight \(\alpha-1\) to each edge of \(T_{n}^{M}(\alpha)\) and weight \(k-1-\alpha\) to each branchpoint of \(T_{n}^{M}(\alpha)\) with degree \(k\geq 3\). Choose an edge or a branchpoint according to these weights. 2. If an edge \(e\) has been chosen, split it into two edges \(e_{1},e_{2}\) and attach a new leaf to the new vertex. 3. If a branchpoint \(v\) has been chosen, attach a new leaf to this branchpoint. 4. The new planar order of \(T_{n+1}^{M}(\alpha)\) is chosen uniformly at random, consistently with the planar order of \(T_{n}^{M}(\alpha)\). This has been introduced by Marchal [19] and generalises Remy's tree growth [20] which only grows binary trees, here \(\alpha=2\). The fact that the backward dynamics of Marchal's tree growth are uniform goes back to Haas, Miermont, Pitman and Winkel [15]. The uniform backward dynamics have an interesting consequence if we want to condition the Markov chain: let \(T\) be a tree such that the event \(\{T_{n+m}=T\}\) has positive probability for \(n,m\geq 1\). Consider a conditional distribution of the form \[\mathbb{P}\big{(}T_{m}\in\cdot\big{|}T_{n+m}=T\big{)}. \tag{1.1}\] We then have a very explicit way of describing the distribution of \(T_{m}^{M}\): we just need to choose \(n-m\) distinct uniform leaves of \(T\), remove them iteratively and any vertices of degree \(2\) that turn up in the process of removing leaves. There is another interesting property of Marchal's tree growth chain. Let \(d_{n}^{gr}\) be the graph metric on \(T_{n}^{M}(\alpha)\) and \(d_{n}=n^{-1+1/\alpha}d_{n}^{gr}\), a rescaled metric where every edge has length \(n^{-1+1/\alpha}\). Consider \((T_{n}^{M}(\alpha),d_{n})\) as a random metric space, then there exists a random metric space \((\mathcal{T}_{\alpha},d_{\alpha})\) such that \[(T_{n}^{M}(\alpha),d_{n})\stackrel{{ a.s.}}{{\longrightarrow}}( \mathcal{T}_{\alpha},d_{\alpha}), \tag{1.2}\] in the Gromov-Hausdorff-Prokhorov topology, for details we refer to the literature. The statement in this form goes back to Curien and Haas [4, Theorem 5]; related statements are [15, Corollary 24], [19, Theorem 3.2] and [5, Theorem 3.3.3]. The metric space \((\mathcal{T}_{\alpha},d_{\alpha})\) is a (random) real tree, we require these to be rooted. **Definition 1.2** (Real trees).: A real tree (\(\mathbb{R}\)-tree) is a complete, separable metric space \((\mathbf{T},d_{\mathbf{T}})\) with the property that for each \(x,y\in\mathbf{T}\), there is a unique non-self-intersecting path from \(x\) to \(y\), denoted by \([x,y]_{\mathbf{T}}\). This path is isometric to a closed real interval. We require \(\mathbf{T}\) to have a marked point \(r\in\mathbf{T}\) which we call its root. A prominent example of a random real tree is the Brownian continuum random tree introduced by Aldous [1, 2]. This was later generalised by Duquesne and Le Gall [5] to a family of stable trees \((\mathcal{T}_{\alpha},1<\alpha\leq 2)\) Figure 2: A realisation of Marchal’s tree growth after 25000 growth steps for different values for \(\alpha=2\) (left) and \(\alpha=1.4\) (right). These trees approximate the Brownian continuum random tree (left) and the \(1.4\)–stable tree (right). with \(\mathcal{T}_{2}\) being the Brownian continuum random tree. For an introduction to real trees, see for example the book by Evans [7]. Often we leave the metric \(d_{\mathbf{T}}\) implicit. Further, if the tree \(\mathbf{T}\) is clear from the context, we write \(d\) instead of \(d_{\mathbf{T}}\). Similarly, we often write \([x,y]\) instead of \([x,y]_{\mathbf{T}}\) for the path between two points. We also want to make precise what we mean when we speak of subtrees. **Definition 1.3** (Real subtrees).: Let \((\mathbf{T},r)\) be a real tree. Given \(x\in\mathbf{T}\), we define two notions of subtrees: 1. The _fringe subtree_ rooted at \(x\) is \[F_{\mathbf{T}}(x)=\big{\{}y\in\mathbf{T}:x\in[r,y]_{\mathbf{T}}\big{\}}.\] 2. The _subtrees of_\(x\) are the connected components of \(F_{\mathbf{T}}(x)\backslash\{x\}\). To each of them, we add \(x\) and root them at \(x\). Note that if \(x\neq r\), then \(r\notin F_{\mathbf{T}}(x)\) and \(r\) is not contained in any subtree of \(x\). We also consider a special class of real trees, so called interval partition trees (IP-trees), introduced by Forman [9]. There, distances in the real tree relate to masses of a probability measure \(\mu\). When we want to stress the fact that if a real tree \(\mathbf{T}\) has an associated probability measure \(\mu\) defined on the Borel \(\sigma\)-algebra, we speak of a _weighted real tree_. We denote by \(\mathrm{supp}(\mu)\) the closed support of \(\mu\). **Definition 1.4** (IP-tree).: A rooted, weighted real tree \((\mathbf{T},d,r,\mu)\) is an _interval-partition tree_ if it possesses the following properties. 1. _Spanning._ Every leaf is in the support of \(\mu\), i.e. \(\mathbf{T}=\mathrm{span}(\mathrm{supp}(\mu))\). 2. _Spacing._ For \(x\in\mathbf{T}\), if \(x\) is either a branch point or lies in the support of \(\mu\), then \[d(r,x)+\mu(F_{\mathbf{T}}(x))=1.\] (1.3) **Remark 1.5**.: The name originates from a so-called bead crushing construction of IP-trees, see Forman [9]. There, they consider a leaf \(x\in\mathbf{T}\) and project \(\mu\) onto the interval \([r,x]\) - this gives rise to an interval partition. The masses of the blocks are given by \(\mu(F_{\mathbf{T}}(y))-\mu(\mathbf{S}_{y})\) for \(y\in[r,x]\) and where \(\mathbf{S}_{y}\) is the subtree of \(y\) containing \(x\). We will not use these interval partitions. To construct forward dynamics of \((T_{n},n\geq 1)\), we need to endow real trees with additional structure. In particular we introduce a notion of planarity for real trees as well as decorating functions. For a real tree \((\mathbf{T},d,r)\) we call a family of maps \(\psi=(\psi_{n},n\geq 2)\) a _planar order_ for \(\mathbf{T}\) if \(\psi_{n}\) maps a tuple \((x_{1},\ldots,x_{n})\in\mathbf{T}^{n}\) to a combinatorial, partially labelled, plane tree consistently when going from \(\psi_{n}\) to \(\psi_{n+1}\). We specify the details of this definition and the consistency relations of \(\psi\) in Definition 2.2. Most importantly, we require \(\psi_{n}(x_{1},\ldots,x_{n})\) to be the combinatorial tree corresponding to \(\mathrm{span}(x_{1},\ldots,x_{n})\) as non-plane trees for every \((x_{1},\ldots,x_{n})\in\mathbf{T}^{n}\). If \(\mathbf{S}_{1}\) and \(\mathbf{S}_{2}\) are both subtrees of \(x\in\mathbf{T}\), we say that \(\mathbf{S}_{1}\) is to the left of \(\mathbf{S}_{2}\) if for two arbitrary points \(y_{1}\in\mathbf{S}_{1}\backslash\{x\},y_{2}\in\mathbf{S}_{2}\backslash\{x\}\) the image of \(y_{1}\) is to the left of \(y_{2}\) in \(\psi_{2}(y_{1},y_{2})\) - see Definition 2.5. Next, assume that \((\mathbf{T},d,r)\) is equipped with a probability measure \(\mu\). Decompose \(\mu=\mu_{atoms}+\mu_{s}+\mu_{\ell}\) where \(\mu_{atoms}\) is supported on the atoms of \(\mu\), \(\mu_{s}\) is supported diffusely on \(\mathbf{T}\backslash\{\text{leaves}\}\) and \(\mu_{\ell}\) is supported diffusely on the leaves of \(\mathbf{T}\). We can choose the supports of \(\mu_{atoms},\mu_{s},\mu_{\ell}\) to be disjoint. Further, we can choose \(supp(\mu_{s})\) in such a way that for every \(x\in\ supp(\mu_{s})\) we have \(\deg(x)=2\). We then call a measurable function \(\lambda:\mathbf{T}\rightarrow[0,1]\) a _branch weight function_ when viewed as an element of \(L^{1}(\mu_{s})\). We call \(B\) a _branchpoint weight function_ if \(B\) maps every element \(a\) of \(supp\ \mu_{atoms}\) to a function \(\beta_{a}:[0,1]\rightarrow[0,1]\) - note that \(supp\ \mu_{atoms}\) is an at most countable set. For each \(a\in\ supp(\mu_{atoms})\), we require that \(\beta_{a}\) is non-decreasing, right-continuous and the cardinality of the range of \(\beta_{a}\) is at most \(\deg a\). The degree \(\deg a\) is defined as the number of connected components of \(\mathbf{T}\backslash\{a\}\). Also, we want \(\beta_{a}\) to be piece-wise constant in the following sense. First, enumerate the connected components of \(F_{\mathbf{T}}(a)\backslash\{a\}\) by \(\mathbf{S}_{1},\mathbf{S}_{2},\ldots\): we require that \(\mu(\mathbf{S}_{1})\geq\mu(\mathbf{S}_{2})\geq\ldots\) - if two subtrees have the same mass, then we enumerate them in such a way that for \(i<j\), \(\mathbf{S}_{i}\) is to the left of \(\mathbf{S}_{j}\). We implicitly include the case that there are only finitely many subtrees. Secondly, let \(c_{i}=\sum\mu(\mathbf{S}_{j})/\sum_{k\geq 1}\mu(\mathbf{S}_{k})\) where the first sum ranges over all \(j\) such that \(\mathbf{S}_{j}\) is left of \(\mathbf{S}_{i}\) and \(\mathbf{S}_{j}\neq\mathbf{S}_{i}\). We then impose that \(\beta_{a}\) is constant on \([c_{i},\inf_{c_{j}>c_{i}}c_{j})\) for every \(i\). **Definition 1.6**.: We call a collection of objects \((\mathbf{T},d,r,\mu,\psi,\lambda,B)\) with the above properties a _decorated planar real tree_. If \((\mathbf{T},d,r,\mu)\) is an IP-tree, we speak of decorated planar IP-trees. Furthermore, we call \((\mathbf{T},d,r,\psi)\) a _planar real tree_. This notion is discussed in detail in Section 2.1. With these definitions in hand, we can construct the forward dynamics for the tree growth process \((T_{n},n\geq 1)\). See Figure 3 for an illustration. **Construction 1.7**.: Assume we are given a decorated planar real tree \((\mathbf{T},d,r,\mu,\psi,\lambda,B)\). To construct a plane tree \(T_{n}\), we proceed as follows: 1. Sample \(\xi_{1},\ldots,\xi_{n}\in\mathbf{T}\)\(i.i.d.\) from \(\mu\). 2. Consider the associated plane, partially labelled tree \(S_{n}=\psi_{n}(\xi_{1},\ldots,\xi_{n})\). By the properties imposed on \(\psi_{n}\), \(S_{n}\) will contain \(n\) vertices labelled by \([n]\), including all leaves. 3. For every vertex in \(S_{n}\) that is labelled but not a leaf, we attach a new leaf to it and move the label to the new leaf. Call the combinatorial tree obtained this way \(T_{n}^{*}\). We determine the planar order of the new leaves as follows. Suppose we have attached in \(T_{n}\) a leaf labelled \(i\) to the vertex corresponding to \(\xi_{i}\) in the real tree \(\mathbf{T}\). We need to distinguish two cases: \(\xi_{i}\in\ supp\ \mu_{atoms}\) and \(\xi_{i}\in\ supp\ \mu_{s}\). 1. If \(\xi_{i}\in\ supp\ \mu_{s}\), recall that for \(x\in\ supp\ \mu_{s}\) we have \(\deg(x)=2\). Let \(X_{i}\) be a conditionally independent Bernoulli random variable with parameter \(\lambda(\xi_{i})\). If \(X_{i}=1\), we orient the leaf labelled \(i\) to the left of the subtree of the vertex that was labelled \(i\) in \(S_{n}\) and otherwise to the right. 2. If \(\xi_{i}=a\in supp\ \mu_{atoms}\), we do the following. Enumerate the subtrees of \(a\) and write \(\mathbf{S}_{j}<\mathbf{S}_{k}\) if \(\mathbf{S}_{j}\) is to the left of \(\mathbf{S}_{k}\). Now, let \(U_{i}\) be an independent \([0,1]\) uniform random variable. In the combinatorial tree \(T_{n}^{*}\), we consider the subtrees of the parent of the leaf \(i\), each of these Figure 4: an illustration of how to use the branchpoint weight function \(B\) for a given atom \(a\). If \(U_{i}\) is between two thresholds corresponding to two different subtrees of \(a\), then we attach a leaf between the different subtrees in the discrete tree \(T_{n}\). Figure 3: an example for sampling \(T_{13}\). First, we sample \((\xi_{i},i\leq 13)\) from \(\mathbf{T}\); secondly we apply \(\psi_{13}\) to obtain \(S_{13}\) and thirdly we add leaves to some interior vertices (in red) to obtain \(T_{13}\). In the last step, the planar order is determined by \(\lambda\) and \(B\) as well as some additional randomness. corresponds naturally to some subtree \(\mathbf{S}_{j}\) in the real tree. By abuse of notation, we now orient the leaf labelled \(i\) to the left of every subtree such that \(U_{i}<\beta_{a}(\mathbf{S}_{j})\) and to the right of every subtree with \(U_{i}\geq\beta_{a}(\mathbf{S}_{j})\) where we write \[\beta_{a}(\mathbf{S}_{j})=\beta_{a}\left(\frac{1}{Z_{a}}\sum_{k:\mathbf{S}_{k} <\mathbf{S}_{j}}\mu(\mathbf{S}_{k})\right),\] with the normalising constant \(Z_{a}=\sum_{k}\mu(\mathbf{S}_{k})\). 3. If we have \(\xi_{i}=\xi_{j}=a\in supp\ \mu_{atoms}\), i.e. we attach two leaves to the same branchpoint, then we reuse the uniform random variables \(U_{i},U_{j}\) of the previous step. If \(U_{i}<U_{j}\) we orient the leaf labelled \(i\) to the left of the leaf \(j\) and vice versa. 4. Delete the leaf labels to obtain a plane tree \(T_{n}\). **Remark 1.8**.: When constructing \(T_{n}\) and \(T_{m}\) for \(n<m\), we reuse the random variables \((\xi_{i},X_{i},U_{i};i\leq n)\) in the construction of \(T_{m}\). This results in the sequence \((T_{n},n\geq 1)\) being a tree-valued Markov chain with uniform backward dynamics. Indeed, this is true because a backward step corresponds to removing \(\xi_{n}\) from the construction. Once the labels are removed, this corresponds to uniformly choosing a point from \(\xi_{1},\dots,\xi_{n}\) which in turn means choosing a leaf of \(T_{n}\) uniformly. We could keep the leaf labels in step 4.) to obtain a Markov chain of labelled trees. **Definition 1.9**.: Let \(\rho_{\mathbf{T}}\) denote the law of \((T_{n},n\geq 1)\) constructed from \((\mathbf{T},d,r,\mu,\psi,\lambda,B)\) by the above Construction 1.7. We can now state our main theorem. The theorem states that the law of \((T_{n},n\geq 1)\) can be expressed as a mixture of extremal measures which are of the form of \(\rho_{\mathbf{T}}\). This is made rigorous in Section 2.3: the measure \(\nu\) in the following theorem is defined on the Doob-Martin boundary of the Markov chain \((T_{n},n\geq 1)\). Indeed, this theorem characterises the Doob-Martin boundary. **Theorem 1.10**.: _For every tree-valued Markov chain \((T_{n},n\geq 1)\) with uniform backward dynamics there exists a unique probability measure \(\nu\) such that_ \[\mathbb{P}\left((T_{n},n\geq 1)\in\cdot\right)=\int\rho_{\mathbf{T}}\left((T_{n}, n\geq 1)\in\cdot\right)\nu(d\,\textbf{T}). \tag{1.4}\] _Here we **T** is an abbreviation for the decorated planar IP-tree \((\textbf{T},d,r,\mu,\psi,\lambda,B)\)._ This theorem is very similar in spirit to a long list of theorems that seek to classify exchangeable random objects. The most classical one is de-Finetti's theorem that states that the distribution of every sequence of exchangeable real random variables is a mixture of the distribution of sequences of \(i.i.d.\) random variables. Another notable one is Kingman's paintbox theorem that describes every exchangeable partition of \(\mathbb{N}\) as a mixture of paintboxes. The article [10] by Forman et al. also classifies a family of exchangeable objects, hierarchies in their case, by sampling from real trees and the work [11] of Foutel-Rodier et al. classifies various exchangeable objects via combs which are tree-like as well. Gerstenberg [12] classifies exchangeable interval hypergraphs, trees are a special case here, by sampling from a random subset of \([0,1]^{2}\). See Kallenberg [17] for the classical theorems, and in [10] there is a good list of references to similar, modern theorems. **Remark 1.11**.: The work [8] of Evans, Grubel and Wakolbinger forms a basis for a lot of the ideas in this article, in particular Proposition 3.6 corresponds to their main theorem [8, Theorem 8.2]. In their article, the authors study Remy's tree growth [20] - the case of \(\alpha=2\) in Marchal's tree growth - and binary tree-valued Markov chains with the same backwards dynamics. They place a great emphasis on the Doob-Martin boundary of Remy's tree growth chain and the topological properties of the boundary, we discuss these concepts in Section 2.3. This article extends their work as our framework allows for multi-furcating trees instead of binary trees just like \(\alpha\)-stable trees extend the Brownian continuum random tree or like Marchal's tree growth extends Remy's tree growth. Further, even in the case \(\alpha=2\) we believe that our variation of their construction is more descriptive in the form of our decorated planar real trees. **Remark 1.12**.: If \(\mu\) is supported on the leaves of \(\mathbf{T}\), then \(\lambda\) and \(B\) are trivial. Hence, it would suffice to specify \((\mathbf{T},d,r,\mu,\psi)\) in these cases. This in particular is almost surely the case for the Brownian continuum random tree and \(\alpha\)-stable trees. **Example 1.13**.: Assume the decorated planar real tree is given by \(\mathbf{T}=[0,1],r=0\), the usual Euclidean distance \(d=|\cdot|\) and with \(\mu,\psi,\lambda,B\) arbitrary. We observe: 1. For \((\mathbf{T},d,r,\mu)\) to be an IP-tree, if \(\mu(x)>0\) for \(x\in[0,1]\) then \(\mu((x,x+\mu(x)))=0\). Further, if for \(x,y\in[0,1]\) the interval \([x,y]\) does not contain any atoms and \(\mu([x,y])=y-x\), then \(\mu\) restricted to \([x,y]\) is the Lebesgue measure. 2. There is only one choice for \(\psi\), given \(n\) distinct points \(\psi_{n}\) maps them to a line graph with \(n\) edges while keeping the order of labels. 3. Here \(\lambda:[0,1]\to[0,1]\) is an arbitrary function viewed as element of \(L^{1}(\mu_{s})\) where \(\mu_{s}\) is the diffuse part of \(\mu\). For \(\xi_{i}\) in \(\operatorname{supp}(\mu_{s})\) we orient the corresponding leaf in \(T_{n}\) to the left with probability \(\lambda(\xi_{i})\) and to the right otherwise. 4. For any atom \(a\), \(\beta_{a}\) is determined by a single threshold \(\beta_{a}(1)\in[0,1]\). If we then sample a uniform \([0,1]\) random variable \(U_{i}\), we orient the corresponding leaf to the left if \(U_{i}\leq\beta_{a}(1)\) and to the right otherwise. If \(\xi_{i}=\xi_{j}=a\) for \(i\neq j\), we orient the leaf \(i\) to the left of \(j\) if \(U_{i}<U_{j}\) and to the right otherwise. Let us construct \(T_{n}\) in this case: after sampling \(n\) points from \(\mathbf{T}\) and applying \(\psi_{n}\) we receive a line graph of length \(k\leq n\) where \(k\) is the number of distinct points sampled. To every vertex of the line graph - the spine of \(T_{n}\) - except for the two endpoints we attach a one or multiple leaves. For every leaf corresponding to a point \(x\in\operatorname{supp}(\mu_{s})\) we flip a coin with parameter \(\lambda(x)\) to decide if we attach to the left or to the right of the spine. Similarly, we flip a coin with parameter \(\beta_{a}(1)\) for every leaf attached to an atom \(a\) to decide if we attach the leaf on the left or on the right. This results in \((T_{n},n\geq 1)\) being a sequence of growing spines with leaves hanging off on the sides, see Figure 5 for an illustration. **Example 1.14**.: This is the main object of study of [6]: Let \(\ell\in\mathbb{N},\ell\geq 2\) and consider \(S_{n}^{\ell}\) to be the \(\ell\)-ary plane tree of height \(n\), here every vertex has \(\ell\) offspring. Turn \(S_{n}^{\ell}\) into a real tree \(\mathbf{S}_{n}^{\ell}\) by assigning intervals of length \(2^{-k}\) to the edges at distance \(k\) to the root, gluing them at the branchpoints. Let \(\mathbf{T}^{\ell}\) be the completion of \(\bigcup_{n\geq 1}\mathbf{S}_{n}^{\ell}\). Consider now any diffuse probability measure \(\mu\) on the leaves of \(\mathbf{T}^{\ell}\), [9, Theorem 1.5] states that there exists a choice of metric \(d_{\mu}\) on \(\mathbf{T}^{\ell}\) that renders \((\mathbf{T}^{\ell},d_{\mu},r,\mu)\) an IP-tree. Note that \(\mu\) can be thought of as a distribution on \([0,1]\) by considering \(\ell\)-adic expansions. Because \((S_{n}^{\ell},n\geq 1)\) are plane trees, this induces a natural choice of planar order for \((\mathbf{S}_{n}^{\ell},n\geq 1)\) which induces maps \((\psi_{n},n\geq 1)\) for \(\mathbf{T}^{\ell}\). Figure 5: an example for \(T_{11}\) if \(\mathbf{T}=[0,1]\). The red crosses stand for \((\xi_{i},i\leq 11)\). The red number next to them states the outcome of the coin-flip that determines if the leaf is attached left or right of the spine. The blue circle indicates an atom \(a\). Here, we have \(\beta_{a}(1)=1/2\) which means that if \(U_{i}>1/2\) (in red), we orient the corresponding leaf to the right and to the left otherwise. This corresponding Markov chain is also called the PATRICIA chain, see [6] for a study of this in the case of binary trees. PATRICIA stands for _"practical algorithm to retrieve information coded in alphanumeric"_. Given \(z_{1},\ldots,z_{n}\in\{0,\ldots,\ell-1\}^{\infty}\), words of infinite length in the alphabet \(\{0,\ldots,\ell-1\}\), we can construct words \(y_{i},i\leq n\), of finite length such that \(y_{i}\) is an initial segment of \(z_{i}\) for all \(i\leq n\), all \(y_{i}\) are distinct and that \(y_{1},\ldots,y_{n}\) are the minimal length words with this property. These \(y_{1},\ldots,y_{n}\) form a tree with \(n\) leaves, the so-called radix sort tree. Consider now \(\mu\) as measure on \(\{0,\ldots,\ell-1\}^{\infty}\) and let \(Z_{1},\ldots,Z_{n}\) be \(i.i.d.\)\(\mu\)-samples. Then \(T_{n}\) is the radix sort tree corresponding to \(Z_{1},\ldots,Z_{n}\). See Figure 6 for an illustration. Recall from (1.2) that Marchal's tree growth, once properly rescaled, converges to a random real tree. We show that the same is true for any tree-valued Markov chain with uniform backward dynamics but we need to change the rescaling and the topology. We use the Gromov-Prokhorov topology. It was introduced by Gromov [13], see also the survey by Janson [16]. **Definition 1.15**.: Let \((\mathbf{T}_{1},d_{1},r_{1},\mu_{1})\) and \((\mathbf{T}_{2},d_{2},r_{2},\mu_{2})\) be two IP-trees. The Gromov-Prokhorov distance \(d_{\mathrm{GP}}(\mathbf{T}_{1},\mathbf{T}_{2})\) is the infimum of \(\varepsilon>0\) such that exists a measurable subset \(R\subseteq\mathbf{T}_{1}\times\mathbf{T}_{2}\) and a coupling \(\nu\) of \(\mu_{1}\) and \(\mu_{2}\), such that \[\nu(R)\geq 1-\varepsilon\quad\text{and}\quad\sup_{(x,y),(x^{\prime},y^{ \prime})\in R}|d_{1}(x,x^{\prime})-d_{2}(y,y^{\prime})|\leq\varepsilon.\] The map \(d_{\mathrm{GP}}\) is indeed a metric on isometry classes of weighted, complete, separable metric spaces. Further, we want to note that the induced topology is Polish, see [16, Theorem 3.9]. The induced topology is sometimes also called Gromov-weak topology and has an alternative formulation using sampling test-functions, more on this can be found in Athreya et al. [3]. Next, we define the rescaling. Instead of assigning a length \(n^{\beta},\beta<0\) to every edge, we rescale \(T_{n}\) inhomogeneously. Before we do this, we trim the tree: remove every leaf and its corresponding edge. This Figure 6: an example for \(T_{9}\) if \(\mathbf{T}^{3}\) is a \(3\)–ary tree. The red crosses stand for \((\xi_{i},i\leq 9)\), sampled from the leaves of \(\mathbf{T}^{3}\). Figure 7: an example of the trimming and rescaling. On the left, there is \(T_{13}\); in the middle is \(T_{13}^{trim}\) with a number \(k\) indicating an atom of weight \(k/13\) and on the right \(T_{13}^{trim}\) is drawn to scale after the rescaling. The marked edge has length \(3/13\). results in a new tree \(T_{n}^{\text{trim}}\) which is a subtree of \(T_{n}\). For every leaf \(x\in T_{n}\) we distribute mass \(1/n\) to the vertex in \(T_{n}^{\text{trim}}\) that is connected to \(x\) in \(T_{n}\), this defines a probability measure \(\mu_{n}^{\text{trim}}\) on \(T_{n}^{\text{trim}}\). Now, we rescale \(T_{n}^{\text{trim}}\). We do this by defining edge lengths according to an _inhomogenous IP-rescaling_. This means for an edge \((x,y)\) of \(T_{n}^{\text{trim}}\) we set its length to \[d_{n}^{\text{trim}}(x,y)=\left|\mu_{n}^{\text{trim}}\left(F_{T_{n}^{\text{ trim}}}(x)\right)-\mu_{n}^{\text{trim}}\left(F_{T_{n}^{\text{trim}}}(y)\right) \right|. \tag{1.5}\] See Figure 7 for an example. We extend this to a metric \(d_{n}^{\text{trim}}\) on \(T_{n}^{\text{trim}}\) by adding edge lengths along the unique path between two vertices. Let \(r_{n}\) be the root of \(T_{n}^{\text{trim}}\). In this setting, \((T_{n},n\geq 1)\) satisfies a scaling limit. **Theorem 1.16**.: _Let \((T_{n},n\geq 1)\) be a tree-valued Markov chain with uniform backward dynamics. Then there exists a random IP-tree \((\textbf{T},d,r,\mu)\) such that_ \[(T_{n}^{\text{trim}},d_{n}^{\text{trim}},r_{n},\mu_{n}^{\text{trim}}) \xrightarrow{n\to\infty}(\textbf{T},d,r,\mu), \tag{1.6}\] _almost-surely in the Gromov-Prokhorov topology. Further, the law of \((\textbf{T},d,r,\mu)\) is given by_ \[\mathbb{P}\left((\textbf{T},d,r,\mu)\in\cdot\right)=\int\rho_{\textbf{T}}\left( (\textbf{T},d,r,\mu)\in\cdot\right)\nu(d\textbf{T}), \tag{1.7}\] _where \(\nu\) is determined by Theorem 1.10._ This answers a question of Forman [9, Question 2] if IP-trees arise as scaling limits of suitably rescaled discrete random trees. **Remark 1.17**.: This theorem is in a sense optimal: both homogeneous rescaling as well as the Gromov-Hausdorff-Prokhorov topology are unsuitable in general. To see why a homogeneous rescaling does not work, join two typical realisations of an \(\alpha\)-stable tree \(\mathcal{T}_{\alpha}\) and an \(\alpha^{\prime}\)-stable tree \(\mathcal{T}_{\alpha^{\prime}}\) at the root with \(\alpha>\alpha^{\prime}\). If we were to rescale by \(n^{\beta}\), then we would need both \(\beta=-1+\frac{1}{\alpha}\) and \(\beta=-1+\frac{1}{\alpha^{\prime}}\) according (1.2) for the correct convergence. This is of course not possible. To see why the Gromov-Hausdorff-Prokhorov topology is too strong, construct a real tree **T** in the following way: for \(k\geq 1\), let \(a_{k}\) be an atom of weight \(2^{-k}\). Connect \(a_{k}\) to the root \(r\) by an interval segment of length \(1-2^{-k}\). One can see that for all \(n\geq 1\) we have \(d_{GHP}(T_{n},\textbf{T})\geq 1/2\). This is because there is \(k\in\mathbb{N}\) such that for all \(i\leq n\) we have \(\xi_{i}\neq a_{k}\). This implies that \(T_{n}\) will not converge after rescaling, in essence this is due to **T** not being a compact metric space. The structure of this paper is as follows: in Section 2 we introduce an encoding for tree-valued Markov chains, namely dendritic systems. Further, we discuss our notion of planarity for real trees and some of the measure theoretic aspects associated with the extremal decomposition of tree-valued Markov chains. In Section 3 we prove Theorem 1.10 and in Section A we prove an important auxiliary statement. Lastly in Section 4 we prove Theorem 1.16. ## 2 Preliminaries ### Planar real trees In this section we introduce a notion of planarity for real trees. Let \(\mathbb{T}\) be the space of finite plane trees. Let \(\mathbb{T}^{\ell}\) be the space of finite plane trees with leaves labelled by \([n]\), for some \(n\in\mathbb{N}\). Internal vertices are allowed to be labelled but do not have to be, in this case there will be fewer than \(n\) leaves so that the total number of labelled vertices is \(n\). While the definition of planar real trees may seem complicated, the idea is simple. For a given real tree \((\textbf{T},r)\), there already exists a natural map which takes a sequence \(x_{1},\ldots,x_{n}\in\textbf{T}\) to a combinatorial tree by _discretizing_ the subtree spanned by \(x_{1},\ldots,x_{n}\). We will now enhance these trees to be plane trees and require the planarity to be suitably consistent as \(x_{1},\cdots,x_{n}\) vary. Further, we want to keep track which \(x_{i}\) corresponds to which vertex in the combinatorial tree. This is done by labelling some of the vertices. See Figure 8 for an illustration of these ideas which we formalise in the following definitions. **Definition 2.1**.: Let \((\mathbf{T},r)\) be a real tree and \(x_{1},\ldots,x_{n}\) a finite sequence in \(\mathbf{T}\). In the following, \(<\) denotes the genealogical partial order in \(\mathbf{T}\) induced by \(x<y\) if \([r,x]\subsetneq[r,y]\). 1. We call \((x_{i},i\leq n)\) totally unordered if for any \(i\neq j\) we have \(x_{i}\nlessdot x_{j}\) and \(x_{j}\nlessdot x_{i}\). 2. \(span(x_{1},\ldots,x_{n},r)\) denotes the minimal connected subset of \(\mathbf{T}\) which includes \(\{r,x_{i};i\leq n\}\). We view \(span(x_{1},\ldots,x_{n},r)\) as a real tree rooted at \(r\). **Definition 2.2**.: We call \((\mathbf{T},r,\psi)\) a planar real tree if \((\mathbf{T},r)\) is a rooted real tree and \(\psi=\{\psi_{n},n\geq 2\}\) is a family of measurable maps \(\psi_{n}:\mathbf{T}^{n}\to\mathbb{T}^{\ell}\) satisfying the following properties: 1. As unlabelled non-plane tree, the tree \(\psi_{n}(x_{1},\ldots,x_{n})\) is the combinatorial tree corresponding to \(span(x_{1},\ldots,x_{n},r)\). 2. The vertex of \(\psi_{n}(x_{1},\ldots,x_{n})\) corresponding to \(x_{i}\) is labelled \(i\). 3. \(\psi\) is consistent in the sense that for every \(n,m\in\mathbb{N}\) and every totally unordered \(x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}\in\mathbf{T}\) we have that \(\psi_{n}(x_{1},\ldots,x_{n})\) embeds into \(\psi_{n+m}(x_{1},\ldots,x_{n},y_{1},\ldots,y_{m})\) respecting the planar order and leaf labels. 4. We extend \(\psi_{n}\) to sequences that are not unordered. First, we apply \(\psi_{k},k<n\) to the totally unordered sequence \((x_{i},i\in I)\) with \(k=|I|\) that maximises \(span(r,x_{i};i\in I)\). Then, for each \(i\notin I\) we split the edge corresponding to \(x_{i}\) into two edges and label the middle vertex \(i\). If \(x_{i}\) is already a branchpoint in the span, we label the corresponding vertex \(i\). See Figure 8 for an example. Besides generalising combinatorial labelled trees, there is another reason for why this is a fairly natural notion which will be the following proposition. Recall that a continuous function \(g:[0,t]\to[0,\infty)\) with \(g(0)=g(t)=0\) describes a real tree \(\mathbf{T}_{g}\) via a quotient space construction. For \(s\leq u\) set \[s\sim_{g}u\quad\text{iff}\quad g(s)=\inf_{r\in[s,u]}g(r)=g(u).\] Define \(\mathbf{T}_{g}=[0,t]/\sim_{g}\). The metric is then given by \[d_{\mathbf{T}_{g}}(x,y)=g(x)+g(y)-2\inf_{z\in[x,y]}g(z),\] for \(x,y\in[0,t]\). Here, we abuse notation in this regards to view elements of \([0,t]\) as elements of \(\mathbf{T}_{g}\), otherwise we write \(x^{*}\in[0,t]\) for a representative of \(x\in\mathbf{T}_{g}\). See for example Evans [7, Example 3.14] for more details of this construction. An example for such a function \(g\) is the contour function \(C_{T}\) of a finite plane tree. Informally, this can be defined by a particle tracing the contour of the tree at unit speed, \(C_{T}(t)\) measures the distance to the root at time \(t\). Given \(C_{T}\) we can retrieve the planar order: take two leaves \(x,y\in\mathbf{T}\), \(x,y\) correspond to two unique maxima of \(C_{T}\), say \(C_{T}(u)=x,C_{T}(v)=y\). If \(u<v\), then the leaf in \(T\) corresponding to \(x\) is _to the left_ of the leaf corresponding to \(y\). This determines the planar order on \(T\) uniquely. Figure 8: an example for a planar real tree with 13 marked points and their image under \(\psi_{13}\). Note that not all internal points are labelled. **Proposition 2.3**.: _For a given leaf-labelled plane tree \(T\) without vertices of degree \(2\) there is a choice of maps \((\psi_{n})_{n\geq 2}\) such that if \(T\) has \(k\) leaves, then (after deleting leaf labels) \(\psi_{k}\)(leaves of \(\textbf{T}_{C_{T}})=T\). This renders \(\textbf{T}_{C_{T}}\) into a planar real tree._ We remark that we can do the same for trees with vertices of degree \(2\) and retrieve the information about vertices with degree \(2\) by inspecting the metric on \(\textbf{T}_{C_{T}}\). Proof.: We first describe the maps \(\psi_{n}\). Denote the leaves of \(\textbf{T}_{C_{T}}\) by \(y_{1},\ldots,y_{k}\) and denote the root of \(\textbf{T}_{C_{T}}\) by \(\rho\). Let \(x_{1},\ldots,x_{n}\in\textbf{T}_{C_{T}}\) be totally unordered. For \(x,y\in\textbf{T}_{C_{T}}\), we denote \(x\gets y\) if for all representatives \(x^{*},y^{*}\in[0,t]\) of \(x\) and \(y\) we have \(x^{*}<y^{*}\). This can only be the case if \(x\) and \(y\) are unordered in \(\textbf{T}_{C_{T}}\). Let \(x_{1},\ldots,x_{n}\in\) be totally unordered and let \(T_{n}\) be the leaf-labelled non-plane tree corresponding to \(span(x_{1},\ldots,x_{n},r)\). We need to equip \(T_{n}\) with a planar structure to make it into a plane tree. Because \(x_{1},\ldots,x_{n}\in\) is totally unordered, \(x_{i}\) is a leaf of \(span(x_{1},\ldots,x_{n},r)\) for every \(i\) and corresponds to a leaf in \(T_{n}\). Furthermore, this implies that there is a permutation \(\sigma\) on \([n]\) such that \(x_{\sigma(1)}\gets x_{\sigma(2)}\leftarrow\ldots\gets x_{\sigma(n)}\). We use this permutation to determine the planar order of leaves of \(T_{n}\), for all \(i,j\in[n];i\neq j\) if \(\sigma(i)<\sigma(j)\) then we set the leaf labelled \(i\) to be on the left of the leaf \(j\). The tree structure of \(T_{n}\) determines the planar order of all other vertices, we choose \(\psi_{n}(x_{1},\ldots,x_{n})\) to be the resulting plane tree. From this we can immediately see that \(\psi_{k}\)(leaves of \(\textbf{T}_{C_{T}})=T\). Indeed, if \(x\) and \(y\) are two leaves of \(T\) such that \(x\) is to the left of \(y\), then \(x\gets y\) by perceiving them as elements of \(\textbf{T}_{C_{T}}\). Because the left-right ordering of the leaves of \(T\) determines planar structure uniquely, and because the combinatorial tree corresponding to \(\textbf{T}_{C_{T}})\) is \(T\) without the planar order, we have that \(T=\psi_{k}\)(leaves of \(\textbf{T}_{C_{T}}\)) - after we have deleted the leaf labels. Lastly, we need to show that if \(x_{1},\ldots,x_{n},y\in\textbf{T}_{C_{T}})^{n+1}\) is totally unordered, then \(\psi_{n}(x_{1},\ldots,x_{n})\) embeds into \(\psi_{n+1}(x_{1},\ldots,x_{n},y)\). This follows from two facts: Firstly, the combinatorial tree corresponding to \(span(x_{1},\ldots,x_{n},r)\) embeds naturally into the combinatorial tree corresponding to \(span(x_{1},\ldots,x_{n},y,r)\). Secondly, this embedding respects the planar order of \(\psi_{n}(x_{1},\ldots,x_{n})\) and \(\psi_{n+1}(x_{1},\ldots,x_{n},y)\). Indeed if \(\sigma^{(n)}\) and \(\sigma^{(n+1)}\) are the two permutations used in the construction of the trees, then \(\sigma^{(n+1)}\) restricted to \([n]\) is \(\sigma^{(n)}\) in the sense that if \(\sigma^{(n+1)}(i)<\sigma^{(n+1)}(j)\) then \(\sigma^{(n)}(i)<\sigma^{(n)}(j)\). **Remark 2.4**.: This construction can be extended to any real tree \(\textbf{T}_{g}\) encoded by some function \(g\). This includes continuum random trees like the Brownian continuum random tree. In the above proof we made use of referring to leaves being _left or right_ of one another. We introduce a similar notation for subtrees at a given point in **T**, recall our definition of subtrees from Definition 1.3. **Definition 2.5**.: For a planar real tree \((\textbf{T},r,\psi)\), a point \(x\in\textbf{T}\) and any two subtrees \(\textbf{S}_{1},\textbf{S}_{2}\) attached to \(x\) we say that \(\textbf{S}_{1}\) is _to the left_ (respectively to the right) of \(\textbf{S}_{2}\) if this is the case for the labelled leaves of \(\psi_{2}(s_{1},s_{2})\) for arbitrary \(s_{1}\in\textbf{S}_{1}\backslash\{x\},s_{2}\in\textbf{S}_{2}\backslash\{x\}\). We will write \(\textbf{S}_{1}<\textbf{S}_{2}\) to denote that \(\textbf{S}_{1}\) is to the left of \(\textbf{S}_{2}\). Note that the choice of \(s_{1}\in\textbf{S}_{1}\backslash\{x\}\) does not matter due to the consistency properties of \(\psi\). ### Dendritic systems In this section we introduce the notion of dendritic systems. These objects aim to generalise finite leaf-labelled plane trees to infinitely many labels with a strong focus on the leaves. The reason for considering dendritic systems is that they allow us to encode a tree-valued Markov chain as a more static object. This notion is similar to that of didendritic systems of [8, Def. 5.8] which was introduced to generalise binary trees. Our notion has the advantage of accommodating multi-furcating trees as well. We will surrender the notion of edges and keep only the ancestral relation. By ancestral relation we mean that \(x\) precedes \(y\) in a combinatorial tree \(T\), \(x\preceq y\), if the path from the root to \(y\) contains \(x\). Any vertex will be thought of as a most recent common ancestor of two leaves. In the following definition, these ideas result in conditions \((C1)\)-\((C4)\). There, \((i,j)\) denotes the most recent common ancestor of two leaves labelled \(i,j\). We require that leaves do not precede any other vertices \((C1)\) and that leaves are descendants of their ancestors \((C2)\). For two vertices, \((i,j)\) and \((k,\ell)\), we require that there is another vertex which acts as most recent common ancestor, phrased a minimal element with respect to \(\preceq\), \((C4)\). \((C3)\) and \((C4)\) act as analogues of the _no-cycles_ condition of combinatorial trees. Further, we want to be able to encode the planar order of combinatorial trees as well. This is again done by specifying a left-right ordering of vertices. Recall that in the combinatorial tree the left-right order is derived from its Ulam-Harris encoding, _for example the word \((0,1,1,5,3)\) is to the left of the word \((0,1,2,1)\)_. We encode this left-right ordering by introducing a planarity function \(p\) where \(p(x,y)=1\) signifies that \(y\) is to the right of \(x\), respectively \(p(x,y)=-1\) means that \(y\) is to the left of \(x\). If \(x\) and \(y\) are ordered by the ancestral relation, we do not assign any left-right ordering. These ideas result in conditions \((P1)\)-\((P4)\): \((P1)\) states that if \(x\) is to the right of \(y\), then \(y\) is to the left of \(x\). \((P2)\) states that there is no left-right relation between vertices that satisfy \(x\preceq y\) or \(y\preceq x\). \((P3)\) states that if \(y\) is to the right of \(x\) and \(z\) is to the right of \(y\) then \(z\) is also to the right of \(x\). Lastly, \((P4)\) states that if \(y\) is to the right of \(x\) then also any descendant of \(y\) is to the right of \(x\). In Lemma 2.8 we will show that this does indeed generalise plane, leaf-labelled, combinatorial trees. **Definition 2.6** (Dendritic system).: Let \(L\subset\mathbb{N}\) be a finite or countably infinite set of leaf labels. A planar dendritic system \(\mathcal{D}=(L,\sim,\preceq,p)\) is the collection of the following objects: an equivalence relation \(\sim\) on \(L\times L\), we denote the space of equivalence classes as \(T\); a genealogical partial order \(\preceq\) on \(T\) and a planarity function \(p:T\times T\to\{0,1,-1\}\) satisfying the following properties for all \(i,j,k,\ell\in L\): 1. \((i,j)\sim(j,i)\), and \((i,j)\sim(k,k)\) if and only if \(i=j=k\). 2. \((i,j)\preceq(i,i)\). 3. \((i,j)\preceq(k,\ell)\) and \((k,\ell)\preceq(i,j)\) if and only if \((i,j)\sim(k,\ell)\). 4. \(a((i,j),(k,\ell))=\min_{\preceq}\{(i,j),(k,\ell),(i,\ell),(i,k),(j,\ell),(j,k)\}\) exists in \(T\). Further, the planarity function \(p\) satisfies for all \(x,y,z\in T\): 1. \(p(x,y)=-p(y,x)\). 2. \(p(x,y)=0\) if and only if \(x\preceq y\) or \(y\preceq x\). 3. If \(p(x,y)=1\) and \(p(y,z)=1\) then \(p(x,z)=1\). 4. If \(p(x,y)=1\) and \(y\preceq z\) then \(p(x,z)=1\). Here \(x\prec y\) corresponds to \(x\preceq y\) and \(x\neq y\). We will always refer to \(\{(i,i);i\in L\}\) as the leaves of \(\mathcal{D}\). Moreover, consider two arbitrary vertices \((i,j)\) and \((k,\ell)\). Unless there is an ancestral relationship between \((i,j)\) and \((k,\ell)\), \((P4)\) allows us to determine \(p((i,j),(k,\ell))\), namely \(p((i,j),(k,\ell))=p(i,k)=p(j,k)=p(i,\ell)=p(j,\ell)\) where we abuse notation to write \(i=(i,i)\). We can rephrase this as follows. **Lemma 2.7**.: _For a dendritic system \((L,\sim,\preceq,p)\), \(p\) is uniquely determined by \(\preceq\) and \(\{p(i,j);i,j\in L\}\) where we write \(i=(i,i)\)._ In the case where \(L\) is finite, dendritic systems precisely correspond to rooted plane trees: **Lemma 2.8**.: _If \(L\) is finite, there is a natural bijection between the set of dendritic systems \(\mathcal{D}=(L,\sim,\preceq,p)\) and the set of leaf-labelled rooted plane trees \((T,r)\) without vertices of degree \(2\) (except for possibly the root) and leaves labelled by \(L\)._ **Remark 2.9**.: Similarly to the space of trees, we equip the space of dendritic systems with the \(\sigma\)-algebra that is generated by finite projections. In the case of trees, we project onto a finite ball around the root and in the case of dendritic systems we restrict the dendritic system to to \([n]\cap L\). Proof of Lemma 2.8.: On the one hand, let \((T,r)\) be a leaf-labelled rooted plane tree, we can assume that every edge is directed towards the root. For two leaves labelled \(i,j\in T\), we let \(b(i,j)\in T\) be their most recent common ancestor. Define a dendritic system as follows: \((i,j)\sim(k,\ell)\) if \(b(i,j)=b(k,\ell)\), \((i,j)\preceq(k,\ell)\) if there is a directed path from \(b(k,\ell)\) to \(b(i,j)\), and \(p(i,j)=1\) for two leaves labelled \(i,j\) if \(i\) precedes \(j\) in lexicographic order of the Ulam-Harris encoding. By Lemma 2.7, this determines \(p\) uniquely. On the other hand, let \(\mathcal{D}=(L,\sim,\preceq,p)\) be a dendritic system. We want to define a plane leaf-labelled tree \((T,r)\). The equivalence classes of \((L\times L,\sim)\) are the vertices and we add an edge between \(x\) and \(y\) if there is no \(z\) such that \(x\prec z\prec y\). Because \(L\) is finite, this yields a tree. We direct an edge \((x,y)\) to \(x\) if \(x\prec y\) and to \(y\) otherwise. The root \(r\) is now the minimal element of this directed tree, it exists due to \((C3)\) and \((C4)\). Lastly, we need to impose a planar order on \(T\), i.e. a valid Ulam-Harris encoding of the vertices. This is done iteratively from the root \(r\), encoded by \(\emptyset\). Then every vertex has finitely many children \(x_{1},\ldots,x_{n}\). Due to \((P3)\) there is a permutation \(\sigma\) such that \(p(x_{\sigma(i)},x_{\sigma(j)})=1\) if \(i<j\). \(x_{i}\) is then encoded by its parents encoding appended with \(\sigma(i)\). Loosely speaking, \(p\) determines a permutation at each branchpoint of \(T\) which we use to obtain a planar order. One can see that the two procedures described above are inverse to each other. This allows us to illustrate a link between dendritic systems and planar real trees. Let \((\textbf{T},r,\psi)\) be a planar real tree. **Corollary 2.10**.: _For any totally unordered sequence \((x_{1},x_{2},\ldots)\in\textbf{T}\) there is a dendritic systems \(\mathcal{D}\) on \(\mathbb{N}\) such that the restriction of \(\mathcal{D}\) to \([n]\) is isomorphic to \(\psi_{n}(x_{1},\ldots,x_{n})\) as rooted plane leaf-labelled trees._ Proof.: By Lemma 2.8 there exists a dendritic system \(\mathcal{D}^{(n)}\) with leaves labelled \([n]\) such that the plane leaf-labelled tree corresponding to \(\mathcal{D}^{(n)}\) is \(\psi_{n}(x_{1},\ldots,x_{n})\). Because \((\textbf{T},r,\psi)\) is a planar real tree \(\psi_{n}(x_{1},\ldots,x_{n})\) embeds into \(\psi_{n+m}(x_{1},\ldots,x_{n+m})\) which means that we have that \(\mathcal{D}^{(n+m)}\) restricted to \([n]\) is \(\mathcal{D}^{(n)}\). This implies the existence of a dendritic system \(\mathcal{D}\) with the desired property. The corollary above directly yields a method to construct random dendritic systems: assume \((\textbf{T},d,r,\psi)\) is endowed with a diffuse probability measure \(\mu\) that puts all mass on the leaves. Sample \((\xi_{1},\xi_{2},\ldots)\)\(i.i.d.\) from \(\mu\), then \((\xi_{1},\xi_{2},\ldots)\) gives rise to a random dendritic system. In light of the main theorem, Theorem 1.10, this already foreshadows how we want to use dendritic systems: **Proposition 2.11**.: _Any tree-valued Markov chain \((T_{n},n\geq 1)\) with uniform backward dynamics corresponds to an exchangeable dendritic system and vice versa._ We should specify precisely how we define exchangeability for dendritic systems. Given a finite permutation \(\sigma\) on the leaf labels and a dendritic system \(\mathcal{D}=(L,\sim,\preceq,p)\) we define the dendritic system \(\mathcal{D}^{\sigma}=(L,\sim^{\sigma},\preceq^{\sigma},p^{\sigma})\) by 1. \((i,j)\sim^{\sigma}(k,\ell)\) if and only if \((\sigma(i),\sigma(j))\sim(\sigma(k),\sigma(\ell))\), 2. \((i,j)\preceq^{\sigma}(k,\ell)\) if and only if \((\sigma(i),\sigma(j))\preceq(\sigma(k),\sigma(\ell))\), 3. \(p^{\sigma}(i,j)=p(\sigma(i),\sigma(j))\). **Definition 2.12**.: For a random dendritic system \(\mathcal{D}\) we say that \(\mathcal{D}\) is 1. exchangeable, if \(\mathcal{D}\) and \(\mathcal{D}^{\sigma}\) have the same distribution for every finite permutation \(\sigma\) on the leaf labels; 2. ergodic, if we have that for any event \(A\) we have \(\mathbb{P}(\{\mathcal{D}\in A\})\in\{0,1\}\) whenever \(\mathbb{P}(\{\mathcal{D}\in A\}\Delta\{\mathcal{D}^{\sigma}\in A\})=0\) for every finite permutation \(\sigma\) on the leaf labels. Proof of Proposition 2.11.: We will write \(\mathcal{D}|_{[n]}\) if we restrict a dendritic system to the leaves labelled by \([n]\), respectively the equivalence classes of \([n]\times[n]\). Let us describe how to encode \((T_{n},n\geq 1)\) in a dendritic system. To this end, we define a sequence of dendritic systems \((\mathcal{D}_{n}=([n],\sim_{n},\preceq_{n},p_{n}))_{n\in\mathbb{N}}\). For fixed \(n\), label the leaves of \(T_{n}\) uniformly by \([n]\) and let \(\mathcal{D}_{n}\) be the corresponding dendritic system by Lemma 2.8. Note that \(\mathcal{D}_{n}\) and \(\mathcal{D}_{n+m}|_{[n]}\) agree in law. Indeed, this is true because of the uniform backward dynamics as restricting \(\mathcal{D}_{n+m}\) to \([n]\) corresponds to \(m\) steps backwards in the Markov chain. Hence \((\mathcal{D}_{n})_{n\in\mathbb{N}}\) forms a consistent family and by the Daniell-Kolmogorov extension theorem there exists a dendritic system \(\mathcal{D}\) such that \(\mathcal{D}|_{[n]}=\mathcal{D}_{n}\) in distribution, for all \(n\in\mathbb{N}\). On the other hand, if we are given a dendritic system \(\mathcal{D}\), then \((\mathcal{D}|_{[n]},n\geq 1)\) corresponds to a sequence of plane leaf-labelled trees by Lemma 2.8. Remove the labels to obtain a sequence of plane leaf-labelled trees \((T_{n},n\geq 1)\). Because \(\mathcal{D}\) is exchangeable, we can see that the backward dynamics of \((T_{n},n\geq 1)\) are uniform by considering a uniform permutation of the leaf labels. In the proof of Theorem 1.10 we will decompose the law of \((T_{n},n\geq 1)\) into extremal measures. This is in the sense of convex combinations of distributions of \((T_{n},n\geq 1)\) for different choices of \((T_{n},n\geq 1)\). Call \((T_{n},n\geq 1)\)_extremal_ if its distribution is extremal in the space of probability measures of tree-valued Markov chains with uniform backward dynamics. These are precisely the tree growth processes that correspond to ergodic dendritic systems. **Proposition 2.13**.: _The Markov chain \((T_{n},n\geq 1)\) is extremal if and only if the associated dendritic system is ergodic._ The proof of this proposition is a straightforward generalisation of [8, Proposition 5.19] and we refer to the book of Kallenberg [18, Theorem A1.4] for background on ergodic decompositions. Due to the above propositions we will study exchangeable, ergodic dendritic systems instead of tree-valued Markov chains with uniform backward dynamics. ### Doob-Martin boundary viewpoint Let use elaborate on the connection of (1.1) and the Doob-Martin boundary. In the case of Marchal's tree growth, we neglect the dependence on \(\alpha\). If we consider a conditioned version of Marchal's tree growth of the form \[\mathbb{P}\big{(}T_{m}^{M}\in\cdot\big{|}T_{n}^{M}=T\big{)}, \tag{2.1}\] for \(m<n\), then we do not need to know how exactly the tree growth is defined to construct \(T_{m}^{M}\). Instead, we only need to know that Marchal's tree growth possesses uniform backward dynamics. This means iteratively remove uniform leaves to obtain \((T_{m}^{M},1\leq m\leq n)\) under the conditioned measure. This means that under the conditioned measure \((T_{m}^{M},1\leq m\leq n)\) is a tree-valued Markov chain with uniform backward dynamics with finite time horizon. If we can find a sequence of trees \((T^{(n_{k})},k\geq 1)\) such that \(n_{k}\to\infty\) as \(k\to\infty\) and such that \[\lim_{k\to\infty}\mathbb{P}\big{(}T_{m}^{M}\in\cdot\big{|}T_{n_{k}}^{M}=T^{(n _{k})}\big{)}\quad\text{exists}, \tag{2.2}\] then we obtain a tree-valued Markov chain with infinite time horizon. The procedure above is related to the Doob-Martin boundary of the Markov chain \((T_{n}^{M},n\geq 1)\). Our references here are the book of Woess [21, Chapter 7] for the general case and [8, Section 2 and 3] for tree-valued Markov chains, see also [8] for more references. Recall that we write \(\mathbb{T}\) for the space of plane trees so that \((T_{n}^{M},n\geq 1)\) is a \(\mathbb{T}\)-valued Markov chain. In the following we abbreviate \(e=T_{1}^{M}\), the unique tree consisting of a single edge. We also write \(\mathbb{P}^{S}\) for the probability measure under which \((T_{n}^{M},n\geq 1)\) is Marchal's tree growth with \(T_{1}^{M}=S\), \(\mathbb{P}^{S}\)-almost surely. For two trees \(S,T\) with \(m<n\) leaves respectively we define the probability that starting from \(S\) we will ever see \(T\) \[p(S,T):=\mathbb{P}^{S}(T_{\ell}^{M}=T\text{ for some }\ell)=\mathbb{P}^{S}(T_{n-m+1} ^{M}=T).\] Indeed, because we add a leaf in every step of the Markov chain, this can only happen after \(n-m\) steps. We use this to define the Doob-Martin kernel \(K\) of \(T^{M}\) for \(S,T\in\mathbb{T}\) by \[K(S,T):=\frac{p(S,T)}{p(e,T)}\] whenever \(p(e,T)>0\). Implicitly we now restrict our space \(\mathbb{T}\) to \(\{T\in\mathbb{T}:p(e,T)>0\}\), if \(p(e,T)=0\) we define \(K(S,T)=0\) for all \(S\). Let \(\Pi\) be the transition matrix of Marchal's tree growth. We then have for \(S\neq T\), \[\sum_{T^{\prime}\in\mathbb{T}}\Pi(S,T^{\prime})K(T^{\prime},T)=K(S,T). \tag{2.3}\] This is not true for \(S=T\). Hence, \(K(\cdot,T)\) is almost a harmonic function. Observe for \(S\) with \(m\) leaves and \(T\) with \(n>m\) leaves: \[K(S,T)=\frac{1}{\mathbb{P}^{\mathrm{e}}(T_{m}^{M}=S)}\mathbb{P}^{\mathrm{e}} \big{(}T_{m}^{M}=S\big{|}T_{n}^{M}=T\big{)}=\frac{1}{C(S)}\mathbb{P}^{\mathrm{ e}}\big{(}T_{m}^{M}=S\big{|}T_{n}^{M}=T\big{)},\] where the constant \(C(S)\) depends on \(S\) but not on \(T\). This illustrates the connection between the kernel \(K\) and (2.1). One can then show that \(K(\cdot,T)\neq K(\cdot,T^{\prime})\) whenever \(T\neq T^{\prime}\). Indeed, if \(S\) has the same number of leaves as \(T\) then \(K(S,T)\neq 0\) if and only if \(S=T\). This yields a bijection between \(\mathbb{T}\) and \(\{K(\cdot,T),T\in\mathbb{T}\}\). The advantage is that \(\{K(\cdot,T),T\in\mathbb{T}\}\subset\mathbb{R}^{\mathbb{T}}_{+}\) and that it turns out to be precompact under the induced topology. We let \(\overline{\mathbb{T}}\) be the closure of \(\mathbb{T}\) in \(\mathbb{R}^{\mathbb{T}}_{+}\), the so-called _Doob-Martin compactification_ of \(\mathbb{T}\). The set \(\partial\mathbb{T}=\overline{\mathbb{T}}\backslash\mathbb{T}\) is called the _Doob-Martin boundary_ of the Markov chain \(T^{M}\). We equip \(\overline{\mathbb{T}}\) with its Borel-\(\sigma\)-algebra. We write \(K(\cdot,b)\) for an element \(b\in\partial\mathbb{T}\) of the boundary. These considerations lead to the following statement, for more details see [8, Section 2 and 3]. **Proposition 2.14**.: _[_8_, Corollary 3.10]_ _For a sequence of trees \((T^{(n_{k})},k\geq 1)\) where \(T^{(n_{k})}\) has \(n_{k}\) leaves and \(n_{k}\to\infty\), the limit in (2.2) exists if and only if \((T^{(n_{k})},k\geq 1)\) converges in the Doob-Martin boundary \(\partial\mathbb{T}\)._ By choosing \(T^{(n)}=T_{n}\) where \((T_{n},n\geq 1)\) has uniform backward dynamics, we immediately obtain the following consequence. **Theorem 2.15** (Boundary convergence).: _A tree-valued Markov chain \((T_{n},n\geq 1)\) with uniform backward dynamics converges almost surely in \(\overline{\mathbb{T}}\), the limit is supported in \(\partial\mathbb{T}\). In particular this is the case for Marchal's tree growth._ This is true because \((T_{n},n\geq 1)\) has the same backwards dynamics as Marchal's tree growth and hence the same Doob-Martin boundary. This is a general fact in the abstract setting, see [21, Theorem 7.19]. A natural consequence of this statement is that we want to identify the limiting distribution in a more tractable object. This is done in our main theorem, Theorem 1.10. There is a second, general statement related to the Doob-Martin boundary. It concerns itself with \(\Pi\)-harmonic functions, recall from (2.3) that \(K\) is closely connected to harmonic functions. In fact, for fixed \(b\in\partial\mathbb{T}\) the function \(K(\cdot,b)\) is harmonic. Every other harmonic function can be decompositioned as follows, in the general setting this is [21, Theorem 7.45]. **Theorem 2.16** (Integral representation).: _For a harmonic function \(h:\mathbb{T}\to\mathbb{R}_{+}\) with \(h(e)=1\) there exists an unique probability measure \(\nu^{h}\) on \(\partial\mathbb{T}\) such that for every \(T\in\mathbb{T}\) we have_ \[h(T)=\int_{\partial\mathbb{T}}K(T,b)\nu^{h}(db).\] Recall that the set of probability distributions of tree growth processes is a convex set. A distribution is called extremal if it cannot be written as a non-trivial convex combination of two other distributions. To conclude this section, we state that any probability measure on tree-valued Markov chains can be decomposed into its extremal elements. Due to this statement, it suffices to consider tree growth processes \((T_{n},n\geq 1)\) whose distribution is extremal. **Corollary 2.17**.: _The set of extremal distributions can be parameterised by \(\{\mu^{b},b\in\partial\mathbb{T}\}\). Further, for any Markov chain \((T_{n},n\geq 1)\) with uniform backward dynamics there exists a probability measure \(\hat{\nu}\) such that_ \[\mathbb{P}\left((T_{n},n\geq 1)\in\cdot\right)=\int_{\partial\mathbb{T}}\mu^{b }\left((T_{n},n\geq 1)\in\cdot\right)\hat{\nu}(db).\] Proof.: This will follow from Theorem 2.16 once we show that there is a correspondence between harmonic functions and Markov chains. Given \((T_{n},n\geq 1)\) denote its limit in \(\partial\mathbb{T}\) by \(B\), according to Theorem 2.15 this exists almost surely. We define a harmonic function by \[h(T)=\int_{\partial\mathbb{T}}K(T,b)\mathbb{P}(B\in db). \tag{2.4}\] On the other hand, given a harmonic function \(h^{\prime}\), we define a new Markov chain on the set \(\{T\in\mathbb{T}:h^{\prime}(T)>0\}\) by its transition matrix, \[\Pi^{h^{\prime}}(S,T)=\frac{1}{h^{\prime}(S)}\Pi(S,T)h^{\prime}(T),\] where \(\Pi\) is the transition matrix of Marchal's tree growth. This is a Doob \(h\)-transform. If we now choose \(h^{\prime}=h\) as in (2.4), we obtain the distribution of the Markov chain \((T_{n},n\geq 1)\). This follows from a computation that when we condition on \(\{T_{N}=T\}\), the distribution of the initial segment \((T_{n},n\leq N)\) is given by an \(h\)-transform with \(K(\cdot,T)\). This computation is straight-forward and can be found in [8, Chapter 2]. By then taking the limit \(N\to\infty\) and Theorem 2.15 we obtain the correspondence between Markov chains and harmonic functions. We leave it to the reader to check that a convex combination of harmonic functions translates to a convex combination of the distributions associated to the Markov chains. The corollary now follows from Theorem 2.16, \(h\) corresponds to an extremal distribution if and only if \(\nu^{h}=\delta_{b}\) for some \(b\in\partial\mathbb{T}\). ## 3 Proof of Theorem 1.10 Recall that we can encode a tree-valued Markov chain \((T_{n},n\geq 1)\) with uniform backward dynamics in an exchangeable dendritic system in Proposition 2.11, recall the definition of dendritic systems from Definition 2.6. Recall also that extremal tree growth processes correspond to ergodic dendritic systems. The proof of Theorem 1.10 consists of three steps: we decompose the distribution of \((T_{n},n\geq 1)\) into extremal measures with Corollary 2.17, we then show that for every extremal distribution there is _some_ decorated planar real tree and lastly we show that this can be chosen to be a IP-tree. The latter two steps are made up of Propositions 3.3 and 3.4. First, we will state how to sample a dendritic system \(\mathcal{D}=(\mathbb{N},\sim,\preceq,p)\) from a decorated planar real tree \((\mathbf{T},d,r,\mu,\psi,\lambda,B)\). Recall that \(\psi\) is a planar order for the rooted, weighted real tree \((\mathbf{T},d,r,\mu)\), \(\lambda\) a branch weight function and \(B\) a branchpoint weight function. We split the construction into two parts, sampling \((\mathbb{N},\sim,\preceq)\) from the tree and determining the planarity function \(p\) using \((\psi,\lambda,B)\) and extra randomness. **Construction 3.1**.: Sample a sequence \(\{\xi_{i}\}_{i\in\mathbb{N}}\)\(i.i.d.\) from \(\mu\). We then define for \(i,j,k,\ell\in\mathbb{N}\): 1. \((i,i)\sim(k,\ell)\) if and only if \(i=k=\ell\). 2. \((i,j)\sim(k,\ell)\) for \(i\neq j,k\neq l\) if and only if \([r,\xi_{i}]\cap[r,\xi_{j}]=[r,\xi_{\ell}]\cap[r,\xi_{k}]\). 3. A partial order \(\preceq\) on \(\mathbb{N}^{2}/\sim\) is inherited from the genealogical partial order \(<\) on \(\mathbf{T}\) and adding \((i,j)\prec(i,i)\) for \(i\neq j\). This means for distinct \(i,j,k,\ell\in\mathbb{N}\) we set \[(k,\ell)\prec(i,j)\quad\text{ if and only if }\quad[r,\xi_{k}]\cap[r,\xi_{\ell}] \subsetneq[r,\xi_{i}]\cap[r,\xi_{j}].\] **Construction 3.2**.: In the setting of Construction 3.1, we now sample \(\{U_{i}\}_{i\in\mathbb{N}}\)\(i.i.d.\) uniform random variables from \([0,1]\) independently from \(\{\xi_{i}\}_{i\in\mathbb{N}}\). 1. To determine the planarity function \(p\), we distinguish four cases. Recall that we decomposed \(\mu=\mu_{atoms}+\mu_{s}+\mu_{\ell}\) into the mass on the atoms, the skeleton (diffusely) and the leaves (diffusely). 1. If neither \(\xi_{i}<\xi_{j}\) nor \(\xi_{j}<\xi_{i}\), then \(p\) is determined by \(\psi_{2}(\xi_{i},\xi_{j})\), see Figure 9 for an illustration. More precisely, there is a unique plane tree with two leaves and one root, we need to check which of the two leaves is labelled \(i\) and which is labelled \(j\). We set \[p(i,j)=\begin{cases}1&\text{ if the left leaf is labelled $i$ and the right leaf is labelled $j$ in $\psi_{2}(\xi_{i},\xi_{j})$},\\ -1&\text{ otherwise.}\end{cases}\] 2. If \(\xi_{i}<\xi_{j}\) we distinguish two cases. 1. In the case that \(\xi_{i}\in supp\ \mu_{s}\), we set \[p(i,j)=\begin{cases}1&\text{ if }U_{i}<\lambda(\xi_{i}),\\ -1&\text{ if }U_{i}\geq\lambda(\xi_{i}).\end{cases}\] 2. In the case that \(\xi_{i}\in supp\ \mu_{atoms}\), we first need to identify the subtree of \(\xi_{i}\) in which \(\xi_{j}\) is located - see Definition 1.3 for our notion of subtree and Definition 2.5 for subtrees being left or right of each other. Denote by \(\mathbf{S}_{j}\) the unique subtree which contains \(\xi_{j}\). Let \[\beta_{\xi_{i}}(\mathbf{S}_{j})=\beta_{\xi_{i}}\left(\mu(\mathbf{S}_{j})+\sum _{\mathbf{S}^{\prime}<\mathbf{S}_{j}}\mu(\mathbf{S}^{\prime})\right).\] We then set \[p(i,j)=\begin{cases}1&\text{ if }U_{i}<\beta_{\xi_{i}}(\mathbf{S}_{j}),\\ -1&\text{ if }U_{i}\geq\beta_{\xi_{i}}(\mathbf{S}_{j}).\end{cases}\] 3. If \(\xi_{i}>\xi_{j}\), we do the same as above with reversed roles for \(i\) and \(j\). 4. If \(\xi_{i}=\xi_{j}\), we set \[p(i,j)=\begin{cases}1&\text{ if }U_{i}<U_{j},\\ -1&\text{ if }U_{i}\geq U_{j}.\end{cases}\] Note that the value of \(p((i,j),(k,\ell))\), where \((i,j),(k,\ell)\) are not leaves, is uniquely determined by the values of \(p\) on the leaves by imposing the consistency relations \((P1)\)-\((P4)\), see Lemma 2.7. In the following sections we will show these two propositions. **Proposition 3.3**.: _For every ergodic dendritic system \(\mathcal{D}\) there exists a deterministic decorated planar real tree \((\,\textbf{T},d,r,\mu,\psi,\lambda,B)\) such that the distribution of \(\mathcal{D}\) equals the one sampled from Constructions 3.1 and 3.2._ **Proposition 3.4**.: _The decorated planar real tree \((\,\textbf{T},d,r,\mu,\psi,\lambda,B)\) in Proposition 3.3 can be uniquely chosen as a decorated planar IP-tree up to measure and root preserving isometry of \((\,\textbf{T},d,r,\mu)\)._ We prove Theorem 1.10 from these propositions. Proof of Theorem 1.10.: Let \((T_{n},n\geq 1)\) be a tree-valued Markov chain with uniform backward dynamics. Suppose that \((T_{n},n\geq 1)\) is extremal in the sense of Section 2.3. Then by Proposition 2.13 it corresponds to an ergodic dendritic system \(\mathcal{D}\). By Propositions 3.3 and 3.4 there exists a decorated planar IP-tree \((\,\textbf{T},d,r,\mu,\psi,\lambda,B)\) - unique up to measure and root preserving isometry of \((\,\textbf{T},d,r,\mu)\) - such that \(\mathcal{D}\) has the same distribution as the dendritic system obtained through Constructions 3.1 and 3.2. By Proposition 2.13 the same is true for the Markov chain \((T_{n},n\geq 1)\). In the case that \((T_{n},n\geq 1)\) is not extremal, the desired statement follows from the decomposition into extremal distribution in Corollary 2.17 and the above considerations for extremal tree-growth chains. Figure 9: How \(\psi\) is used to determine \(p\) in the sampling construction. Given two (random) points \(\xi_{i}\) and \(\xi_{j}\) which do not satisfy \(\xi_{i}\prec\xi_{j}\) nor \(\xi_{j}\prec\xi_{i}\), \(\psi_{2}\) maps these points to the unique tree with two leaves. The two options to label the leaves correspond to \(p(i,j)=\pm 1\) respectively. ### Existence of a sampling representation In this section we will prove Proposition 3.3. A key step in this proof is the following Proposition 3.6. The proposition deals with the following construction. **Construction 3.5**.: Assume that we are given a weighted, rooted real tree \((\textbf{T},d,r,\mu)\) and a function \(F:(\textbf{T}\times[0,1])^{2}\times[0,1]\rightarrow\{\pm 1\}\). Let \(Leb\) be the Lebesgue measure on \([0,1]\). Assume that \(F\) satisfies the following consistency relations for \(\mu\)-almost every \(x,y,z\) and \(Leb\)-almost every \(u,v,w,a,b,c\). 1. \(F(x,u,y,v,a)=-F(y,v,x,u,a)\), 2. if \(F(x,u,y,v,a)=F(y,v,z,w,b)\) then also \(F(x,u,z,w,c)=F(x,u,y,v,a)\), 3. if \([r,x]\cap[r,y]\notin\{[r,x],[r,y]\}\) and \([r,y]\subsetneq[r,z]\) then \(F(x,u,y,v,a)=F(x,u,z,w,b)\), 4. if \([r,x]\subsetneq[r,y]\subsetneq[r,z]\) then \(F(x,u,y,v,a)=F(x,u,z,w,c)\). Then, in the context of Construction 3.1, sample \(i.i.d.\) uniform random variables \(\{U_{i},U_{ij}\}_{i,j\in\mathbb{N},i<j}\) from \([0,1]\). We define a planarity function \(p\) by \[p(i,j)=F(\xi_{i},U_{i},\xi_{j},U_{j},U_{i,j}),\] abusing notation to write \(i=(i,i)\). The value of \(p((i,j),(k,\ell))\), where \((i,j),(k,\ell)\) are not leaves, is uniquely determined by the values of \(p\) on the leaves by imposing the consistency relations \((P1)\)-\((P4)\), see Lemma 2.7. **Proposition 3.6**.: _Every ergodic, exchangeable dendritic system \(\mathcal{D}\) can be represented by a real tree \((\textbf{T},d,r)\), a probability measure \(\mu\) on **T** and a measurable function \(F:(\textbf{T}\times[0,1])^{2}\times[0,1]\rightarrow\{\pm 1\}\) in such a way that we have \(\textbf{T}=span(supp(\mu))\), \(F\) is satisfies the consistency relations stated in Construction 3.5 and \(\mathcal{D}\) is then equal in distribution to the dendritic system constructed through Constructions 3.1 and 3.5._ Its proof is analogous to the proof of [8, Theorem 8.2] but makes use of the full generality of a theorem of Gufler [14]. We will prove Proposition 3.6 in Section A. The consistency conditions on \(F\) correspond naturally to the consistency conditions of \(p\), see Section A.3. We also note that in Theorem [8, Theorem 8.2] similar consistency relations are imposed. Remark that it suffices to describe the dendritic system restricted to \([n]\) for every \(n\) to determine the distribution of the dendritic system uniquely. We prove Proposition 3.3 assuming that Proposition 3.6 is given. This means that this proposition provides the tree **T** and the measure \(\mu\), hence we can consider specific trees and measures. We first consider three special cases for **T** as a warm-up: when the mass is distributed diffusely on the skeleton, when it is supported diffusely on the leaves and when it has atoms. Note that these cases arise naturally as we can decompose \(\mu=\mu_{atoms}+\mu_{\ell}+\mu_{s}\) into measures that place mass only on atoms, diffusely on leaves and diffusely on the branches (the tree without the leaves) respectively. Before doing that, we will state an elementary lemma. **Lemma 3.7**.: _Assume \(X,Y,Z\) are independent random variables with laws \(\lambda_{1},\lambda_{2},\lambda_{3}\) respectively and \(f\) a measurable function. If \(f(X,Y)=f(X,Z)\)\(\mathbb{P}\)-a.s. then \(f(x,Y)\) is \(\mathbb{P}\)-a.s. constant for \(\lambda_{1}\)-\(a.e.\)\(x\). We then have \(f(X,Y)=g(X)\)\(\mathbb{P}\)-\(a.s.\) for some measurable function \(g\)._ Proof.: Note first that \(f(x,Y)=g(x,Z)\) for \(\lambda_{1}\)-a.e. \(x\), \(\mathbb{P}\)-almost surely. Fix \(x\) such that \(f(x,Y)=f(x,Z)\)\(\mathbb{P}\)-\(a.s.\) and let \(f_{x}(\cdot)=f(x,\cdot)\). Then \(f_{x}(Y)=f_{x}(Z)\) almost surely. Because \(Y\perp\!\!\!\perp Z\), \(Y\perp\!\!\!\perp f_{x}(Y)\) and hence \(f_{x}(Y)\perp\!\!\!\perp f_{x}(Y)\) which implies that \(f_{x}(Y)\) is \(\mathbb{P}\)-\(a.s.\) constant - define \(g(x)\) to be this constant, i.e. \(g(x)=\int f(x,y)\lambda_{2}(dy)\) which is measurable. This completes the proof because \(\lambda_{1}\big{(}\big{\{}x:f(x,Y)=f(x,Z)\big{\}}\big{)}=1\). **Lemma 3.8**.: _Assume that from Proposition 3.6 we get \((\textbf{T},d,r,\mu,F)\) so that \(\textbf{T}=[0,1],r=0,\mu=Leb\) with \(F\) being arbitrary. Then there exists a decorated planar real tree \((\textbf{T},d,r,\mu,\psi,\lambda,B)\) so that the dendritic systems constructed by Constructions 3.2 and 3.5 have the same distribution. Here, \(\psi\) is the only possible planar order for \([0,1]\) and \(\lambda\) is determined by \(F\). Because \(Leb\) has no atoms, \(B\) is trivial._ Note that in this case \(\mu\) is supported diffusely on branches. There is only one planar order for this tree in this case: given \(n\) distinct points and the root in \(\mathbf{T}\), \(\psi_{n}\) maps them to a discrete line segment of length \(n\). Here the sampling representation is very concise: sample \(n\) uniform \(iid\) points \(\xi_{1},\ldots,\xi_{n}\) on \([0,1]\). For each \(\xi_{i}\), sample a Bernoulli random variable with parameter \(\lambda(\xi_{i})\) independently and attach a leaf labelled \(i\) to the left of \(\xi_{i}\) - if the Bernoulli random variable equals 1 - or to the right of \(\xi_{i}\) otherwise. \(T_{n}\) is then the plane combinatorial tree spanned by \(r\) and the added leaves. This means that \(T_{n}\) is a binary tree consisting of a spine with leaves hanging off the spine left and right. Proof of Lemma 3.8.: Due to the special structure of the tree and the probability measure we are almost surely always in the case where \(\{\xi_{i}<\xi_{j}\}\) or \(\{\xi_{i}>\xi_{j}\}\). This leaves us to show that on the event \(\{\xi_{i}<\xi_{j}\}\) we have almost surely \[p(i,j)=\begin{cases}1&\text{if }U_{i}<\lambda(\xi_{i}),\\ -1&\text{if }U_{i}\geq\lambda(\xi_{i})\end{cases}\] for a suitable branch weight function \(\lambda\). Without loss of generality we always condition on the event \(\{\xi_{i}<\xi_{j}\}\) in the following. By Proposition 3.6 we have: \[p(i,j)=F(\xi_{i},U_{i},\xi_{j},U_{j},U_{ij}).\] We sample a second copy of \((\xi_{j},U_{j},U_{ij})\) independently of \((\xi_{i},U_{i},\xi_{j},U_{j},U_{ij})\) and denote it by \((\xi_{j}^{*},U_{j}^{*},U_{ij}^{*})\) and we restrict ourselves to the event \(\{\xi_{i}<\xi_{j}^{*}\}\). Due to the consistency properties of \(F\), more precisely \((F4)\), we almost surely have \[F(\xi_{i},U_{i},\xi_{j},U_{j},U_{ij})=F(\xi_{i},U_{i},\xi_{j}^{*},U_{j}^{*},U_ {ij}^{*}).\] Consider the family of regular conditional distribution \(\mathbb{P}(\cdot|\xi_{i}=x,\xi_{j}>\xi_{i},\xi_{j}^{*}>\xi_{i})\) under which for all \(x\in(0,1)\), both \(\xi_{j}\) and \(\xi_{j}^{*}\) are \(Uniform((x,1))\) distributed and \(U_{i},\xi_{j},U_{j},U_{ij},\xi_{j}^{*},U_{j}^{*},U_{ij}^{*}\) are independent of each other. This means we can apply Lemma 3.7 which tells us that there exists some measurable function \(V^{\prime}:[0,1]^{2}\to\{\pm 1\}\) such that \(F(\xi_{i},U_{i},\xi_{j},U_{j},U_{ij})=V^{\prime}(x,U_{i})\), \(\mathbb{P}(\cdot|\xi_{i}=x,\xi_{j}>\xi_{i},\xi_{j}^{*}>\xi_{i})\)-almost surely for \(Leb\)-almost every \(x\). Integrating over \(x\) and reversing the roles for \(i\) and \(j\) already gives us the following description of \(p\): \[p(i,j)=\begin{cases}V^{\prime}(\xi_{i},U_{i})&\text{if}\quad\xi_{i}<\xi_{j}, \\ V^{\prime}(\xi_{j},U_{j})&\text{if}\quad\xi_{j}<\xi_{i}.\end{cases}\] Let us now define the branch weight function \(\lambda\) by \(\lambda(x)=\mathbb{P}(V^{\prime}(x,U)=1)\) for \(x\in[0,1]\). We use this to define: \[\tilde{p}(i,j)=\begin{cases}V(\xi_{i},U_{i})&\text{if}\quad\xi_{i}<\xi_{j}, \\ V(\xi_{j},U_{j})&\text{if}\quad\xi_{j}<\xi_{i}.\end{cases}\] where \(V(x,u)=\mathds{1}_{\lambda(x)<u}-\mathds{1}_{\lambda(x)>u}\). If we manage to show \[\{p(i,j):1\leq i\neq j\leq n\}\stackrel{{ d}}{{=}}\{\tilde{p}(i,j ):1\leq i\neq j\leq n\} \tag{3.1}\] for every \(n\in\mathbb{N}\) then we have completed the proof the lemma. To this end, fix \(2\leq n\in\mathbb{N}\). Denote \(\mathbb{P}(\cdot|\xi_{1}<\ldots<\xi_{n},\xi_{1},\ldots,\xi_{n})\) by \(\mathbb{P}^{\xi}\) and let \(a_{1},\ldots,a_{n-1}\in\{\pm 1\}^{n-1}\). To show (3.1), it suffices to show that we \(\mathbb{P}^{\xi}\)-a.s. have \[\mathbb{P}^{\xi}\big{(}\forall i<n:\ p(i,i+1)=a_{i}\big{)}=\mathbb{P}^{\xi} \big{(}\forall i<n:\ \tilde{p}(i,i+1)=a_{i}\big{)}. \tag{3.2}\] Indeed, assume we are given \(b=\{b_{ij}\}_{i,j\leq n,i\neq j}\in\{\pm 1\}^{n(n-1)}\) with \(b_{ij}=-b_{ji}\) and \(b_{ij}=b_{in}\) for \(i<n\). If we are given \(b\) that does not satisfy these assumptions, then \[\mathbb{P}^{\xi}\big{(}\forall i,j\leq n,i\neq j:\ p(i,j)=b_{ij}\big{)}=0,\] due to \(b\) violating the consistency properties required in \((P1)\) or \((P4)\). The same holds for \(\tilde{p}\). Now if \(b\) satisfies the assumptions stated above, we then have \[\mathbb{P}^{\xi}\big{(}\forall i,j\leq n,i\neq j:\ p(i,j)=b_{ij}\big{)} \stackrel{{(P1)}}{{=}}\mathbb{P}^{\xi}\big{(}\forall i,j\leq n,i<j: \ p(i,j)=b_{ij}\big{)}\stackrel{{(P4)}}{{=}}\mathbb{P}^{\xi} \big{(}\forall i<n:\ p(i,i+1)=b_{in}\big{)},\] where both equalities hold \(\mathbb{P}\)-almost surely. We can apply \((P4)\) here because for \(i+1<j\) we have \((j,(i+1))\prec(j,j)\) due to \(\xi_{i}<\xi_{i+1}<\xi_{j}\), hence \(p(i,i+1)=p(i,(j,i+1))=p(i,j)\). The same is true for \(\tilde{p}\), which means that it suffices to only check (3.2). Consider now (3.2), due to our definition of \(\lambda\) and the independence of \(U_{1},\dots,U_{n}\), we \(\mathbb{P}\)-almost surely have \[\mathbb{P}^{\xi}\big{(}\forall i<n:\ p(i,n)=a_{i}\big{)} =\mathbb{P}^{\xi}\big{(}\forall i<n:V^{\prime}(\xi_{i},U_{i})=a_{ i}\big{)}\] \[=\prod_{i=1}^{n-1}\mathbb{P}^{\xi}\big{(}V^{\prime}(\xi_{i},U_{i} )=a_{i}\big{)}\] \[=\prod_{i=1}^{n-1}\big{(}\lambda(\xi_{i})\mathds{1}_{a_{i}=1}+(1- \lambda(\xi_{i}))\mathds{1}_{a_{i}=-1}\big{)}\] \[=\prod_{i=1}^{n-1}\mathbb{P}^{\xi}\big{(}V(\xi_{i},U_{i})=a_{i} \big{)}\] \[=\mathbb{P}^{\xi}\big{(}\forall i<n:\ \tilde{p}(i,n)=a_{i}\big{)}.\] This implies (3.1) which completes the proof of the lemma. As the second warm-up case, we consider the case where \(\mathbf{T}\) is arbitrary but \(\mu\) is supported only on the leaves of \(\mathbf{T}\). In this case \(\lambda\) and \(B\) are trivial, but we do need to define \(\psi\). **Lemma 3.9**.: _Assume that from Proposition 3.6 we get \((\textbf{T},d,r,\mu,F)\) so that \(\mu\) is supported diffusely on the leaves of **T**. Then there exists a decorated planar real tree \((\textbf{T},d,r,\mu,\psi,\lambda,B)\) so that the dendritic systems constructed by Constructions 3.2 and 3.5 have the same distribution. \(\lambda\) and \(B\) are trivial._ Proof.: Our main concern is to define the planar order \(\psi\), recall the definition from Definition 2.2. More concretely, we need to define \(\psi_{n}(x_{1},\dots,x_{n})\) for any totally unordered \(x=(x_{1},\dots,x_{n})\in\mathbf{T}^{n}\). Note that we need to do this for all \((x_{1},\dots,x_{n})\) and not just on the support of \(\mu\). Hence we fix totally unordered \((x_{1},\dots,x_{n})\in\mathbf{T}^{n}\). We let \(\mathbf{S}_{i}\) be the subtree corresponding to \(x_{i}\) in the following sense: let \(\overline{x_{i}}\) be the most recent branchpoint in \(span(r,x_{1},\dots,x_{n})\). Let \(y_{i}\) be the middle point of the segment \([\overline{x_{i}},x_{i}]_{\mathbf{T}}\). We set \(\mathbf{S}_{i}=F_{\mathbf{T}}(y_{i})\), the fringe subtree of \(y_{i}\), see Figure 10 for an illustration. The reason for using \(F_{\mathbf{T}}(y_{i})\) instead of \(F_{\mathbf{T}}(x_{i})\) is that we always have \(\mu(F_{\mathbf{T}}(y_{i}))>0\) but not necessarily \(\mu(F_{\mathbf{T}}(x_{i}))>0\). This is true because we have \(\mathbf{T}=span(supp(\mu))\) by Proposition 3.6. Note that if \(x_{i}\) is a leaf, then \(\mu(F_{\mathbf{T}}(x_{i}))=\mu(\{x_{i}\})=0\) because we assumed \(\mu\) to be diffuse. Define now \(\mathbb{P}^{x}=\bigotimes_{i=1}^{n}\frac{1}{\mu(\mathbf{S}_{i})}\mu|_{ \mathbf{S}_{i}}\). Sampling \((\xi_{1},\dots,\xi_{n})\) from \(\mathbb{P}^{x}\) is equivalent to sampling \((\xi_{1},\dots,\xi_{n})\) from \(\mathbb{P}\) and conditioning on \(\{\xi_{i}\in\mathbf{S}_{i}\}\) for every \(i\). Using \((\xi_{1},\dots,\xi_{n})\) sampled from \(\mathbb{P}^{x}\) in Construction 3.1 and 3.5 (instead of \(i.i.d.\)\(\mu\) samples) yields a random dendritic system \(\mathcal{D}_{n}=([n],\sim_{n},\preceq_{n},p_{n})\) with leaves labelled by \([n]\). We claim that \(\mathcal{D}_{n}\) is in fact almost surely constant. If this is the case, then \(\mathcal{D}_{n}\) will correspond to a non-random tree \(T_{n}\) by Lemma 2.8 which is a combinatorial, leaf-labelled and plane tree. We then set \[\psi_{n}(x_{1},\dots,x_{n})=T_{n}.\] Next we need to prove the claim than \(\mathcal{D}_{n}\) is almost surely constant. Let \((\xi_{i},U_{i},U_{ij};i,j\leq n)\) be the random variables involved in the construction of \(\mathcal{D}_{n}\), let \((\xi_{i}^{*},U_{i}^{*},U_{ij}^{*};i,j\leq n)\) be an independent copy with the same distribution, extending the probability space. For every \(i\neq j\) we have \[F(\xi_{i},U_{i},\xi_{j},U_{j},U_{ij})=F(\xi_{i}^{*},U_{i}^{*},\xi_{j}^{*},U_{j }^{*},U_{ij}^{*})\] due to the consistency properties \((F2)\) and \((F3)\) of Proposition 3.6 and the fact that \(\xi_{i},\xi_{i}^{*}\in\mathbf{S}_{i}\) whereas \(\xi_{j},\xi_{j}^{*}\in\mathbf{S}_{j}\). Informally, sampling once from \(\mathbf{S}_{i}\) and \(\mathbf{S}_{j}\) already determines which subtree is to the left of the other subtree, hence the second sample must agree with the left-right prescription of the first sample. More formally, \((\xi_{i},U_{i},U_{ij};i,j\leq n)\) and \((\xi_{i}^{*},U_{i}^{*},U_{ij}^{*};i,j\leq n)\) are independent and hence \(F\) restricted to \(\mathbf{S}_{i}\times[0,1]\times\mathbf{S}_{j}\times[0,1]\times[0,1]\) is \(\mathbb{P}^{x}\)-almost surely constant for any \(i\neq j\) by Lemma 3.7. This proves the claim that \(\mathcal{D}_{n}\) is constant and thus yields the map \(\psi_{n}\). By construction we also have the property that \(\psi_{n}(x_{1},\dots,x_{n})\) as non-plane combinatorial tree is the combinatorial tree corresponding to \(\operatorname{span}(r,x_{1},\dots,x_{n})\) What remains to be shown is that \(\psi_{n}(x_{1},\ldots,x_{n})\) embeds into \(\psi_{n+1}(x_{1},\ldots,x_{n},y)\) for any \(y\in\mathbf{T}\) such that \((x_{1},\ldots,x_{n},y)\) is still totally unordered. Let \((\mathbf{S}_{1},\ldots,\mathbf{S}_{n})\) denote the trees used in the construction of \(\psi_{n}(x_{1},\ldots,x_{n})\) and \((\mathbf{S}^{\prime}_{1},\ldots,\mathbf{S}^{\prime}_{n},\mathbf{S}_{y})\) the trees used in the construction of \(\psi_{n+1}(x_{1},\ldots,x_{n},y)\). We need to observe that in general we have \(\mathbf{S}_{i}\neq\mathbf{S}^{\prime}_{i}\). This is because either \(y\in\mathbf{S}_{i}\) for some \(i\) or because including \(y\) introduces new branchpoints in \(\operatorname{span}(r,x_{1},\ldots,x_{n},y)\) which change \(\overline{x_{i}}\) and hence \(\mathbf{S}_{i}\). Nevertheless, we always have \[\mathbf{S}^{\prime}_{i}\subseteq\mathbf{S}_{i}\quad\forall i\leq n.\] Recall that in the first part of the proof we sampled \(\xi_{i}\) from \(\mu_{i}\), i.e. from \(\mu\) conditioned on \(\xi_{i}\in\mathbf{S}_{i}\) to determine \(\psi_{n}(x_{1},\ldots,x_{n})\). It is easy to see that we can a posteriori replace \(\mathbf{S}_{i}\) with \(\mathbf{S}^{\prime}_{i}\) in the construction and still obtain the same \(\psi_{n}(x_{1},\ldots,x_{n})\). This is because we concluded that \(F(\xi_{i},U_{i},\xi_{j},U_{j},U_{ij})\) is almost surely constant for every \(i,j\leq n\) so we can condition \(\xi_{i},i\leq n\) to be in the smaller sets \(\mathbf{S}^{\prime}_{i}\) for every \(i\leq n\) and deduce the required constancy on these sets. There is a distinct advantage to using \((\mathbf{S}^{\prime}_{i},i\leq n)\) instead of \((\mathbf{S}_{i},i\leq n)\) as these are the subsets of \(\mathbf{T}\) used in the construction of \(\psi_{n+1}(x_{1},\ldots,x_{n},y)\). This then yields the embedding of \(\psi_{n}(x_{1},\ldots,x_{n})\) into \(\psi_{n+1}(x_{1},\ldots,x_{n},y)\). There is a unique edge or branchpoint in \(\psi_{n}(x_{1},\ldots,x_{n})\) to which we attach the leaf corresponding to \(y\), determined by where \(\overline{y}\) is located in \(\operatorname{span}(r,x_{1},\ldots,x_{n})\) because Definition 2.2 requires that \(\psi_{n+1}(x_{1},\ldots,x_{n},y)\) without the planar order is the combinatorial tree corresponding to \(\operatorname{span}(r,x_{1},\ldots,x_{n},y)\). There also is a determined way for the planar order of the new leaf corresponding to \(y\) which is compatible with the planar order of \(\psi_{n}(x_{1},\ldots,x_{n})\) because we used the same sets \(\mathbf{S}_{i},i\leq n\), respectively identically distributed random variables, to construct the planar orders. To conclude the proof of this lemma, note that the choice of \(\lambda\) is trivial. Indeed, recall that we view \(\lambda\) as an element of \(L^{1}(\mu_{s})\). Here \(\mu_{s}=0\), so \(L^{1}(\mu_{s})\) contains only a single element. Similarly, we need not define \(B\) as \(\mu\) does not have any atoms, hence \(B\) is trivial. We have to check that the distribution of the dendritic system constructed via Constructions 3.1 and 3.2 using \(\psi\) is the same as the distribution of the dendritic system in Proposition 3.6. By construction of \(\psi\), this is the case on the event where \((\xi_{i},i\geq 1)\) is a totally unordered sequence in \(\mathbf{T}\). Because \(\mu\) is diffusely supported on the leaves of \(\mathbf{T}\), this happens with probability 1. This concludes the proof. It is important to note that we used the fact that \(\mu\) is supported diffusely on the leaves only in the conclusion of the proof but not in the construction of \(\psi\). Hence we can also repeat this construction in the general setting of Proposition 3.6. **Corollary 3.10**.: _In the setting of Proposition 3.6, \(F\) induces a deterministic planar order \(\psi\) for **T**. In the context of Construction 3.1, consider the event that \((\xi_{i};i\in I)\) are totally unordered for some finite set \(I\subset\mathbb{N}\). Then on this event the planarity function \(p\) constructed in Construction 3.2 (which uses this \(\psi\)) restricted to \(I\) has the same distribution as the planarity function of Construction 3.5 restricted to \(I\)._ As the third warm-up case, we consider the case where \(\mathbf{T}\) is a single point. Note that in that case \(\lambda\) and \(B\) are necessarily trivial and there is again only one choice for the planar order \(\psi\). **Lemma 3.11**.: _Assume that from Proposition 3.6 we get \((\textbf{T},d,r,\mu,F)\) so that \(\textbf{T}=\{0\}\), \(r=0\) and \(\mu=\delta_{0}\). Then there exists a decorated planar real tree \((\textbf{T},d,r,\mu,\psi,\lambda,B)\) so that the dendritic systems constructed by Figure 10: The subsets of \(\mathbf{T}\) involved in the proof of Lemma 3.9. _Constructions 3.2 and 3.5 have the same distribution. \(\psi,\lambda\) and \(B\) need not be specified due to the special structure of **T**._ Proof.: The only tree with \(n\) leaves that can arise in this case is the tree where all \(n\) leaves are attached directly to the root. This tree also has a unique planar order. We are left with distributing the leaf labels, but due to exchangeability they form a uniform permutation on \(\{1,\ldots,n\}\). This means we have to assign them in a consistent way which can be realised by \(p(i,j)=\mathds{1}_{U_{i}<U_{j}}-\mathds{1}_{U_{j}>U_{i}}\). Finally, we will prove Proposition 3.3 in the general case by combining the ideas of the preceding lemmas. Proof of Proposition 3.3.: Let \((\mathbf{T},d,r,\mu,F)\) as in Proposition 3.6. The proof consists of two steps: constructing the planar order \(\psi\), the branch weight function \(\lambda\), the branchpoint weight function \(B\), and then checking that the distribution of the sampled dendritic system agrees with the distribution of Proposition 3.6. **Step \(1\): constructing \((\psi,\lambda,B)\).** First, we define \(\psi_{n}(x_{1},\ldots,x_{n})\) for any totally unordered \((x_{1},\ldots,x_{n})\in\mathbf{T}\) as in the proof of Lemma 3.9 and Corollary 3.10. This is then extended to arbitrary \((x_{1},\ldots,x_{n})\) by part 4 of Definition 2.2. Recall the definition of a branch weight function \(\lambda\) from the introduction. Hence, fix \(x\in supp(\mu_{s})\), i.e. \(x\) is located on the diffuse mass on the branches. By our convention, we can assume \(\deg(x)=2\). Define \(\mu^{x}=\frac{1}{\mu(F_{\mathbf{T}}(x))}\mu|_{F_{\mathbf{T}}(x)}\), \(\mu\) restricted to \(F_{\mathbf{T}}(x)\) and normalised. Let \(U_{1},U_{2},U_{3}\) be independent, uniform \([0,1]\) random variables and let \(\xi\) be an independent \(\mu^{x}\)-distributed random variable. \(\xi\) can also be seen as a \(\mu\)-distributed random variable conditioned on \(\xi\in F_{\mathbf{T}}(x)\). We can now define \(\lambda(x)\), \[\lambda(x)=\mathbb{P}\big{(}F(x,U_{1},\xi,U_{2},U_{3})=1\big{)}. \tag{3.3}\] Note that this defines a measurable function \(\lambda\) because \(F\) is measurable. On an informal level, \(\lambda(x)\) is the probability that a leaf attached to \(x\) will be to the left of the subtree \(F_{\mathbf{T}}(x)\) of \(x\). Recall the definition of branchpoint weight function \(B\) from the introduction. Fix an atom \(a\) of \(\mu\). We need to define \(\beta_{a}:[0,1]\to[0,1]\) so that \(\beta_{a}\) is non-decreasing, right-continuous and the cardinality of the range of \(\beta_{a}\) is at most \(\deg a\in\mathbb{N}\cup\{\infty\}\). Also, we want \(\beta_{a}\) to be piece-wise constant in the following sense: enumerate the connected components of \(F_{\mathbf{T}}(a)\backslash\{a\}\) by \(\mathbf{S}_{1},\mathbf{S}_{2},\ldots\) and let \(c_{i}=\sum\mu(\mathbf{S}_{j})/\sum_{k\geq 1}\mu(\mathbf{S}_{k})\) where the first sum ranges over all \(j\) such that \(\mathbf{S}_{j}\) is left of \(\mathbf{S}_{i}\) (see Definition 2.5) and \(\mathbf{S}_{j}\neq\mathbf{S}_{i}\). We then impose that \(\beta_{a}\) is constant on \([c_{i},\inf_{c_{j}>c_{i}}c_{j})\) for every \(i\). Fix \(i\) and consider \(\mu^{i}=\frac{1}{\mu(\mathbf{S}_{i})}\mu|_{\mathbf{S}_{i}}\), i.e. \(\mu\) restricted to \(\mathbf{S}_{i}\) and normalised. Let \(U_{1},U_{2},U_{3}\) be independent, uniform \([0,1]\) random variables and let \(\xi^{i}\) be an independent \(\mu^{i}\)-distributed random variable. \(\xi\) can also be seen as \(\mu\)-distributed random variable conditioned on \(\xi^{i}\in\mathbf{S}_{i}\). We can now define \[b_{i}=\mathbb{P}\big{(}F(a,U_{1},\xi^{i},U_{2},U_{3})=1\big{)}. \tag{3.4}\] On an informal level, this corresponds to the probability that a leaf attached to \(a\) is left of the subtree \(\mathbf{S}_{i}\). We use this to define \(\beta_{a}\). On the interval \([c_{i},\inf_{c_{j}>c_{i}}c_{j})\), we set \(\beta_{a}\) to be \(b_{i}\), \[\beta_{a}\big{|}_{[c_{i},\inf_{c_{j}>c_{i}}c_{j})}=b_{i}.\] For completeness, we set \(\beta_{a}(1)=\sup_{j\in\mathbb{N}}b_{j}\) and \(\beta_{a}(x)=\lim_{z\downarrow x}\beta_{a}(z)\) for every \(x\in[0,1)\) where \(\beta_{a}(x)\) has not been defined yet. All the claimed properties of \(\beta_{a}\) (as described in Construction 1.7) follow from this construction, in particular note that \(\beta_{a}\) is non-decreasing due to the consistency property (\(F2\)). **Step \(2\): equivalence in distribution.** We need to check that the dendritic system \(\mathcal{D}^{*}=(\mathbb{N},\sim^{*},\prec^{*},p^{*})\) constructed by Constructions 3.1 and 3.2 using \((\psi,\lambda,B)\) constructed in steps \(1\)-\(3\) has the same distribution as the dendritic system \(\mathcal{D}=(\mathbb{N},\sim,\prec,p)\) constructed by Constructions 3.1 and 3.5. We consider them under a partial coupling which is obtained by using the same sequence \((\xi_{i},i\leq 1)\) of \(i.i.d.\)\(\mu\)-random variables for Construction 3.1. This means that \(\sim=\sim^{*}\) and \(\preceq=\preceq^{*}\)\(\mathbb{P}\)-almost surely. Condition on \((\xi_{i},i\leq n)\), let \(\mathbb{P}^{\xi}\) be a regular conditional probability of \(\mathbb{P}\) given \((\xi_{i},i\geq n)\). It now suffices to check that the restrictions of \(\mathcal{D}\) and \(\mathcal{D}^{*}\) to the leaves labelled by \([n]\) have the same distribution for all \(n\in\mathbb{N}\). Due to the coupling, it suffices to show that \(\{p(i,j):i\neq j\in[n]\}\) and \(\{p^{*}(i,j):i\neq j\in[n]\}\) have the same distribution under \(\mathbb{P}^{\xi}\), \(\mathbb{P}\)-almost surely. We partition \([n]\): Choose a set \(I_{1}\subset[n]\) such that \((\xi_{i},i\in I_{1})\) is totally unordered and such that \(\mathrm{span}(r,\xi_{i},i\in I_{1})=\mathrm{span}(r,\xi_{i},i\in[n])\). Next, let \[I_{2}=\left\{i\in[n]:\xi_{i}\in\mathrm{supp}(\mu_{s})\quad\text{and}\quad i \notin I_{1}\right\}.\] Lastly, for every atom \(a\) of \(\mu\), we let \[I_{3}^{a}=\left\{i\in[n]:\xi_{i}=a\quad\text{and}\quad i\notin I_{1}\right\}.\] By construction, \(I_{1},I_{2}\) and \((I_{3}^{a},a\) atom) are disjoint and \(I_{1}\cup I_{2}\cup\bigcup_{a}I_{3}^{a}=[n]\). See Figure 11 for an illustration of these sets. We show \[\{p(i,j):i,j\in[n];i\neq j\}\stackrel{{ d}}{{=}}\{p^{*}(i,j):i, j\in[n];i\neq j\}\quad\mathbb{P}-a.s. \tag{3.5}\] in three steps. First with \([n]\) replaced by \(I_{1}\), then by \(I_{1}\cup I_{2}\) and lastly for \([n]\) itself. We do these in three steps - _a,b,c_ - for \(I_{1}\), \(I_{1}\cup I_{2}\) and \([n]\) respectively. _Step \(2a\)._ By construction, \(\{\xi_{i},i\in I_{1}\}\) is a totally unordered set and hence \[\{p(i,j):i,j\in I_{1};i\neq j\}\stackrel{{ d}}{{=}}\{p^{*}(i,j):i,j\in I_{1};i\neq j\}\quad\mathbb{P}-a.s. \tag{3.6}\] by Corollary 3.10. _Step \(2b\)._ Consider now \(I_{2}\). For \(i\in I_{2}\) we let \(s(i)=\min\{i^{\prime}:i^{\prime}\in I_{1},\xi_{i}<\xi_{i^{\prime}}\}\) - by construction of \(I_{2}\) the set \(\{i^{\prime}:i^{\prime}\in I_{1},\xi_{i}<\xi_{i^{\prime}}\}\) is never empty for every \(i\in I_{2}\) and thus \(s(i)\) is well-defined. The idea behind considering \(\xi_{s(i)}\) is similar to the proof of Lemma 3.8, it suffices to consider only one other leaf to determine the orientation of leaf \(i\). Here we use \(s(i)\), in the lemma we used the smallest leaf above. Let \(i,j\in I_{2}\) with \(i\neq j\). We then have 3 cases: either \(\xi_{i}<\xi_{j}\), \(\xi_{j}<\xi_{i}\) or \(\xi_{i}\nleq\xi_{j},\xi_{j}\nleq\xi_{i}\). If \(\xi_{i}<\xi_{j}\), then \((i,j)=(i,s(i))\prec(j,s(i))\). By \((P4)\) we then almost surely have \[p(i,j)=p(i,(j,s(i))=p(i,s(i)). \tag{3.7}\] Similarly, if \(\xi_{j}<\xi_{i}\) we almost surely have \(p(i,j)=p(s(j),j)=-p(j,s(j))\) where the second equality follows from \((P1)\). If \(\xi_{i}\nleq\xi_{j}\) and \(\xi_{j}\nleq\xi_{i}\) we have \((i,j)=(s(i),s(j))\prec(i,s(i))\) and \((i,j)\prec(j,s(j))\). Hence we almost surely have \[p(i,j)=p(i,s(j))=p(s(i),s(j)), \tag{3.8}\] again by \((P4)\). The same reasoning works for \(p^{*}\) as well, so that we have analogues of (3.7) and (3.8) for \(p^{*}\) as well. This implies that to show \[\{p(i,j):i,j\in I_{1}\cup I_{2};i\neq j\}\stackrel{{ d}}{{=}}\{p^{*}(i,j):i,j\in I _{1}\cup I_{2};i\neq j\}\quad\mathbb{P}-a.s., \tag{3.9}\] Figure 11: The three sets \(I_{1},I_{2}\) and \(I_{3}\). The big, blue circles signify atoms of \(\mathbf{T}\). it suffices to show that \[\{p(i,j):i,j\in I_{1}:i\neq j\text{ or }i\in I_{2},j=s(i)\}\stackrel{{ d}}{{=}}\{p^{*}(i,j):i,j\in I_{1}:i\neq j \text{ or }i\in I_{2},j=s(i)\}\quad\mathbb{P}-a.s.. \tag{3.10}\] Consider now \(i\in I_{2}\) and \(j=s(i)\in I_{1}\). Let \(\pi(\xi_{j})\) be either the branchpoint in \(\operatorname{span}(r,\xi_{\ell};\ell\in I_{1})\) closest to \(\xi_{j}\) or the closest \(\xi_{k},k\in[n]\) - whichever is closer. Denote by \(\tilde{\xi_{j}}\) the midpoint of the interval \([\pi(\xi_{j}),\xi_{j}]\). Let \(\mathbf{S}_{j}=F_{\mathbf{T}}(\tilde{\xi_{j}})\), by construction \(\xi_{j}\in\mathbf{S}_{j}\) and \(\xi_{i}\notin\mathbf{S}_{j}\). Now let \(\zeta_{j},\zeta_{j}^{\prime}\) be sampled from \(\frac{1}{\mu(\mathbf{S}_{j})}\mu|\mathbf{S}_{j}\) - we assume that they are all independent under \(\mathbb{P}^{\xi}\) and independent of all uniform variables. Here we note again that \(\mu(\mathbf{S}_{j})>0\), but not necessarily \(\mu(F_{\mathbf{T}}(\xi_{j}))>0\). Let \(V_{j},V_{j}^{\prime},V_{ij},V_{ij}^{\prime}\) be additional uniform random variables. We then \(\mathbb{P}^{\xi}\)-almost surely have \[p(i,j)=F(\xi_{i},U_{i},\xi_{j},U_{j},U_{ij})=F(\xi_{i},U_{i},\zeta_{j},V_{j},V _{ij})=F(\xi_{i},U_{i},\zeta_{j}^{\prime},V_{j}^{\prime},V_{ij}^{\prime}).\] The first equality is how \(p(i,j)\) is constructed, the other two inequalities follow from \(\xi_{i},\zeta_{i},\zeta_{i}^{\prime}\in\mathbf{S}_{i}\) and the consistency properties \((F3)\) and \((F4)\). By Lemma 3.7, this means that there is a function \(G_{i}\) such that \[p(i,s(i))=G_{i}(\xi_{i},U_{i}). \tag{3.11}\] Note that this is how we have defined \(\lambda(\xi_{i})\) in (3.3), \(\lambda(\xi_{i})=\mathbb{P}^{\xi}(G_{i}(\xi_{i},U_{i})=1)\). Now let \(\gamma=(\gamma_{ij})_{i,j\in I_{1}\cup I_{2}}\in\{\pm 1\}^{I_{1}\cup I_{2}}\) be in such a way that \[\mathbb{P}^{\xi}\big{(}\forall i,j\in I_{1}\cup I_{2}:i\neq j:p(i,j)=\gamma_{ ij}\big{)}>0.\] Informally, this means we only consider \(c\) which does not break the consistency relations of \(p\) in an obvious way, for example by not satisfying \(\gamma_{ij}=-\gamma_{ji}\). Any such \(\gamma\) would have probability \(0\), both for the above expression and the same expression with \(p\) replaced by \(p^{*}\). Using the observations we have made so far, i.e. we only need to show (3.10) for (3.9) and (3.11), we get \[\mathbb{P}^{\xi}\big{(}\forall i,j\in I_{1}\cup I_{2};i\neq j:p(i, j)=\gamma_{ij}\big{)}\] \[\quad=\mathbb{P}^{\xi}\big{(}\forall i\in I_{2}:p(i,s(i))=\gamma_ {i,s(i)};\forall i,j\in I_{1};i\neq j:p(i,j)=\gamma_{ij}\big{)}\] \[\quad=\mathbb{P}^{\xi}\big{(}\forall i\in I_{2}:p(i,s(i))=\gamma_ {i,s(i)}\big{|}\forall i,j\in I_{1};i\neq j:p(i,j)=\gamma_{ij}\big{)}\mathbb{ P}^{\xi}\big{(}\forall i,j\in I_{1};i\neq j:p(i,j)=\gamma_{ij}\big{)}\] \[\quad=\Big{(}\prod_{i\in I_{2}}\mathbb{P}^{\xi}\big{(}G_{i}(\xi_ {i},U_{i})=\gamma_{i,s(i)}\big{)}\Big{)}\mathbb{P}^{\xi}\big{(}\forall i,j\in I _{1};i\neq j:p(i,j)=\gamma_{ij}\big{)}\] \[\quad=(*).\] In the last step, we have used the independence of the \(U_{i},i\in I_{2}\). As we have noted above, the distribution of \(G_{i,s(i)}(\xi_{i},U_{i})\) is the same as the distribution of \(p^{*}(i,j)\). Combining this with (3.6), we have \[(*)=\prod_{i\in I_{2}}\mathbb{P}^{\xi}\big{(}p^{*}(i,s(i))=\gamma_{i,s(i)} \big{)}\mathbb{P}^{\xi}\big{(}\forall i,j\in I_{1};i\neq j:p^{*}(i,j)=\gamma_{ ij}\big{)}=(**).\] We reduce this expression with the same reasoning as above, this time for \(p^{*}\) instead of \(p\), \[(**) =\mathbb{P}^{\xi}\big{(}\forall i\in I_{2}:p^{*}(i,s(i))=\gamma_{ ij,s(i)}\big{|}\forall i,j\in I_{1};i\neq j:p^{*}(i,j)=\gamma_{ij}\big{)} \mathbb{P}^{\xi}\big{(}\forall i,j\in I_{1};i\neq j:p^{*}(i,j)=\gamma_{ij} \big{)}\] \[=\mathbb{P}^{\xi}\big{(}\forall i\in I_{2}:p^{*}(i,s(i))=\gamma_{ ij};\forall i,j\in I_{1};i\neq j:p^{*}(i,j)=\gamma_{ij}\big{)}\] \[=\mathbb{P}^{\xi}\big{(}\forall i,j\in I_{1}\cup I_{2};i\neq j:p^ {*}(i,j)=\gamma_{ij}\big{)}.\] Because \(\gamma\) was arbitrary, we have shown (3.9). _Step \(2c\)._ Lastly, we want to show (3.5), given (3.9). Fix an atom \(a\) such that \(I_{3}^{a}\neq\emptyset\). Note there can be the case where there is \(i\in I_{1}\) with \(\xi_{i}=a\). To deal with this case and to include this index, we define \(\tilde{I}_{3}^{a}=\{i:\xi_{i}=a\}\). Further, there are \(d(a)\geq 1\) and \(i_{1}^{a},\ldots,i_{d(a)}^{a}\in I_{1}\) such that \(a\leq\xi_{i_{j}^{a}}\) for \(1\leq j\leq d(a)\) and such that \(\operatorname{span}(r,\xi_{i};i\in[n])\) and \(\operatorname{span}(r,\xi_{i^{a}_{1}},\ldots,\xi_{i^{a}_{d(a)}}^{a})\) are the same in a small neighbourhood of \(a\). This means that we choose as many of the leaves in \(\operatorname{span}(r,\xi_{i};i\in I_{1})\) that sit above \(a\) as needed to realise the degree of \(a\) in \(\operatorname{span}(r,\xi_{i};i\in[n])\). The case of \(d(a)=1\) can also happen if there is \(i\in I_{1}\) with \(\xi_{i}=a\). We can choose them in such a way that \(p(i^{a}_{k},i^{a}_{k+1})=1\) for \(k\leq d(a)-1\), i.e. they are indexed from left to right in an increasing manner. Let now be \(i\in\tilde{I}^{a}_{3}\) and \(j\in[n]\), i.e. a leaf which is attached to \(a\) and another leaf. We consider \(p(i,j)\). There are three cases, \(\xi_{j}<a\), or \(a<\xi_{j}\), or \(a\not<\xi_{j}\) and \(\xi_{j}\not<a\). We show that in all three cases we have \[p(i,j)=p(i^{a}_{k},j), \tag{3.12}\] \(\mathbb{P}^{\xi}\)-almost surely for an appropriate choice of \(1\leq k\leq d(a)\). If \(\xi_{j}<a\) or if \(a\not<\xi_{j},\xi_{j}\not<a\), we choose \(i^{a}_{k}=i^{a}_{1}\) - (3.12) then holds by the consistency property \((P4)\) of \(p\). If \(a<\xi_{j}\), then there exists some \(k\) such that \(\xi_{j}\) and \(\xi_{i^{a}_{k}}\) are in the same subtree of \(a\). (3.12) again holds by \((P4)\). This means that for an appropriate \(\gamma=(\gamma_{ij})_{i\neq j\in[n]}\in\{\pm 1\}^{n(n-1)}\) that does not violate the consistency conditions we have \[\mathbb{P}^{\xi}\big{(}\forall i,j\in[n],i\neq j:p(i,j)=\gamma_{ ij}\big{)}=\mathbb{P}^{\xi}\big{(}\forall i,j\in I_{1}\cup I_{2},i\neq j:p(i,j )=\gamma_{ij};\\ \forall a\forall i\in\tilde{I}^{a}_{3}\forall k\leq d(a):p(i,i^{ a}_{k})=\gamma_{i,i^{a}_{k}};\forall a\forall i,j\in\tilde{I}^{a}_{3},i\neq j :p(i,j)=\gamma_{ij}\big{)}. \tag{3.13}\] The same statement holds if we replace \(p\) by \(p^{*}\). Note that in the case where there is \(i\in I_{1}\) with \(\xi_{i}=a\) there is some redundancy in the above statement. There we have \(d(a)=1\) and \(i^{a}_{1}\in\tilde{I}^{a}_{3}\). Consider the case where \(a<\xi_{i^{a}_{k}}\) for all \(k\leq d(a)\) (this is true for all \(k\) unless \(d(a)=1\) and \(\xi_{i^{a}_{1}}=a\)). Then every \(k\) corresponds to a subtree \(\mathbf{S}_{k}\) of \(a\), see Definition 1.3 for the relevant definition. We necessarily have \(\mu(\mathbf{S}_{k})>0\) for every \(1\leq k\leq d(a)\). Let \(\zeta_{k},\zeta^{\prime}_{k}\) be sampled from \(\frac{1}{\mu(S_{k})}\mu|_{S_{k}}\) and let \(V_{k},V^{\prime}_{k},V_{ik},V^{\prime}_{ik}\) be uniform random variables, all of them are assumed to be independent under \(\mathbb{P}^{\xi}\) and independent of any other uniform random variables. Let \(i\in I^{a}_{3}\), we then \(\mathbb{P}^{\xi}\)-almost surely have \[p(i,i^{a}_{k})=F(a,U_{i},\xi_{i^{a}_{k}},U_{i^{a}_{k}},U_{i,i^{a}_{k}})=F(a,U_ {i},\zeta_{k},V_{k},V_{ik})=F(a,U_{i},\zeta^{\prime}_{k},V^{\prime}_{k},V^{ \prime}_{ik}).\] The first equality is Construction 3.5, the latter two equalities follow from the consistency properties \((F3)\) and \((F4)\). By Lemma 3.7 we get that there is a function \(G^{a}_{k}\) such that \(\mathbb{P}^{\xi}\)-almost surely \[p(i,i^{a}_{k})=G^{a}_{k}(U_{i}).\] Using this, we continue the considerations of (3.13). We restrict ourselves to the case where \(I^{a}_{3}=\tilde{I}^{a}_{3}\). \[\mathbb{P}^{\xi}\big{(}\forall i,j\in[n], i\neq j:p(i,j)=\gamma_{ij}\big{)}\] \[=\mathbb{P}^{\xi}\big{(}\forall i,j\in I_{1}\cup I_{2},i\neq j:p(i,j)=\gamma_{ij};\forall a\forall i\in I^{a}_{3}\forall k\leq d(a):p(i,i^{a}_{ k})=\gamma_{i,i^{a}_{k}};\] \[\quad\quad\forall a\forall i,j\in I^{a}_{3},i\neq j:p(i,j)= \gamma_{ij}\big{)}\] \[=\mathbb{P}^{\xi}\big{(}\forall a\forall i\in I^{a}_{3}\forall k \leq d(a):p(i,i^{a}_{k})=\gamma_{i,i^{a}_{k}};\forall a\forall i,j\in I^{a}_ {3},i\neq j:p(i,j)=\gamma_{ij}\big{|}\] \[\quad\quad\forall i,j\in I_{1}\cup I_{2},i\neq j:p(i,j)=\gamma_{ ij}\big{)}\mathbb{P}^{\xi}\big{(}\forall i,j\in I_{1}\cup I_{2},i\neq j:p(i,j)= \gamma_{ij}\big{)}\] \[=\prod_{a}\mathbb{P}^{\xi}\big{(}\forall i\in I^{a}_{3}\forall k \leq d(a):G^{a}_{k}(U_{i})=\gamma_{i,i^{a}_{k}};\forall i,j\in I^{a}_{3},i\neq j :F(a,U_{i},a,U_{j},U_{ij})=\gamma_{ij}\big{)}\] \[\quad\quad\cdot\mathbb{P}^{\xi}\big{(}\forall i,j\in I_{1}\cup I _{2},i\neq j:p(i,j)=\gamma_{ij}\big{)} \tag{3.14}\] For the last equality, we have used that due to (3.5), the uniform variables used at different atoms are independent because for \(a\neq a^{\prime}\) the sets \(I^{a}_{3}\) and \(I^{a^{\prime}}_{3}\) are disjoint. Consider now \[\mathbb{P}^{\xi}\big{(}\forall i\in I^{a}_{3}\forall k\leq d(a):G^{a}_{k}(U_{i} )=\gamma_{i,i^{a}_{k}};\forall i,j\in I^{a}_{3},i\neq j:F(a,U_{i},a,U_{j},U_{ij} )=\gamma_{ij}\big{)},\] for a fixed atom \(a\). Note that we have defined the thresholds of \(\beta_{a}\) exactly so that \(\mathbb{P}(G^{a}_{k}(U_{i})=1)=b_{k}\), compare to (3.4). Let \(b_{d(a)+1}=1\). For every \(i\in I^{a}_{3}\) there is a unique \(k(i)\) such that \(a_{ki}=1\) for \(k\leq k(i)\) and \(a_{ki}=-1\) for \(k>k(i)\). The possible values for \(k(i)\) reach from \(0\) to \(d(a)\) where \(0\) and \(d(a)\) correspond to the extremal cases where the leaf \(i\) is to the left or to the right of all subtrees. This implies that \[\mathbb{P}^{\xi}\big{(}\forall i\in I_{3}^{a}\forall k\leq d(a):G_{k} ^{a}(U_{i})=\gamma_{i,i^{a}_{k}};\forall i,j\in I_{3}^{a},i\neq j:F(a,U_{i},a,U_{ j},U_{ij})=\gamma_{ij}\big{)}\\ =\bigg{(}\prod_{i\in I_{3}^{a}}(b_{k(i)+1}-b_{k(i)})\bigg{)}\mathbb{P}^{\xi} \big{(}\forall i,j\in I_{3}^{a},i\neq j:F(a,U_{i},a,U_{j},U_{ij})=\gamma_{ij} \big{|}\forall i\in I_{3}^{a}\forall k\leq d(a):G_{k}^{a}(U_{i})=\gamma_{i,i^{a} _{k}}\big{)}\\ =\bigg{(}\prod_{i\in I_{3}^{a}}(b_{k(i)+1}-b_{k(i)})\bigg{)}\bigg{(} \prod_{k=0}^{d(a)}\frac{1}{|\{i\in I_{3}^{a}:k(i)=k\}|!}\bigg{)}\] The last equality holds because the leaves with indices in the set \(\{i\in I_{3}^{a}:k(i)=k\}\) form a uniform random permutation by exchangeability. We have chosen Construction 3.2 in such a way that \[\bigg{(}\prod_{i\in I_{3}^{a}}(b_{k(i)+1}-b_{k(i)})\bigg{)}\bigg{(} \prod_{k=0}^{d(a)}\frac{1}{|\{i\in I_{3}^{a}:k(i)=k\}|!}\bigg{)}\\ =\mathbb{P}^{\xi}\big{(}\forall i\in I_{3}^{a}\forall k\leq d(a):p ^{*}(i,i^{a}_{k})=\gamma_{i,i^{a}_{k}};\forall i,j\in I_{3}^{a},i\neq j:p^{*}( i,j)=\gamma_{ij}\big{)}. \tag{3.15}\] Before (3.14) we assumed \(I_{3}^{a}=\bar{I}_{3}^{a}\). Consider now the case where \(I_{3}^{a}\neq\bar{I}_{3}^{a}\) which happens when there is \(i\in I_{1}\) with \(\xi_{i}=a\). In this case the computations (3.14) - (3.15) become easier. The reason for this is that we do not need to consider the terms of the form \[\left\{\forall i\in I_{3}^{a}\forall k\leq d(a):p(i,i^{a}_{k})=\gamma_{i,i^{a }_{k}}\right\}.\] This is because \(d(a)=1\) and \(\xi_{i^{a}_{1}}\in\tilde{I}_{3}^{a}\). Besides that we consider \(\tilde{I}_{3}^{a}\) instead of \(I_{3}^{a}\). The computations then proceed as above. Recall that we have already shown (3.9), this means that \[\mathbb{P}^{\xi}\big{(}\forall i\neq j\in I_{1}\cup I_{2}:p(i,j)=\gamma_{ij} \big{)}=\mathbb{P}^{\xi}\big{(}\forall i\neq j\in I_{1}\cup I_{2}:p^{*}(i,j)= \gamma_{ij}\big{)}. \tag{3.16}\] Going back to (3.14), with (3.16) we have that \[\mathbb{P}^{\xi}\big{(}\forall i,j\in[n], i\neq j:p(i,j)=\gamma_{ij}\big{)}\\ =\mathbb{P}^{\xi}\big{(}\forall i,j\in I_{1}\cup I_{2},i\neq j:p^{*}(i,j)= \gamma_{ij};\forall a\forall i\in I_{3}^{a}\forall k\leq d(a):p^{*}(i,i^{a}_{k })=\gamma_{i,i^{a}_{k}};\\ \forall a\forall i,j\in I_{3}^{a},i\neq j:p^{*}(i,j)=\gamma_{ij}\big{)}\\ =\mathbb{P}^{\xi}\big{(}\forall i,j\in[n],i\neq j:p^{*}(i,j)=\gamma_{ij} \big{)}.\] This completes showing that \(p\) and \(p^{*}\) have the same distribution under \(\mathbb{P}^{\xi}\) which completes the proof. ### Uniqueness of a canonical representation So far we have proven Proposition 3.3 which states that there is _some_ decorated planar real tree that corresponds to our dendritic system. This means we have the following collection of objects: a rooted, weighted real tree \((\mathbf{T},d,r,\mu)\), a planar order \(\psi\) on \(\mathbf{T}\), a branch weight function \(\lambda\) and a branchpoint weight function \(B\). In this section we want to find a more canonical representation for this in the form of IP-trees, see Definition 1.4. This will lead to us proving Proposition 3.4. The notion of IP-trees has been introduced by Forman [9]. **Definition 3.12** (Special points).: For a weighted, rooted real tree \((\mathbf{T},d,r,\mu)\) the special points are 1. the locations of atoms of \(\mu\), 2. the branch points of \(\mathbf{T}\), and 3. the isolated leaves of \(\mathrm{span}(\mathrm{supp}(\mu))\), by which we mean leaves of \(\mathrm{span}(\mathrm{supp}(\mu))\) that are not limit points of the branch points of \(\mathrm{span}(\mathrm{supp}(\mu))\). **Definition 3.13** (mass-structural isomorphism).: Let \(\mathscr{S}_{i}\) be the sets of special points of weighted, rooted real trees \((\mathbf{T}_{i},d_{i},r_{i},\mu_{i})\) for \(i=1,2\). A measurable map \(\phi:\mathbf{T}_{1}\to\mathbf{T}_{2}\) is a _mass-structural isomorphism_ if it has the following properties. 1. _Mass preserving._ For every \(x\in\mathscr{S}_{1}\), \(\mu_{1}([r_{1},x]_{\mathbf{T}_{1}})=\mu_{2}([r_{2},\phi(x)]_{\mathbf{T}_{2}})\), \(\mu_{1}(\{x\})=\mu_{2}(\{\phi(x)\})\), and \(\mu_{1}(F_{\mathbf{T}_{1}}(x))=\mu_{2}(F_{\mathbf{T}_{2}}(\phi(x))\). 2. _Structure preserving._ For \(x,y\in\mathscr{S}_{1}\) we have \(x\in[r_{1},y]_{\mathbf{T}_{1}}\) if and only if \(\phi(x)\in[r_{2},\phi(y)]_{\mathbf{T}_{2}}\). We call two rooted, weighted real trees mass-structurally equivalent if there exists a mass-structural isomorphism between the two. This is an equivalence relation. We then have the following two theorems of Forman [9], the second one concern itself with hierarchies. A hierarchy on \(\mathbb{N}\) (\(\mathcal{H}_{n},n\geq 1\)) [9, Definition 1.6] is an object such that for every \(n\geq 1\), \(\mathcal{H}_{n}\) is a collection of subsets of \([n]\) satisfying certain consistency assumptions - we do not recall these here. To every IP-tree \((\mathbf{T},d,r,\mu)\) we associate a hierarchy, \((\xi_{i},i\geq 1)\) are \(i.i.d.\)\(\mu\)-random variables, \[\mathcal{H}_{n}=\left\{\left\{i\in[n]:\xi_{i}\in F_{\mathbf{T}}(x)\right\}:x \in\mathbf{T}\right\}\cup\left\{\left\{i\right\}:i\in[n]\right\}\qquad\text{ for }n\geq 1. \tag{3.17}\] Observe that this is very similar to the first two steps of Construction 1.7. For a given \(n\), \(\mathcal{H}_{n}\) as above can be represented as a discrete tree, therefore we can think of a hierarchy (\(\mathcal{H}_{n},n\geq 1\)) as a sequence of growing trees. **Theorem 3.14**.: _[_9_, Theorem 1.5]_ _Each mass-structural equivalence-class of rooted, weighted real trees contains exactly one isomorphism class of IP-trees._ **Theorem 3.15**.: _[_9_, Theorem 1.7]_ _Two IP-trees are mass-structurally equivalent if and only if the induced hierarchies in (3.17) have the same law._ Before we can apply this to our setting, we make sure that we can also pass the planar order \(\psi\), the branch weight function \(\lambda\) and the branchpoint weight function \(B\) through a mass-structural isomorphism. **Lemma 3.16**.: _A mass-structural isomorphism \(\phi\) induces a new planar order \(\phi(\psi)\), a new branch weight function \(\phi(\lambda)\) and a new branchpoint weight function \(\phi(B)\)._ Proof.: Assume we have \(\phi:(\mathbf{T},d,r,\mu)\to(\mathbf{T}^{\prime},d^{\prime},r^{\prime},\mu^{ \prime})\) and that \(\psi\), \(\lambda\) and \(B\) are a planar order, branch weight function and branchpoint weight function for \((\mathbf{T},d,r,\mu)\). For a totally unordered sequence \(x^{\prime}_{1},\ldots,x^{\prime}_{n}\in\mathbf{T}^{\prime}\), we define \[\phi(\psi)_{n}(x^{\prime}_{1},\ldots,x^{\prime}_{n})=\psi_{n}(\phi^{-1}(x^{ \prime}_{1}),\ldots,\phi^{-1}(x^{\prime}_{n})).\] Because \(\phi\) is structure preserving in the sense of Definition 3.13 we obtain a totally unordered sequence \(\phi^{-1}(x^{\prime}_{1}),\ldots,\phi^{-1}(x^{\prime}_{n})\). The same property and the fact that \(\psi\) is a planar order also implies that we can embed \(\phi(\psi)_{m}(x^{\prime}_{1},\ldots,x^{\prime}_{m})\) into \(\phi(\psi)_{n}(x^{\prime}_{1},\ldots,x^{\prime}_{n})\) for \(m<n\) respecting the planar structure. For any \(x^{\prime}\in\mathbf{T}^{\prime}\), define \(\phi(\lambda)(x^{\prime})=\lambda(\phi^{-1}(x^{\prime}))\) which is again a branch weight function. Similarly, if \(a^{\prime}\in\mathbf{T}^{\prime}\) is an atom of \(\mu^{\prime}\), then \(a=\phi^{-1}(a^{\prime})\) is an atom of \(\mu\) because \(\phi\) is mass-preserving. We can then define \(B(a^{\prime})=\beta_{a}\). Because \(\phi\) is structure-preserving, this is a valid branchpoint weight function which is compatible with \(\phi(\psi)\). With Lemma 3.16 in hand, we can prove Proposition 3.4. Proof of Proposition 3.4.: Let \((\mathbf{T}_{i},d_{i},r_{i},\mu_{i},\psi^{(i)},\lambda_{i},B_{i}),i\in\{1,2\}\) be two decorated planar real trees such that the induced Markov chains \((T_{n}^{(i)},n\geq 1),i\in\{1,2\}\) have the same distribution. We show the uniqueness of \((\mathbf{T},d,r,\mu,\psi,\lambda,B)\) in multiple steps. _Uniqueness of \((\mathbf{T},d,r,\mu)\):_ Observe that applying a mass-structural isomorphism using the induced maps of Lemma 3.16 does not change the distribution of the sampled dendritic system: More precisely, assume we are given a mass-structural isomorphism \(\phi:(\mathbf{T}_{1},d_{1},r_{1},\mu_{1})\to(\mathbf{T}_{2},d_{2},r_{2},\mu_{2})\) and a planar order \(\psi\), branch weight function \(\lambda\) and branchpoint weight function \(B\) for \(\mathbf{T}_{1}\). Sample \(\{\xi_{i}\}_{i\in\mathbb{N}}\) independently from \(\mu_{1}\) in \(\mathbf{T}_{1}\), then \(\{\phi(\xi_{i})\}_{i\in\mathbb{N}}\) is an \(i.i.d.\)-\(\mu_{2}\) sequence. Using these random variables and the same sequence of independent uniform random variables \(\{U_{i}\}_{i\in\mathbb{N}}\) we can construct two dendritic systems \(\mathcal{D}_{1}=(\mathbb{N},\sim_{1},\prec_{1},p_{1})\) and \(\mathcal{D}_{2}=(\mathbb{N},\sim_{2},\prec_{2},p_{2})\) via Construction 1.7. Because \(\phi\) is structure preserving, \(\sim_{1}\) and \(\sim_{2}\), respectively \(\prec_{1}\) and \(\prec_{2}\), are almost surely the same. Further, because we defined \(\phi(\psi)\), \(\phi(\lambda)\) and \(\phi(B)\) by pullback, \(p_{1}\) and \(p_{2}\) are almost surely the same. In particular, the distribution of \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) is identical. On the other hand, observe that if in Construction 1.7 we do not add planarity to \(T_{n}\) and keep the leaf labels, we retrieve the hierarchy given by (3.17). Now, by Theorem 3.15 and because the induced Markov chains \((T_{n}^{(i)},n\geq 1),i\in\{1,2\}\) have the same distribution, the trees \((\mathbf{T}_{1},d_{1},r_{1},\mu_{1})\) and \((\mathbf{T}_{2},d_{2},r_{2},\mu_{2})\) are mass-structurally isomorphic. Having also shown that the distribution of a dendritic system is invariant under mass-structural isomorphism of the decorated planar real tree, this and Theorem 3.14 then yield the desired uniqueness of \((\mathbf{T},d,r,\mu)\). _Uniqueness of \(\psi\):_ Assume now that \((\mathbf{T},d,r,\mu)\) is fixed and that we are given two planar orders \(\psi^{(1)}\) and \(\psi^{(2)}\) with the distribution of the Markov chain being the same. In particular, we assume that the distributions of \((\psi_{n}^{(1)}(\xi_{1},\ldots,\xi_{n}),n\geq 2)\) and \((\psi_{n}^{(2)}(\zeta_{1},\ldots,\zeta_{n}),n\geq 2)\) are the same where \((\xi_{i},i\geq 1)\) and \((\zeta_{i},i\geq 1)\) are \(i.i.d.\)\(\mu\)-random variables. We will show that there is an isometry \(\varphi:\mathbf{T}\to\mathbf{T}\) such that \(\varphi(\psi^{(1)})=\psi^{(2)}\). By [17, Theorem 3.4 (i)] there exists a kernel \(K_{1}\) such that for appropriate events \(A,B\) we have \[\mathbb{P}\left((\xi_{i},i\geq 1)\in A,(\psi_{n}^{(1)}(\xi_{1},\ldots,\xi_{n}),n \geq 2)\in B\right)=\int_{B}K_{1}(S,A)\mathbb{P}\left((\psi_{n}^{(1)}(\xi_{1}, \ldots,\xi_{n}),n\geq 2)\in dS\right).\] The same is true for \(\psi^{(2)}\) with another kernel \(K_{2}\). This means that we can work on a probability space such that \(\psi_{n}^{(1)}(\xi_{1},\ldots,\xi_{n})=\psi_{n}^{(2)}(\zeta_{1},\ldots,\zeta _{n})\) for all \(n\geq 2\) while keeping the joint distribution of \((\xi_{i},i\geq 1)\) and \((\psi_{n}^{(1)}(\xi_{1},\ldots,\xi_{n}),n\geq 2)\) the same. Abbreviate \(S_{n}=\psi_{n}^{(1)}(\xi_{1},\ldots,\xi_{n})\). We use this to define a map \(\varphi:\mathbf{T}\to\mathbf{T}\). First, for all \(i\geq 1\) we define \(\varphi(\xi_{i})=\zeta_{i}\). For any \(i\geq 1\) and \(n\geq i\), \(\xi_{i}\) and \(\varphi(\xi_{i})\) correspond to the same vertex \(x_{i}^{n}\) in \(S_{n}\). Next, let \(\xi_{i}\wedge\xi_{j}\) be the most recent common ancestor of \(\xi_{i}\) and \(\xi_{j}\) and similarly let \(x_{i}^{n}\wedge x_{j}^{n}\) be the most recent common ancestor of \(x_{i}^{n}\) and \(x_{j}^{n}\). Define \(\varphi(\xi_{i}\wedge\xi_{j})=\varphi(\xi_{i})\wedge\varphi(\xi_{j})\), both \(\xi_{i}\wedge\xi_{j}\) and \(\varphi(\xi_{i}\wedge\xi_{j})\) correspond to \(x_{i}^{n}\wedge x_{j}^{n}\) in \(S_{n}\). Observe that for \(i,j,k,\ell\in\mathbb{N}\) if \(\xi_{i}\wedge\xi_{j}=\xi_{k}\wedge\xi_{\ell}\) then \(\varphi(\xi_{i})\wedge\varphi(\xi_{j})=\varphi(\xi_{k})\wedge\varphi(\xi_{\ell})\), hence \(\varphi(\xi_{i}\wedge\xi_{j})\) is well defined. This defines \(\varphi\) on \(\{\xi_{i},i\geq 1\}\) as well as all branchpoints of \(\mathbf{T}\). Let \(\mu_{n}=\sum_{i=1}^{n}\delta_{x_{i}^{n}}\) on \(S_{n}\) and we observe that \(\lim_{n\to\infty}\frac{1}{n}\mu_{n}(F_{S_{n}}(x_{i}^{n}))=\mu(F_{\mathbf{T}}( \xi_{i}))\), almost-surely by the strong law of large numbers applied to \(\{\xi_{j},j>i\}\). Similarly, \(\lim_{n\to\infty}\frac{1}{n}\mu_{n}(F_{S_{n}}(x_{i}^{n}\wedge x_{j}^{n}))=\mu(F _{\mathbf{T}}(\xi_{i}\wedge\xi_{j}))\) for all \(i\) and \(j\). This allows us to show that \(\varphi\) restricted to \(\{\xi_{i},i\geq 1\}\) is an isometry, we use the IP-spacing (1.3), \[d(\varphi(\xi_{i}),\varphi(\xi_{j})) =\left|\mu(F_{\mathbf{T}_{2}}(\varphi(\xi_{i}\wedge\xi_{j})))-\mu( F_{\mathbf{T}}(\varphi(\xi_{i})))\right|+\left|\mu(F_{\mathbf{T}}(\varphi(\xi_{i} \wedge\xi_{j})))-\mu(F_{\mathbf{T}}(\varphi(\xi_{j})))\right|\] \[=\lim_{n\to\infty}\frac{1}{n}\left|\mu_{n}(F_{S_{n}}(x_{i}^{n} \wedge x_{j}^{n}))-\mu_{n}(F_{S_{n}}(x_{i}^{n}))\right|+\lim_{n\to\infty}\frac{1}{ n}\left|\mu_{n}(F_{S_{n}}(x_{i}^{n}\wedge x_{j}^{n}))-\mu_{n}(F_{S_{n}}(x_{j}^{n}))\right|\] \[=\left|\mu(F_{\mathbf{T}}(\xi_{i}\wedge\xi_{j}))-\mu(F_{\mathbf{ T}}(\xi_{i}))\right|+\left|\mu(F_{\mathbf{T}_{1}}(\xi_{i}\wedge\xi_{j}))-\mu(F_{ \mathbf{T}}(\xi_{j}))\right|\] \[=d(\xi_{i},\xi_{j}).\] The same is true for branchpoints. In particular, this means that \(\varphi\) maps Cauchy-sequences to Cauchy-sequences. Hence, assume that for \(y\in\mathrm{supp}\ \mu\) there is a sequence \(y_{k},k\geq 1\) with \(\lim_{k\to\infty}y_{k}=y\) and for every \(k\) we have either \(y_{k}\in\{\xi_{i},i\geq 1\}\) or \(y_{k}\) is a branchpoint in \(\mathbf{T}\). We then define \(\varphi(y)=\lim_{k\to\infty}\varphi(y_{k})\). Due to the aforementioned properties of \(\varphi\) and because \(\mathbf{T}\) is a complete metric space, this limit exists and is well-defined, i.e. does not depend on the choice of sequence \((y_{k})_{k}\). The map \(\varphi\) can be extended to an isometry. Indeed, because \(\mathbf{T}\) is an IP-tree, it suffices to show that \(\varphi\) restricted to special points (supp \(\mu\) and branchpoints) is a mass-structural isomorphism. Theorem 3.14 then tells us that there is an isometry, and by checking the proof in [9] we can see that this isometry is an extension of the underlying mass-structural isomorphism between special points. Let us now check that \(\varphi\) is a mass-structural isomorphism. Clearly, \(\varphi\) is structure preserving because \(\psi_{n}^{(1)}(x_{1},\ldots,x_{n})\) corresponds to \(\mathrm{span}(x_{1},\ldots,x_{n})\) as combinatorial trees. Further, \(\varphi\) is mass preserving: consider \(z\in\mathbf{T}\), both \(z\) and \(\varphi(z)\) correspond to the same point in \(S_{n}\), call it \(z_{n}\). We then have \[\mu(F_{\mathbf{T}}(z))=\lim_{n\to\infty}\frac{1}{n}\mu_{n}(F_{S_{n}}(z_{n}))=\mu( F_{\mathbf{T}}(\varphi(z)),\qquad\text{almost-surely},\] where we applied the strong law of large numbers twice. The same approach works for atoms and segments. Hence the \(\varphi\) is a mass-structural isomorphism and thus can be extended to an isometry on the whole tree Next, we show that \(\varphi(\psi^{(1)})=\psi^{(2)}\). For this, let \(n\geq 2\) and let \(x_{1},\ldots,x_{n}\in\{\zeta_{i},i\geq 1\}\cup\{\text{branchpoints}\}\) and therefore also \(\varphi^{-1}(x_{1}),\ldots,\varphi^{-1}(x_{n})\in\{\xi_{i},i\geq 1\}\cup\{\text{branchpoints}\}\). Observe that for \(N\) large enough \(\varphi(\psi^{(1)})_{n}(x_{1},\ldots,x_{n})\) and \(\psi^{(2)}_{n}(x_{1},\ldots,x_{n})\) are subtrees of \(S_{N}\). Moreover due to the coupling they are the same, i.e. \(\varphi(\psi^{(1)})_{n}(x_{1},\ldots,x_{n})=\psi^{(2)}_{n}(x_{1},\ldots,x_{n})\). This can be extended to \(x_{1},\ldots,x_{n}\in\text{supp }\mu\) by density of \(\{\zeta_{i},i\geq 1\}\). Because \(\text{span}(\text{supp}(\mu))=\mathbf{T}\) this is also true for all \(x_{1},\ldots,x_{n}\in\mathbf{T}\). Indeed, it suffices to specify \(\varphi(\psi^{(1)})_{n}(x_{1},\ldots,x_{n})\) for totally unordered \(x_{1},\ldots,x_{n}\). If for some \(i\in[n]\), \(x_{i}\notin\text{supp}(\mu)\), then we can choose any leaf \(x_{i}^{\prime}\) with \(x_{i}<x_{i}^{\prime}\) to obtain \(\varphi(\psi^{(1)})_{n}(x_{1},\ldots,x_{i},\ldots,x_{n})=\varphi(\psi^{(1)})_ {n}(x_{1},\ldots,x_{i}^{\prime},\ldots,x_{n})\). Because all leaves are in the support of \(\mu\), this determines \(\varphi(\psi^{(1)})_{n}(x_{1},\ldots,x_{n})\) - the same argument works for \(\psi^{(2)}_{n}\). Hence we have shown that \(\varphi(\psi^{(1)})_{n}=\psi^{(2)}_{n}\), doing this for all \(n\) shows that \(\varphi(\psi^{(1)})=\psi^{(2)}\). This completes showing the uniqueness of \(\psi\). _Uniqueness of \(\lambda\):_ Assume now that \((\mathbf{T},d,r,\mu,\psi)\) is fixed and we are given two different branch weight functions \(\lambda^{(1)},\lambda^{(2)}\in L^{1}(\mu_{s})\). Let \(T_{n}^{(1)}\) and \(T_{n}^{(2)}\) be the trees obtained from using \(\lambda^{(1)}\) and \(\lambda^{(2)}\) respectively while sampling from \((\mathbf{T},d,r,\mu)\). There exists a segment \([x,y]\subset\mathbf{T}\) such that \(\int_{[x,y]}\lambda^{(1)}d\mu_{s}\neq\int_{[x,y]}\lambda^{(2)}d\mu_{s}\). The segment \([x,y]\subset\mathbf{T}\) corresponds to segments \([x_{n}^{(1)},y_{n}^{(1)}]\) and \([x_{n}^{(2)},y_{n}^{(2)}]\) in \(T_{n}^{(1)}\) and \(T_{n}^{(2)}\) respectively. Let \(L_{n}^{(1)}\) be the proportion of leaves directly attached to the left of \([x_{n}^{(1)},y_{n}^{(1)}]\) - here we only count vertices of degree 2 in \(\mathbf{T}\) to avoid counting atoms. Define \(L_{n}^{(2)}\) similarly. By the strong law of large numbers, we almost surely have as \(n\to\infty\) \[L_{n}^{(1)}\longrightarrow\int_{[x,y]}\lambda^{(1)}d\mu_{s}\qquad\text{and} \qquad L_{n}^{(2)}\longrightarrow\int_{[x,y]}\lambda^{(2)}d\mu_{s}.\] By assumption, these two integrals are different and thus the distributions of \(\big{(}T_{n}^{(1)},n\geq 1\big{)}\) and \(\big{(}T_{n}^{(2)},n\geq 1\big{)}\) are different. This shows the uniqueness of \(\lambda\). _Uniqueness of \(B\):_ Assume now that \((\mathbf{T},d,r,\mu,\psi)\) is fixed and we are given two different branchpoint weight functions \(B^{(1)},B^{(2)}\). Then there exists an atom \(a\) such that \(\beta_{a}^{(1)}\neq\beta_{a}^{(2)}\). Hence there is \(t\in(0,1)\) such that \(\beta_{a}^{(1)}(t)\neq\beta_{a}^{(2)}(t)\) and such that - by the requirements that we pose on branchpoint weight functions - this \(t\) without loss of generality corresponds to one subtree \(\mathbf{S}\) of \(a\). Let \(T_{n}^{(1)}\) and \(T_{n}^{(2)}\) be the trees obtained from using \(B^{(1)}\) and \(B^{(2)}\) respectively. For \(n\) sufficiently large, the atom \(a\) corresponds to \(a_{n}^{(1)}\in T_{n}^{(1)}\) and to \(a_{n}^{(2)}\in T_{n}^{(2)}\) respectively, similarly \(\mathbf{S}\) corresponds to subtrees \(S_{n}^{(1)},S_{n}^{(2)}\) of \(a_{n}^{(1)}\) and \(a_{n}^{(2)}\) respectively. Let \(K_{n}^{(1)}\) be the proportion of leaves directly attached to \(a_{n}^{(1)}\) on the left of \(S_{n}^{(1)}\), as compared to the right of \(S_{n}^{(1)}\). Define \(K_{n}^{(2)}\) similarly. By the strong law of large numbers, we almost surely have as \(n\to\infty\) \[K_{n}^{(1)}\longrightarrow\beta_{a}^{(1)}(t)\qquad\text{and}\qquad K_{n}^{(2 )}\longrightarrow\beta_{a}^{(2)}(t).\] By assumption, these two values are different and thus the distributions of \(\big{(}T_{n}^{(1)},n\geq 1\big{)}\) and \(\big{(}T_{n}^{(2)},n\geq 1\big{)}\) are different. This shows the uniqueness of \(B\). ## 4 Scaling Limits In the following, let \((T_{n},n\geq 1)\) be an extremal tree-valued Markov chain with uniform backward dynamics corresponding to the decorated planar real tree \((\mathbf{T},d,r,\mu,\psi,\lambda,B)\) where \((\mathbf{T},d,r,\mu)\) is an IP-tree. The goal of this section is to show that \(T_{n}\) - trimmed and appropriately rescaled - converges to \(\mathbf{T}\) almost surely in the Gromov-Prokhorov topology. Recall the rescaling from (1.5) and the Gromov-Prokhorov metric from Definition 1.15. **Remark 4.1**.: One might ask why it is necessary to trim \(T_{n}\) before rescaling it. Consider the decorated planar real tree that is made up from a single atom \(a\) of weight 1, here \(d,\psi,\lambda,B\) are all trivial. For any \(n\geq 2\) the tree \(T_{n}\) is a star tree with \(n\) leaves directly connected to the root. In the IP-rescaling (1.5), all these edges have length \(1-1/n\). From Definition 1.15 we can see that \(d_{\text{GP}}(\mathbf{T},T_{n})=1\) for all \(n\geq 2\), hence we have no convergence. This problem is solved by trimming. An important idea in the proof will be that \(T_{n}^{\text{trim}}\) corresponds to a subtree of \(\mathbf{T}\). Recall that \(T_{n}\) is constructed by sampling \((\xi_{1},\ldots,\xi_{n})\) from \(\mathbf{T}\) and that \(T_{n}\) corresponds to \(\text{span}(r,\xi_{1},\ldots,\xi_{n})\) plus additional leaves added through \(B\) and \(\lambda\). The trimming removes all additional leaves but also leaves which were not added through \(B\) and \(\lambda\). Let us define a function \(\eta^{n}:\mathbf{T}^{n}\to\mathbf{T}^{n}\) which corresponds to trimming on the level of real trees. First, consider the set of all most recent common ancestors of \(\xi_{1},\dots,\xi_{n}\): \[M_{n}=\left\{\xi_{i}\wedge\xi_{j},1\leq i\neq j\leq n\right\}.\] where \(\xi_{i}\wedge\xi_{j}\) is the most recent common ancestor of \(\xi_{i}\) and \(\xi_{j}\). We then set \[\eta^{n}_{i}(\xi_{1},\dots,\xi_{n})=\operatorname{argmin}_{y\in M_{n},y\in[r, \xi_{i}]}d_{\mathbf{T}}(y,\xi_{i});\quad\forall i\leq n, \tag{4.1}\] which is the closest element of \(M_{n}\) that is an ancestor of \(\xi_{i}\). We write \(\eta^{n}_{i}(\xi_{1},\dots,\xi_{n})\) for the \(i\)-th coordinate of \(\eta^{n}(\xi_{1},\dots,\xi_{n})\) and we will abuse notation by writing \(\eta^{n}(\xi_{i})=\eta^{n}_{i}(\xi_{1},\dots,\xi_{n})\). Equip \(\operatorname{span}(r,\eta^{n}(\xi_{1},\dots,\xi_{n}))\) with a probability measure \(\mu^{\eta}_{n}\) by placing weight \(1/n\) on \(\eta^{n}(\xi_{i})\) for every \(i\leq n\) and with a metric \(d^{\eta}\) according to an IP-rescaling as in (1.5). By construction, we then have the following lemma. **Lemma 4.2**.: _As rooted, weighted metric spaces, \((T^{trim}_{n},d^{trim}_{n},r_{n},\mu^{trim}_{n})\) and \((\operatorname{span}(r,\eta^{n}(\xi_{1},\dots,\xi_{n})),d^{\eta}_{n},r,\mu^{ \eta}_{n})\) are isomorphic._ We will implicitly use this representation of \((T^{trim}_{n},d^{trim}_{n},r_{n},\mu^{trim}_{n})\). We can now state the main theorem of this section. **Theorem 4.3**.: _Let \((T_{n},n\geq 1)\) be an extremal tree-valued Markov chain corresponding to the decorated planar real tree \((\textbf{T},d,r,\mu,\psi,\lambda,B)\) where \((\textbf{T},d,r,\mu)\) is an IP-tree. Let \((T^{trim}_{n},d^{trim}_{n},r_{n},\mu^{trim}_{n})\) be the trimmed and rescaled version of \(T_{n}\). We then have_ \[(T^{trim}_{n},d^{trim}_{n},r_{n},\mu^{trim}_{n})\xrightarrow{n\to\infty}( \textbf{T},d,r,\mu),\] _almost surely in the Gromov-Prokhorov topology._ From this we prove Theorem 1.16. Figure 12: The different operations involved in trimming, this diagram commutes. The double–headed arrow is the correspondence of Lemma 4.2. A number \(k\) next to a vertex signifies an atom of weight \(k/13\) for \(\mu^{\eta}_{13}\) and \(\mu^{trim}_{13}\) respectively. Proof of Theorem 1.16.: This follows immediately from Theorem 4.3, the decomposition into extremal distribution in Corollary 2.17 and the classification of tree-valued Markov chains with uniform backward dynamics in Theorem 1.10. The proof of the above theorem proceeds by comparing \(T_{n}^{\text{trim}}\) and \(\mathbf{T}\) with \(\text{span}(r,\xi_{1},\dots,\xi_{n})\). For this purpose, let \(\mathbf{S}_{n}=\text{span}(r,\xi_{1},\dots,\xi_{n})\) and choose \(\mu_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{\xi_{i}}\). We then construct a metric \(d_{n}\) for \(\mathbf{S}_{n}\) by again considering the inhomogenous IP-rescaling (1.5) with respect to the root \(r\). The proof of Theorem 4.3 consists of showing that both \(d_{\text{GP}}(T_{n}^{\text{trim}},\mathbf{S}_{n})\) and \(d_{\text{GP}}(\mathbf{S}_{n},\mathbf{T})\) are small for \(n\) sufficiently large. Before we can do this, we will show some general statements about IP-trees. In Lemma 4.4 we show some small auxiliary statements and in Lemma 4.5 we construct a partition of \(\mathbf{T}\) that helps us to approximate subtrees, i.e. sets of the form \(F_{\mathbf{T}}(x)\) for \(x\in\mathbf{T}\). **Lemma 4.4**.: _Let \((\textbf{T},d,r,\mu)\) be an IP-tree._ 1. _For all_ \(x\in\textbf{T}\) _we have_ \[d(r,x)\leq 1-\mu(F_{\textbf{T}}(x)).\] 2. _For any_ \(c<1\) _the set_ \[\left\{x\in\textbf{T}:d(r,x)=c\right\},\] _is at most countably infinite._ 3. _Suppose_ \((\textbf{T}^{\prime},d^{\prime},r^{\prime},\mu^{\prime})\) _is another IP-tree and_ \(\varphi:\text{dom }\varphi\rightarrow\text{supp}(\mu^{\prime})\) _is an injective map respecting the tree structure with_ \(\text{dom }\varphi\subseteq\text{supp}(\mu)\)_. This means_ \(\varphi(x)\wedge\varphi(y)=\varphi(x\wedge y)\) _for all_ \(x,y\in\textbf{T}\) _and if_ \(y\in[r,x]\) _then_ \(\varphi(y)\in[r^{\prime},\varphi(x)]\) _for all_ \(x,y\in\textbf{T}\)_. We then have_ \[\sup_{x,y\in\text{dom }\varphi}\left|d(x,y)-d^{\prime}(\varphi(x),\varphi(y)) \right|\leq 4\sup_{x\in\textbf{T}}\left|\mu\left(F_{\textbf{T}}(x)\right)-\mu^ {\prime}\left(F_{\textbf{T}^{\prime}}(\varphi(x))\right)\right|.\] Proof.: Recall the definition of IP-tree from Definition 1.4. 1. If \(x\) is either a branch point, a leaf or lies in the support of \(\mu\) we have \(d(r,x)=1-\mu(F_{\textbf{T}}(x))\). If that is not the case, consider \[x^{*}=\inf\{y\in\textbf{T}:x<y\text{ and }y\text{ is a branchpoint, a leaf or in the support of }\mu\},\] (4.2) where the infimum is taken with respect to the ancestral order of \(\mathbf{T}\). This may be closest descendant of \(x\) for which the IP-tree property holds. We then have \(\mu(F_{\textbf{T}}(x))=\mu(F_{\textbf{T}}(x^{*}))\) and hence \[d(r,x)\leq d(r,x^{*})=1-\mu(F_{\textbf{T}}(x^{*}))=1-\mu(F_{\textbf{T}}(x)).\] It may happen that the infimum in (4.2) is not a branchpoint, a leaf or in the support of \(\mu\). In that case the argument is easily adapted by considering a sequence that converges to \(x^{*}\). 2. Let \(x\in\textbf{T}\) be so that \(d(r,x)=c\). Due to the spanning property of IP-trees, i.e. \(\text{span}(\text{supp}(\mu))=\textbf{T}\), we necessarily have that \(\mu(F_{\textbf{T}}(x))>0\). Indeed, if we had \(\mu(F_{\textbf{T}}(x))=0\) then \(x\) would need to be a leaf of \(\mathbf{T}\) contained in the support of the diffuse part of \(\mu\). In this case we would have \(d(r,x)=1\) which contradicts \(d(r,x)=c<1\). Because this is true for all \(x\) in \(\left\{y\in\textbf{T}:d(r,y)=c\right\}\), this set has to be at most countably infinite. 3. Note that for \(x,y\in\text{supp}(\mu)\) we have \[d(x,y)=\begin{cases}|\mu(F_{\textbf{T}}(x))-\mu(F_{\textbf{T}}(y))|&\text{if }\quad y\in[r,x]\text{ \ or \ }x\in[r,y],\\ 2\mu(F_{\textbf{T}}(x\wedge y))-\mu(F_{\textbf{T}}(x))-\mu(F_{\textbf{T}}(y) )&\text{else.}\end{cases}\] An analogous statement holds for \(\varphi(x),\varphi(y)\) with respect to \(d^{\prime}\) and \(\mu^{\prime}\). We then have for \(x,y\in dom\)\(\varphi\) with \(y\in[r,x]\) that \[\left|d(x,y)-d^{\prime}(\varphi(x),\varphi(y))\right| =\left|\left(\mu(F_{\textbf{T}}(y))-\mu(F_{\textbf{T}}(x))\right) -\left(\mu^{\prime}(F_{\textbf{T}^{\prime}}(\varphi(y)))-\mu^{\prime}(F_{ \textbf{T}^{\prime}}(\varphi(x)))\right)\right|\] \[\leq\sup_{x\in\textbf{T}}2\left|\mu(F_{\textbf{T}}(z))-\mu^{ \prime}(F_{\textbf{T}}(\varphi(z)))\right|,\] where we have used the triangle inequality Similarly, if \(y\notin[r,x]\) and \(x\notin[r,y]\) then \[\left|d(x,y)-d^{\prime}(\varphi(x),\varphi(y))\right|\leq\sup_{\varepsilon\in \textbf{T}}4\left|\mu(F_{\textbf{T}}(z))-\mu^{\prime}(F_{\textbf{T}}(\varphi(z) ))\right|.\qed\] **Lemma 4.5**.: _Let \((\textbf{T},d,r,\mu)\) be an IP-tree. Then for any \(\varepsilon>0\) there exist \(m_{1},m_{2}\in\mathbb{N}\) and measurable sets \(A_{1},\ldots A_{m_{1}},B_{1},\ldots,B_{m_{2}},S\subset\textbf{T}\) such that_ 1. _the sets_ \(A_{1},\ldots A_{m_{1}},B_{1},\ldots,B_{m_{2}},S\subset\textbf{T}\) _partition_ _T,_ 2. _we have_ \(\mu(S)\leq\varepsilon\) _and for all_ \(i\in[m_{1}],j\in[m_{2}]\) _with_ \[\text{diam}(A_{i})\leq\varepsilon,\ \ \mu(A_{i})\leq\varepsilon\ \ \text{ and }\ \ \#B_{j}=1,\] 3. _and the closure of_ \(\bigcup_{i=1}^{m_{1}}A_{i}\cup\bigcup_{j=1}^{m_{2}}B_{j}\) _is connected,_ 4. _for every_ \(x\in\textbf{T}\) _there are_ \(I_{x}\subseteq[m_{1}],J_{x}\subseteq[m_{2}]\) _and_ \(k_{x}\in[m_{1}]\) _we have_ \[F_{\textbf{T}}(x)\Delta\left(\bigcup_{i\in I_{x}}A_{i}\cup\bigcup_{j\in J_{x} }B_{j}\right)\subset A_{k_{x}}\cup S,\] _where_ \(\Delta\) _denotes the symmetric difference of two sets._ Proof.: First, we consider the atoms of \(\mu\). Enumerate them by \(\{a_{j},1\leq j\leq J\}\) with \(J\in\mathbb{N}_{0}\cup\{\infty\}\) so that \(\mu(a_{j})\geq\mu(a_{j+1})\). Choose \(m_{2}\) in such a way that \[\sum_{j=m_{2}+1}^{J}\mu(a_{j})\leq\frac{\varepsilon}{2}. \tag{4.3}\] Note that if \(\mu\) has no atoms we have \(m_{2}=0\). We then define \[B_{j}=\{a_{j}\},\] for all \(j\in[m_{2}]\). Next, we construct \(A_{1},\ldots,A_{m_{1}}\). For this, let \(L=\lfloor 2/\varepsilon\rfloor\). For a given \(0\leq c<1\) consider the set \[D(c)=\{x\in\textbf{T}:d(r,x)=c\}.\] By Lemma 4.4\((ii)\), this is always at most a countable set. Without loss of generality assume \(L2/\varepsilon<1\). Define \(D=\bigcup_{\ell=0}^{L}D(\ell\varepsilon/2)\) which is also a countable set. For \(x\in D\) we set \[T(x)=\{y\in F_{\textbf{T}}(x):0\leq d(x,y)<\varepsilon/2\}.\] Note that \(\{T(x):x\in D\}\) is a partition of **T** and \(diam(T(x))\leq\varepsilon\). Choose a finite subset \(C\subset D\) such that \(r\in C\), \(\bigcup_{x\in C}T(x)\) is connected and such that \[\mu\left(\bigcup_{x\in D\setminus C}T(x)\right)\leq\frac{\varepsilon}{2}. \tag{4.4}\] This is always possible because \(\mu\) is a probability measure. Further, we choose \(C\) so that \(C\neq\emptyset\). For every \(x\in C\) we set \[A(x)=T(x)\backslash\bigcup_{j=1}^{m_{2}}\{a_{j}\},\] this means we remove any atoms from \(T(x)\) that are already included in \(\{B_{j}\}_{j}\). Observe that \(\mu(A(x))\leq\varepsilon\): Indeed, assume we had \(\mu(A(x))>\varepsilon\), then there would exist \(y\in F_{\textbf{T}}(x)\) with \(d(x,y)\leq\varepsilon/2\) and \(\mu(F_{\textbf{T}}(y))<\mu(F_{\textbf{T}}(x))-\varepsilon/2\). Because of the IP-tree property this would imply \(d(x,y)>\varepsilon/2\) which is a contradiction. Choosing \(y\) precisely is tedious as the IP-tree property does not necessarily apply to any \(z\in F_{\mathbf{T}}(x)\) with \(d(x,z)=\varepsilon/2\), we leave the details to the reader. Now, let \(m_{1}=|C|\) and enumerate \(\{A(x),x\in C\}\) by \(A_{1},\ldots,A_{m_{1}}\). Lastly, we define \[S=\bigcup_{x\in D\setminus C}T(x)\cup\bigcup_{\begin{subarray}{c}j=m_{2}+1\\ \forall i\leq m_{1}:j\notin A_{i}\end{subarray}}^{J}\{a_{j}\}.\] We include all atoms that are not in \(\bigcup_{i=1}^{m_{1}}A_{i}\cup\bigcup_{j=1}^{m_{2}}B_{j}\). By combining (4.3) and (4.4) we can see that \(\mu(S)\leq\varepsilon\). Finally, to complete the proof of the lemma, for a given \(x\in\mathbf{T}\) we specify \(I_{x}\subseteq[m_{1}],J_{x}\subseteq[m_{2}]\) and \(k_{x}\in[m_{1}]\) with the required properties. If \(x\in S\), then we set \(I_{x}=J_{x}=\emptyset\) and we choose \(k_{x}\) arbitrarily, say \(k_{x}=1\). Assume that \(x\notin S\). Then there is \(z\in C\) such that \(x\in T(z)\). Choose \(k_{x}\) so that \(A_{k_{x}}=A(z)\). For every \(i\in[m_{1}]\backslash\{k_{x}\}\) we have either \(A_{i}\subset F_{\mathbf{T}}(x)\) or \(A_{i}\cap F_{\mathbf{T}}(x)=\emptyset\). Based on this, we set \[I_{x}=\left\{i\in[m_{1}]:A_{i}\cap F_{\mathbf{T}}(x)=A_{i}\right\}.\] Similarly, we set \(J_{x}=\{j\in[m_{2}]:B_{j}\cap F_{\mathbf{T}}(x)=B_{j}\}\). This is the set of atoms \(a_{j}\) with \(j\leq m_{2}\) that are contained in \(F_{\mathbf{T}}(x)\). By construction, we have \[F_{\mathbf{T}}(x)\Delta\left(\bigcup_{i\in I_{x}}A_{i}\cup\bigcup_{j\in J_{x} }B_{j}\right)\subset A_{k_{x}}\cup S.\] Property 3.) follows from the fact that if we take the closure, we have that \(T(x)\subseteq\overline{A(x)}\) and that \(\bigcup_{x\in C}T(x)\) is connected. Now that we have shown some general properties of IP-trees we show that for large \(n\)\(\mu_{n}\) approximates \(\mu\) well in the following sense. **Lemma 4.6**.: _For any \(\varepsilon>0\), there is a random variable \(N_{1}=N_{1}(\varepsilon)\) such that for \(n\geq N_{1}\) we have almost surely_ \[\sup_{x\in\mathbf{T}}\bigl{|}\mu(F_{\mathbf{T}}(x))-\mu_{n}(F_{\mathbf{T}}(x) )\bigr{|}\leq\varepsilon.\] Proof.: We make use of Lemma 4.5 with constant \(\varepsilon/5\). For \(x\in S\), we set \(I_{x}=\emptyset,J_{x}=\emptyset\) and by abuse of notation \(A_{k_{x}}=\emptyset\). For all \(x\in\mathbf{T}\) we let \[\tilde{F}_{\mathbf{T}}(x)=\bigcup_{i\in I_{x}}A_{i}\cup\bigcup_{j\in J_{x}}B_{ j}\cup S. \tag{4.5}\] By use of the triangle inequality we get \[\bigl{|}\mu(F_{\mathbf{T}}(x))-\mu_{n}(F_{\mathbf{T}}(x))\bigr{|}\leq\bigl{|} \mu(F_{\mathbf{T}}(x))-\mu(\tilde{F}_{\mathbf{T}}(x))\bigr{|}+\bigl{|}\mu_{n }(F_{\mathbf{T}}(x))-\mu_{n}(\tilde{F}_{\mathbf{T}}(x))\bigr{|}+\bigl{|}\mu( \tilde{F}_{\mathbf{T}}(x))-\mu_{n}(\tilde{F}_{\mathbf{T}}(x))\bigr{|}.\] Figure 13: The partition of \(\mathbf{T}\) in Lemma 4.5. We then have by Lemma 4.5 \[\big{|}\mu(F_{\mathbf{T}}(x))-\mu(\tilde{F}_{\mathbf{T}}(x))\big{|}\leq\mu(A_{k_{ x}})+\mu(S)\leq\frac{2}{5}\varepsilon,\] as well as \[\big{|}\mu_{n}(F_{\mathbf{T}}(x))-\mu_{n}(\tilde{F}_{\mathbf{T}}(x))\big{|}\leq \mu_{n}(A_{k_{x}})+\mu_{n}(S),\] for all \(n\geq 1\). Using the definition of \(\tilde{F}_{\mathbf{T}}(x)\) we get \[\big{|}\mu(\tilde{F}_{\mathbf{T}}(x))-\mu_{n}(\tilde{F}_{\mathbf{ T}}(x))\big{|} \leq\big{|}\mu(S)-\mu_{n}(S)\big{|}+\sum_{i\in I_{x}}|\mu(A_{i})- \mu_{n}(A_{i})|+\sum_{j\in J_{x}}|\mu(B_{j})-\mu_{n}(B_{j})|\] \[\leq\big{|}\mu(S)-\mu_{n}(S)\big{|}+\sum_{i=1}^{m_{1}}|\mu(A_{i}) -\mu_{n}(A_{i})|+\sum_{j=1}^{m_{2}}|\mu(B_{j})-\mu_{n}(B_{j})|.\] And lastly, using the bound \[\mu_{n}(A_{k_{x}})\leq\mu(A_{k_{x}})+|\mu(A_{k_{x}})-\mu_{n}(A_{k_{x}})|\leq \frac{\varepsilon}{5}+\sum_{i=1}^{m_{1}}|\mu(A_{i})-\mu_{n}(A_{i})|,\] and similarly \[\mu_{n}(S)\leq\frac{\varepsilon}{5}+|\mu(S)-mu_{n}(S)|,\] we obtain \[\big{|}\mu(F_{\mathbf{T}}(x))-\mu_{n}(F_{\mathbf{T}}(x))\big{|}\leq\frac{4}{5 }\varepsilon+2|\mu(S)-\mu_{n}(S)|+2\sum_{i=1}^{m_{1}}|\mu(A_{i})-\mu_{n}(A_{i} )|+\sum_{j=1}^{m_{2}}|\mu(B_{j})-\mu_{n}(B_{j})|. \tag{4.6}\] Note that this estimate is uniform in \(x\in\mathbf{T}\). By the strong law of large numbers the family of random variables \(\{|\mu(S)-\mu_{n}(S)|,(|\mu(A_{i})-\mu_{n}(A_{i})|)_{i=1,\ldots,m_{1}},(|\mu(B _{j})-\mu_{n}(B_{j})|)_{j=1,\ldots,m_{2}}\}\) converges jointly \(\mathbb{P}\)-almost surely to \(0\). Applying this to (4.6) this yields the existence of a random variable \(N_{1}\) such that for every \(n\geq N_{1}\) we have \[\sup_{x\in\mathbf{T}}\big{|}\mu(F_{\mathbf{T}}(x))-\mu_{n}(F_{\mathbf{T}}(x) )\big{|}\leq\frac{4}{5}\varepsilon+\frac{1}{5}\varepsilon.\] After having established control over \(\mu_{n}\), we can show that \(d_{\mathrm{GP}}(\mathbf{S}_{n},\mathbf{T})\) and \(d_{\mathrm{GP}}(T_{n}^{\mathrm{trim}},\mathbf{S}_{n})\) are small. **Lemma 4.7**.: _For any \(\varepsilon>0\), there is a random variable \(N_{2}=N_{2}(\varepsilon)\) such that for \(n\geq N_{2}\) we have almost surely_ \[d_{\mathrm{GP}}\left((\mathbf{S}_{n},d_{n},r,\mu_{n}),(\textbf{T},d,r,\mu) \right)\leq\varepsilon.\] Proof.: Fix \(\varepsilon>0\). Recall the defining property of the metric of IP-trees. For \(x\in supp(\mu)\) we have \[d(r,x)=1-\mu(F_{\mathbf{T}}(x)).\] An analogous statement holds for \(\mathbf{S}_{n}\) and \(d_{n}\) with \(\mu_{n}\). This means that to understand the metric, we only need to understand the measure. Note that for every \(x\in\mathbf{S}_{n}\) we have \[\mu_{n}(F_{\mathbf{S}_{n}}(x))=\mu_{n}(F_{\mathbf{T}}(x)), \tag{4.7}\] by extending \(\mu_{n}\) to \(\mathbf{T}\). Lemma 4.6 allows us to control the expressions above. To use Definition 1.15 to estimate \(d_{\mathrm{GP}}(\mathbf{T},\mathbf{S}_{n})\), we need to couple \(\mu\) and \(\mu_{n}\). To do this, we apply Lemma 4.5 with parameter \(\varepsilon/12\). Note that \(\mathrm{supp}(\mu_{n})\subset\mathrm{supp}(\mu)\)\(\mathbb{P}\)-almost surely. Conditional on \(\xi_{1},\ldots,\xi_{n}\), consider any coupling \(\nu_{n}\) of \(\mu\) and \(\mu_{n}\) such that for every \(i\in[m_{1}],j\in[m_{2}]\) \[\nu_{n}(A_{i}\times A_{i})=\min\{\mu(A_{i}),\mu_{n}(A_{i})\},\ \ \nu_{n}(B_{j}\times B_{j})=\min\{\mu(B_{j}),\mu_{n}(B_{j})\}.\] Note that such a coupling always exists. Consider the following subset of \(\mathbf{T}\times\mathbf{T}\), \[R=\bigcup_{i=1}^{m_{1}}(A_{i}\times A_{i})\cup\bigcup_{j=1}^{m_{2}}(B_{j}\times B _{j}),\] here we again use that \(\mathbf{S}_{n}\) is a subset of \(\mathbf{T}\). By the strong law of large numbers the family of random variables \(\{|\mu(S)-\mu_{n}(S)|,(|\mu(A_{i})-\mu_{n}(A_{i})|)_{i=1,\ldots,m_{1}},(|\mu(B_ {j})-\mu_{n}(B_{j})|)_{j=1,\ldots,m_{2}}\}\) converges jointly \(\mathbb{P}\)-almost surely to \(0\). Hence, there exists a random variable \(N_{2}^{*}(\varepsilon)\) such that for every \(n\geq N_{2}^{*}\) we have \[\nu_{n}\big{(}R\big{)}\geq 1-\varepsilon.\] By Definition 1.15, it suffices to show for \(n\) sufficiently large that \[\sup_{(x,y),(x^{\prime},y^{\prime})\in R}|d(x,x^{\prime})-d_{n}(y,y^{\prime})| \leq\varepsilon.\] First, for \(x\in A_{i}\) and \(y\in A_{j}\), we want to decompose \(d(x,y)\). We implicitly restrict ourselves to \(x,y\in\operatorname{supp}(\mu_{n})\subset\operatorname{supp}(\mu)\) so that we can later apply Lemma 4.4\((iii)\). For every \(i\in[m_{1}]\), choose \(r_{i}\in A_{i}\) arbitrarily. In fact, if we look into the proof of Lemma 4.5, we see that we can choose \(r_{i}\) to be the root of \(A_{i}\) but we will not use this here. By the triangle inequality we have \[d(r_{i},r_{j})-d(x,r_{i})-d(y,r_{j})\leq d(x,y)\leq d(r_{i},r_{j})+d(x,r_{i})+ d(y,r_{j}),\] and by using that \(diam(A_{i})\leq\varepsilon/12\) and \(diam(A_{j})\leq\varepsilon/12\) we get \[d(r_{i},r_{j})-\frac{1}{6}\varepsilon\leq d(x,y)\leq d(r_{i},r_{j})+\frac{1}{6}\varepsilon.\] Next, we must show a similar statement for \(d_{n}\). For this we need to estimate \(diam_{n}A_{i}\), the diameter of \(A_{i}\) under the metric \(d_{n}\). We apply Lemma 4.6, for \(n\geq N_{1}(\varepsilon/48)\) we have \[diam_{n}(A_{i}) \leq 2\sup_{x\in A_{i}}d_{n}(r_{i},x)\] \[\leq 2\sup_{x\in A_{i}}d(r_{i},x)+2\sup_{x\in A_{i}}|d(r_{i},x)-d _{n}(r_{i},x)|\] \[\leq 2\ diam(A_{i})+8\sup_{z\in\mathbf{T}}|\mu(F_{\mathbf{T}}(z))- \mu_{n}(F_{\mathbf{T}}(z))|\] \[\leq\frac{\varepsilon}{6}+\frac{\varepsilon}{6}\] where we also applied Lemma 4.4\((iii)\); the map \(\varphi\) here is the inclusion \(\mathbf{S}_{n}\hookrightarrow\mathbf{T}\). This yields that for \(x\in A_{i}\) and \(y\in A_{j}\) we have \[d_{n}(r_{i},r_{j})-\frac{2}{3}\varepsilon\leq d_{n}(x,y)\leq d_{n}(r_{i},r_{j })+\frac{2}{3}\varepsilon.\] The same reasoning works if \(x\in A_{i}\) and \(y\in B_{j}\) for some \(i\in[m_{1}],j\in[m_{2}]\). In that case we have also \[|d(x,y)-d(r_{i},y)|\leq\frac{\varepsilon}{12},\] and \[|d_{n}(x,y)-d_{n}(r_{i},y)|\leq\frac{\varepsilon}{3}.\] As a consequence, for \((x,y),(x^{\prime},y^{\prime})\in R\) we have \[|d(x,x^{\prime})-d_{n}(y,y^{\prime})|\leq\frac{5}{6}\varepsilon+\max\left\{ \left|d(r^{\prime},r^{\prime\prime})-d_{n}(r^{\prime},r^{\prime\prime})\right| ;r^{\prime},r^{\prime\prime}\in\{r_{i},i\in[m_{1}]\}\cup\bigcup_{j=1}^{m_{2}}B _{j}\right\}.\] And by Lemma 4.4\((iii)\) and Lemma 4.6 as above, \[|d(x,x^{\prime})-d_{n}(y,y^{\prime})|\leq\frac{5}{6}\varepsilon+4\sup_{z\in{\bf T }}|\mu(F_{\bf T}(z))-\mu_{n}(F_{\bf T}(z))|\leq\frac{5}{6}\varepsilon+\frac{1}{ 12}\varepsilon<\varepsilon,\] for \(n\geq N_{1}(\varepsilon/48)\). We now have \[\sup_{(x,y),(x^{\prime},y^{\prime})\in R}|d(x,x^{\prime})-d_{n}(y,y^{\prime})| \leq\varepsilon\] Recall that \(\nu_{n}(R)\geq 1-\varepsilon\). With Definition 1.15 this yields \(d_{\rm GP}({\bf T},{\bf S}_{n})\leq\varepsilon\), \(\mathbb{P}\)-almost surely for \(n\geq N_{2}:=\max\{N_{1}(\varepsilon/48),N_{2}^{*}\}\). **Lemma 4.8**.: _For any \(\varepsilon>0\), there is a random variable \(N_{3}=N_{3}(\varepsilon)\) such that for \(n\geq N_{3}\) we have almost surely_ \[d_{GP}\left((T_{n}^{trim},d_{n}^{trim},r_{n},\mu_{n}^{trim}),({\mathbf{S}}_{n},d _{n},r,\mu_{n})\right)\leq\varepsilon.\] Proof.: Fix \(\varepsilon>0\) small. First, we need to understand \(\eta^{n}\) better, recall the definition of \(\eta^{n}\) from (4.1). To this end, consider the sets \[{\bf B} =\left\{x\in{\bf T}:d(r,x)=1-\varepsilon/16\right\},\] \[{\bf C} =\left\{x\in{\bf T}:\mu(\{x\})>\varepsilon/16\text{ and }F_{\bf T}(x)=\{x\} \right\}.\] This means that \({\bf B}\) is the level set at height \(1-\varepsilon/16\) and \({\bf C}\) is the set of atoms with mass greater than \(\varepsilon/16\) that are also leaves. By Lemma 4.4\((ii)\) the set \({\bf B}\) is at most countably infinite and \({\bf C}\) is clearly finite. Enumerate \({\bf B}\cup{\bf C}\) by \(\{z_{i},i\geq 1\}\), \[{\bf B}\cup{\bf C}=\bigcup_{i=1}^{\infty}\{z_{i}\}.\] Observe that \[{\bf T}=\bigcup_{i=1}^{\infty}[r,z_{i})\cup F_{\bf T}(z_{i})=\bigsqcup_{i=1}^ {\infty}\left([r,z_{i})\backslash\bigg{(}\bigcup_{j=1}^{i-1}[r,z_{j})\bigg{)} \right)\cup F_{\bf T}(z_{i}),\] where \(\sqcup\) is a disjoint union. This implies that we can choose \(K\) large enough such that \[\mu\bigg{(}{\bf T}\backslash\bigcup_{i=1}^{K}[r,z_{i})\cup F_{\bf T}(z_{i}) \bigg{)}\leq\frac{\varepsilon}{2}\] Such a \(K\) exists because for every \(z\in{\bf B}\) we have \(\mu(F_{\bf T}(z))\leq 1-d(r,z)\leq\varepsilon/16\) by Lemma 4.4\((i)\) and the definition of \({\bf B}\). We now define the sets \[{\bf L}=\bigcup_{i=1}^{K}F_{\bf T}(z_{i})\quad\text{and}\quad{\bf D}=\bigcup_ {i=1}^{K}[r,z_{i}).\] Note that \({\bf L}\) and \({\bf D}\) are disjoint and \(\mu({\bf L})+\mu({\bf D})\geq 1-\varepsilon/2\) due to our choice of \(K\). We think of \({\bf L}\) as the part of \({\bf T}\) that is close to the leaves and of \({\bf D}\) as the skeleton of the tree that leads to \({\bf L}\). Observe that for every \(1\leq i\leq n\) we have \(\eta^{n}(\xi_{i})=\xi_{i}\) if \[\left|F_{\bf T}(\xi_{i})\cap\left\{\xi_{j};1\leq j\leq n\right\}\right|\geq 2.\] This leads us to consider the event \[{\cal A}_{n}=\left\{\forall 1\leq j\leq K:\left|F_{\bf T}(z_{j})\cap\left\{\xi_{i };1\leq i\leq n\right\}\right|\geq 2\right\}.\] On the event \(\mathcal{A}_{n}\) we have for all \(1\leq i\leq n\) with \(\xi_{i}\in\mathbf{D}\cup\mathbf{L}\) that \[\begin{cases}\eta^{n}(\xi_{i})=\xi_{i}&\text{if}\quad\xi_{i}\in\mathbf{D},\\ \eta^{n}(\xi_{i})\in F_{\mathbf{T}}(z_{j})&\text{if}\quad\xi_{i}\in F_{ \mathbf{T}}(z_{j})\text{ for some }j\leq K.\end{cases} \tag{4.8}\] Next, we want to start estimating \(d_{\mathrm{GP}}\left((T_{n}^{\mathrm{trim}},d_{n}^{\mathrm{trim}},r_{n},\mu_{ n}^{\mathrm{trim}}),(\mathbf{S}_{n},d_{n},r,\mu_{n})\right)\). We use Definition 1.15 to compute \(d_{\mathrm{GP}}\). Consider the following subset of \(\mathbf{T}\times\mathbf{T}\) \[R_{n}=\bigcup_{i\in[n]:\xi_{i}\in\mathbf{D}\cup\mathbf{L}}\{\xi_{i}\}\times\{ \eta^{n}(\xi_{i})\}.\] We choose the natural coupling \(\nu_{n}\) of \(\mu_{n}\) and \(\mu_{n}^{\mathrm{trim}}\), that places mass \(1/n\) on \((\xi_{i},\eta^{n}(\xi_{i}))\), \[\nu_{n}=\sum_{i=1}^{n}\frac{1}{n}\delta_{(\xi_{i},\eta^{n}(\xi_{i}))}.\] Note that as \(n\to\infty\), we have \[\liminf_{n\to\infty}\nu_{n}(R_{n})=\liminf_{n\to\infty}\mu_{n}\left(\mathbf{D} \cup\mathbf{L}\right)\geq 1-\frac{\varepsilon}{2}, \tag{4.9}\] \(\mathbb{P}\)-almost surely. Further, note that \(\mathcal{A}_{n-1}\subset\mathcal{A}_{n}\) and \(\lim_{n\to\infty}\mathbb{P}(\mathcal{A}_{n})=1\). Hence, there is a random variable \(N_{3}^{*}=N_{3}^{*}(\varepsilon)\) such that for all \(n\geq N_{3}^{*}\) we have \(\nu_{n}(R_{n})\geq 1-\varepsilon\). Let us now estimate \[\sup_{(x,y),(x^{\prime},y^{\prime})\in R_{n}}\left|d_{n}(x,x^{\prime})-d_{n}^ {\mathrm{trim}}(y,y^{\prime})\right|,\] on the event \(\mathcal{A}_{n}\). Let \(I_{n}=\{i\in[n]:\xi_{i}\in\mathbf{D}\cup\mathbf{L}\}\). Note that by the definition of \(R_{n}\) we have \[\sup_{(x,y),(x^{\prime},y^{\prime})\in R_{n}}\left|d_{n}(x,x^{\prime})-d_{n}^ {\mathrm{trim}}(y,y^{\prime})\right|=\sup_{i,j\in I_{n}}\left|d_{n}(\xi_{i}, \xi_{j})-d_{n}^{\mathrm{trim}}(\eta^{n}(\xi_{i}),\eta^{n}(\xi_{j}))\right|. \tag{4.10}\] We now apply Lemma 4.4\((iii)\) where \(\varphi\) is given by \(\eta^{n}\) restricted to \(\{\xi_{i},i\in I_{n}\}\). We combine this with (4.8) to obtain \[\sup_{(x,y),(x^{\prime},y^{\prime})\in R_{n}}\left|d_{n}(x,x^{ \prime})-d_{n}^{\mathrm{trim}}(y,y^{\prime})\right| \leq 4\sup_{i\in I_{n}}\left|\mu_{n}\left(F_{\mathbf{T}}(\xi_{i}) \right)-\mu_{n}\left(F_{\mathbf{T}}(\eta^{n}(\xi_{i}))\right)\right|\] \[\leq 4\sup_{\begin{subarray}{c}1\leq j\leq K\\ z_{j}\in\mathbf{B}\end{subarray}}\sup_{x,y\in F_{\mathbf{T}}(z_{j})}\left| \mu_{n}\left(F_{\mathbf{T}}(x)\right)-\mu_{n}\left(F_{\mathbf{T}}(y)\right) \right|.\] By using the triangle inequality twice we have \[4\sup_{\begin{subarray}{c}1\leq j\leq K\\ z_{j}\in\mathbf{B}\end{subarray}}\sup_{x,y\in F_{\mathbf{T}}(z_{j})}\left| \mu_{n}\left(F_{\mathbf{T}}(x)\right)-\mu_{n}\left(F_{\mathbf{T}}(y)\right)\right| \leq 8\sup_{\begin{subarray}{c}1\leq j\leq K\\ z_{j}\in\mathbf{B}\end{subarray}}\sup_{x\in F_{\mathbf{T}}(z_{j})}\left|\mu_ {n}\left(F_{\mathbf{T}}(x)\right)-\mu\left(F_{\mathbf{T}}(x)\right)\right|\] \[\leq 8\sup_{x\in\mathbf{T}}\left|\mu_{n}\left(F_{\mathbf{T}}(x) \right)-\mu\left(F_{\mathbf{T}}(x)\right)\right|+8\sup_{\begin{subarray}{c}1\leq j \leq K\\ z_{j}\in\mathbf{B}\end{subarray}}\mu\left(F_{\mathbf{T}}(z_{j})\right).\] By construction, we have \(\mu\left(F_{\mathbf{T}}(z_{j})\right)\leq\varepsilon/16\) for every \(z_{j}\in\mathbf{B}\). The other term, \(\sup_{x\in\mathbf{T}}\left|\mu_{n}\left(F_{\mathbf{T}}(x)\right)-\mu\left(F_{ \mathbf{T}}(x)\right)\right|\), is controlled by Lemma 4.6 - we apply it with parameter \(\varepsilon/16\). This means that for \(n\geq N_{3}:=\max\{N_{1}(\varepsilon/16),N_{3}^{*}\}\) we have \[\sup_{(x,y),(x^{\prime},y^{\prime})\in R_{n}}\left|d_{n}(x,x^{\prime})-d_{n}^{ \mathrm{trim}}(y,y^{\prime})\right|\leq 8\frac{\varepsilon}{16}+8\frac{\varepsilon}{16}. \tag{4.11}\] Recall that for \(n\geq N_{3}^{*}\) we have \(\nu_{n}(R_{n})\geq 1-\varepsilon\). This implies that for \(n\geq N_{3}\) \[d_{\mathrm{GP}}\left((T_{n}^{\mathrm{trim}},d_{n}^{\mathrm{trim}},r_{n},\mu_{n} ^{\mathrm{trim}}),(\mathbf{S}_{n},d_{n},r,\mu_{n})\right)\leq\varepsilon,\] which is the statement of the lemma. Finally, we prove Theorem 4.3. Proof of Theorem 4.3.: This now follows straight away from Lemmas 4.7 and 4.8. We have by the triangle inequality \[d_{\mathrm{GP}}\big{(}(T_{n}^{\mathrm{trim}},d_{n}^{\mathrm{trim }},r_{n},\mu_{n}^{\mathrm{trim}}),(\mathbf{T},d,r,\mu)\big{)}\] \[\qquad\leq 2\varepsilon.\qed\] ## Appendix A Appendix: proof of Proposition 3.6 Here we will sketch the proof of Proposition 3.6. Large parts of it are analogous to arguments seen in Evans, Grubel, Wakolbinger [8, Sections 6 and 7] with the difference being that the authors of [8] consider only binary trees. We present the proof to give the reader a more complete picture. The proof consists of three steps: First, we go from the dendritic system to an ultra-metric on \(\mathbb{N}\) which can be represented in a coalescent. Secondly, we apply a theorem of Gufler [14] to derive a sampling procedure for the ultra-metric. The ultra-metric can be represented by sampling points from a real tree. In doing this, we lose information about the planar structure. We recover the planar structure in a third step by use of the Aldous-Hoover-Kallenberg theory of exchangeable arrays, see the book of Kallenberg [17, Chapter 28] for a general reference. In the following sections we encode a given exchangeable, ergodic dendritic system \(\mathcal{D}=(\mathbb{N},\sim,\preceq,p)\). This dendritic system was obtained from a tree-valued Markov chain \((T_{n},n\geq 1)\) with uniform backward dynamics. We will end up with the objects of Proposition 3.6: a rooted real tree \((\mathbf{T},d_{\mathbf{T}},r)\), a probability measure \(\mu\) and a function \(F\) which encodes the planarity function \(p\) of \(\mathcal{D}\). ### From the dendritic system to a first real tree The first thing we need to do is to derive an ultra-metric. To this end, given \(i,j\in\mathbb{N}\) and any leaf \(k\in\mathbb{N}\) we set \[I_{k}:=\mathds{1}\{(i,j)\preceq k\}.\] By exchangeability of \(\mathcal{D}\) the sequence \((I_{k})_{k>\max\{i,j\}}\) is also exchangeable. Hence the limit \[d(i,j)=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}I_{k}\] exists almost surely by de Finetti's theorem. **Lemma A.1**.: \(d\) _is almost surely an ultra-metric on \(\mathbb{N}\), that is for all \(i,j,k\in\mathbb{N}\) we have:_ 1. \(d(i,j)\geq 0\)_, and_ \(d(i,j)=0\Leftrightarrow i=j\)_._ 2. \(d(i,j)=d(j,i)\)_._ 3. \(d(i,k)\leq\max\{d(i,j),d(j,k)\}\)_._ Proof.: The proof of this lemma is analogous to the proof of [8, Lemma 6.1], we repeat it for the reader's convenience. Notice that little changes in going from the binary trees of [8] to multi-furcating in our setting. The symmetry of \(d\) and \(d(i,i)=0\) follow readily from the definition of \(I_{k}\). We show that \(i\neq j\) implies \(d(i,j)>0\) almost surely. To this end, we first observe that the events \(\{d(i,j)=0\}\) and \(\{\forall k\notin\{i,j\}:I_{k}=0\}\) agree almost surely. Indeed, if we had \(\mathbb{P}(I_{k}=1)>0\) for some \(k\), then this would be true for infinitely many \(k\), by exchangeability, and then by de-Finetti's theorem we would then have \(d(i,j)>0\). Hence, we almost surely have \[\left\{d(i,j)=0\right\}=\left\{\forall k\notin\left\{i,j\right\}:I_{k}=0 \right\}=\left\{\nexists k\notin\left\{i,j\right\}:(i,j)\preceq k\right\}\] where the second equality follows from the definition of \(I_{k}\). On the level of trees this means that for all \(n\geq\max\{i,j\}\), the leaves labelled \(i\) and \(j\) are attached to the same vertex in \(T_{n}\) and no other leaves or subtrees are attached to the same vertex. The authors of [8] call this a _cherry_. We now want to estimate the probability of the event that \(i\) and \(j\) are part of the same cherry in \(T_{n}\), equivalently in the dendritic system restricted to \([n]\). Because \(T_{n}\) has \(n\) leaves, the number of cherries is at most \(\frac{n}{2}\). Recall that the leaves are labelled exchangeably. This means that we can relabel the leaves of \(T_{n}\) uniformly without changing the distribution. The probability that the labels \(i\) and \(j\) are part of the same cherry is at most \(\frac{n}{2}\frac{2}{n(n-1)}\). This allows us to conclude \[\mathbb{P}\big{(}d(i,j)=0\big{)}\leq\limsup_{n\to\infty}\mathbb{P}\big{(}i \text{ and }j\text{ form a cherry}\big{)}\leq\limsup_{n\to\infty}\frac{n}{2}\frac{2}{n(n-1)}=0\] which is equivalent to \(d(i,j)>0\) almost surely for \(i\neq j\). Lastly, by consider the subtree spanned in \(T_{n}\) by the leaves labelled \(i,j,k\). We see that we have either \((i,k)=(j,k)\preceq(i,j)\) or a permutation thereof for all \(n\geq\max\{i,j,k\}\). In the limit as \(n\to\infty\) this entails \(d(i,k)=d(j,k)\geq d(i,j)\) or a permutation thereof. In any case, the ultra-metric inequality \(d(i,k)\leq\max\{d(i,j),d(j,k)\}\) is satisfied. Given an ultra-metric \(d\) on \(\mathbb{N}\) that is bounded above by \(1\), we can associate a coalescent and a real tree to it in a canonical way, see for example Evans [7, Example 3.41] We explain this procedure here. Define a family of equivalence relations \((\equiv_{t},t\in[0,1])\) on \(\mathbb{N}\) by declaring \(i\equiv_{t}j\) if and only if \(d(i,j)\leq t\). Notice that elements of \(\mathbb{N}\) can be identified with equivalence classes of \(\equiv_{0}\) and that all elements of \(\mathbb{N}\) are equivalent under \(\equiv_{1}\). Now, we extend \(d\) to a metric of pairs of the form \((A,t)\) where \(A\) is an equivalence class of \(\equiv_{t}\), call this set \(\mathbf{S}^{\circ}\). Given \((A,t)\) and \((B,s)\) we set \[H((A,t),(B,s))=\inf\left\{u\geq\max\{s,t\}:k\equiv_{u}\ell,\forall k\in A,\ell \in B\right\}\] and \[d((A,t),(B,s))=H((A,t),(B,s))-\frac{s+t}{2}.\] One can check that \(d((\{i\},0),(\{j\},0))=d(i,j)\), so \(d\) is an extension of the previous metric. Further, one can check that \(d\) is indeed a metric and that the metric completion of \((\mathbf{S}^{\circ},d)\) is a real tree, call it \((\mathbf{S},d)\). \(\mathbf{S}\) can be endowed by an ancestral order \(<\) by setting \((A,t)<(B,s)\) if \(s<t\) and \(B\subset A\). Root \(\mathbf{S}\) at the minimal element of \(<\), call the root \(r\). Recall that we obtained the dendritic system \(\mathcal{D}=(\mathbb{N},\sim,\preceq,p)\) from a tree-valued Markov chain \((T_{n},n\geq 1)\). \(\mathcal{D}\) restricted to \([n]\) corresponds to \(T_{n}^{+}\) via Lemma 2.8 where the \(+\) signifies that we have added leaf labels. Remove the planar order from \(T_{n}\) but keep the leaf labels and call the resulting tree \(T_{n}^{\text{unordered}}\). Similarly to Lemma 2.8, \(T_{n}^{\text{unordered}}\) corresponds to \(([n],\sim,\preceq)\). Further, we can consider \(\mathbf{S}\) restricted to the set spanned by \(\{(\{1\},0),\ldots,(\{n\},0),r\}\). This corresponds to a combinatorial, leaf-labelled tree \(S_{n}\). Write \((i,j)\approx_{n}(k,\ell)\) if the most recent common ancestor of \(i\) and \(j\) is also the most recent common ancestor of \(k\) and \(\ell\) in \(S_{n}\). **Lemma A.2**.: _As leaf-labelled, non-plane trees, we have \(S_{n}=T_{n}^{\text{unordered}}\) almost surely._ Proof.: Recall that \(T_{n}^{\text{unordered}}\) corresponds to \(([n],\sim,\preceq)\) and that \(S_{n}\) corresponds to \(([n],\approx_{n},<)\) where \(<\) is induced by the ancestral order on \(\mathbf{S}\). Hence it suffices to check that \(([n],\sim,\prec)\) and \(([n],\approx_{n},<)\) have the same distribution. Fix distinct \(i,j,k\in[n]\). Observe that \((i,k)\approx_{n}(j,k)<(i,j)\) if and only if \(d(i,j)<d(i,k)=d(j,k)\), and \((i,j)\approx_{n}(i,k)\approx_{n}(j,k)\) if and only if \(d(i,j)=d(i,k)=d(j,k)\). This holds because \(S_{n}\) is derived from \(\mathbf{S}\). We will prove that \((i,k)\sim(j,k)\prec(i,j)\) if and only if \(d(i,j)<d(i,k)=d(j,k)\) as well as \((i,j)\sim(i,k)\sim(j,k)\) if and only if \(d(i,j)=d(i,k)=d(j,k)\). Because trees are uniquely determined by the relationship between triples of vertices, this will imply that \(S_{n}=T_{n}^{\text{unordered}}\). To show this, we use ideas of [8, Lemma 6.2] which are similar to ideas used in the above proof of Lemma A.1. Note that \((i,k)\sim(j,k)\) if and only if we do not have \((i,k)\prec(j,k)\) or \((j,k)\prec(i,k)\). Similarly, \(d(i,k)=d(j,k)\) if and only if we do not have \(d(i,k)<d(j,k)\) or \(d(j,k)<d(i,k)\). By contraposition it thus suffices to show that \((j,k)\prec(i,j)\) if and only if \(d(j,k)>d(i,j)\). On the one hand, \(d(j,k)>d(i,j)\) implies \((j,k)\prec(i,j)\) in \(\mathcal{D}\) and this does not change when we restrict \(\mathcal{D}\) to \([n]\). On the other hand, \((j,k)\prec(i,j)\) clearly implies \(d(j,k)\geq d(i,j)\). The crucial part is to show that \((j,k)\prec(i,j)\) implies \(d(j,k)\neq d(i,j)\). Similar to Lemma A.1, by exchangeability and de-Finetti's theorem we almost surely have \[\big{\{}(j,k)\prec(i,j)\big{\}}\cap\big{\{}d(j,k)=d(i,j)\big{\}}=\big{\{}(j,k )\prec(i,j)\big{\}}\cap\big{\{}\nexists\ell\in\mathbb{N}\backslash\{k\}:(j,k )\prec\ell,(i,j)\not\prec k\big{\}}.\] We now want to estimate the probability of the latter event for the tree \(T_{m}^{\text{unordered}}\) where \(m>n\). Here, the event corresponds to the vertices \((j,k)\) and \((i,j)\) being connected by a single edge and further \((j,k)\) has only one other offspring, namely the leaf labelled \(k\). Call this event \(\mathcal{A}\). We now proceed similarly to Lemma A.1. Recall that the leaves are labelled exchangeably. This means that we can relabel the leaves of \(T_{n}^{\text{unordered}}\) uniformly without changing the distribution. We do this, and condition on \((j,k)\prec(i,j)\). The conditional probability of \(\mathcal{A}\) then is \(0\) if \((j,k)\) has more than two children or if the child of \((j,k)\) which is not an ancestor of \((i,j)\) has children on its own. If this is not the case, i.e. when the only child of \((j,k)\) besides \((i,j)\) is a single leaf, then the probability that that leaf is labelled \(k\) is \(\frac{1}{m-2}\) which converges to \(0\) as \(m\to\infty\). This yields \[\mathbb{P}\big{(}\big{\{}(j,k)\prec(i,j)\big{\}}\cap\big{\{}d(j,k)=d(i,j) \big{\}}\big{)}=0,\] which implies \((j,k)\prec(i,j)\) implies \(d(j,k)\neq d(i,j)\). The considerations of this section lead us to the following statement, an analogue of [8, Proposition 6.3]. This proposition states that we can encode all information contained in \(\mathcal{D}\) except for the planar order in the real tree \(\mathbf{S}\). **Proposition A.3**.: _There is a rooted real tree \((\boldsymbol{S},d,r)\) and an injective map \(\iota:\mathcal{D}\to(\boldsymbol{S},d,r)\) such that the span of \(\iota([n])\) corresponds to \(T_{n}^{\text{unordered}}\) for all \(n\in\mathbb{N}\) as combinatorial tree. Further, the ancestral order \(\preceq\) of \(\mathcal{D}\) coincides with the natural partial order of \(\boldsymbol{S}\)._ Proof.: By Lemma 2.8\((\mathbb{N},\sim,\preceq)\) restricted to \([n]\) stands in correspondence to \(T_{n}^{\text{unordered}}\). By Lemma A.2\(T_{n}\) and \(S_{n}\) are almost surely the same tree. We constructed \(S_{n}\) from a subset of \(\mathbf{S}\) which implies the existence of \(\iota\to\mathbf{S}\). Note that \(\mathbf{S}\) is random because \(\mathcal{D}\) is random. In the next section we encode its distribution. ### From the first real tree to a sampling from a real tree We have encoded the dendritic system in an ultra-metric \(d\) which we represented as a real tree. This coalescent tree is random and we want to find a more algorithmic representation for it. This is done by applying Gufler [14, Theorem 3.9], we state this theorem in our notation. Assume we are given a rooted real tree \((\mathbf{T},d_{\mathbf{T}},r)\) and a probability measure \(m\) on \(\mathbf{T}\times\mathbb{R}_{+}\). Consider a sequence of \(i.i.d.\) samples \((\xi_{i},v_{i})_{i\in\mathbb{N}}\) where \((\xi_{i},v_{i})\) is distributed according to \(m\) for every \(i\). Define \[\delta(i,j)=\big{(}d_{\mathbf{T}}(\xi_{i},\xi_{j})+v_{i}+v_{j})\mathds{1}_{i \neq j},\] which can be shown to be a pseudo-metric. The idea behind this construction is that we attach a leaf labelled \(i\) with a branch of length \(v_{i}\) to the point \(\xi_{i}\) in \(\mathbf{T}\). \(\delta\) is then the induced path metric on \(\mathbf{T}\) with the added branches restricted to the leaves. Let \((\mathbf{S},d_{\mathbf{S}},r)\) be the real tree of Proposition A.3. Let \(d\) be the induced ultra-metric on \(\mathbb{N}\) by restricting \(\mathbf{S}\) to \(\iota(\mathbb{N})\). This is the ultra-metric which we used to construct \(\mathbf{S}\). Let \(\pi\) be the map that maps isolated leaves, i.e. leaves that are not accumulation points of other leaves, to the closest branchpoint. If \(x\) is a leaf but not isolated, we set \(\pi(x)=x\). **Theorem A.4**.: _[_14_, Theorem 3.9]_ _There exists a rooted real tree \((\textbf{T},d_{\textbf{T}},r)\) and a probability measure \(m\) on \(\textbf{T}\times\mathbb{R}_{+}\) such that \(\delta=d\) where \(d\) is the ultra-metric of the coalescent and \(\delta\) is constructed as above. More precisely, we can choose **T** to be the span of \(\{\pi(\iota(i))\}_{i\in\mathbb{N}}\). Moreover under the assumption of ergodicity on \(d\), \((\textbf{T},d_{\textbf{T}},r)\) and \(m\) are deterministic. Denote the marginal distribution of \(m\) on **T** by \(\mu\)._ Alternatively, this means that **S** equals **T** with additional leaves attached: we attach a leaf of length \(v_{i}\) to \(\xi_{i}\) for every \(i\). In our setting, \(v_{i}\) is simply a function of \(\xi_{i}\) as we need to have \(d_{\textbf{T}}(r,\xi_{i})+v_{i}=1\). Let us comment on how this theorem is proved. Define **T** to be closure of the smallest subtree of **S** that contains \((\pi(y_{i}))_{i\geq 1}\). Next, let \(m_{n}=\sum_{i=1}^{n}\delta_{(\pi(y_{i}),\mathrm{d}s(y_{i},\pi(y_{i})))}\) the empirical measure on \(\pi(y_{i})\) with the associated leaf lengths. \(m\) then is the weak limit of \(m_{n}\) as \(n\to\infty\). Gufler's proof makes use of exchangeability and de Finetti-style theorems which are used to show that the weak limit exists as well as that ergodicity implies that **T** is deterministic. At this point we have successfully encoded \((C1)\)-\((C4)\) in a real tree \((\textbf{T},r)\) with associated probability measure \(\mu\), the distribution \(m\) does not matter if we only want to retrieve the dendritic system. Let us explain how to obtain \((\mathbb{N},\sim,\preceq)\) of our dendritic system from \((\textbf{T},r\mu)\). Given \((\textbf{T},r,\mu)\), sample \((\xi_{i})_{i\in\mathbb{N}}\)\(i.i.d.\) from \(\mu\). We define a random equivalence relation \(\sim^{*}\) and a random ancestral order \(\preceq^{*}\) on \(\mathbb{N}\times\mathbb{N}\). 1. \((i,i)\sim^{*}(k,l)\) if and only if \((i,i)=(k,l)\). 2. \((i,j)\sim^{*}(k,l)\) for \(i\neq j,k\neq l\) if and only if \([r,\xi_{i}]\cap[r,\xi_{j}]=[r,\xi_{l}]\cap[r,\xi_{k}]\). 3. The partial order \(\preceq^{*}\) is inherited from the partial order on \(S\) and adding \((i,j)\preceq^{*}(i,i)\) for \(i\neq j\). This means for distinct \(i,j,k,\ell\in\mathbb{N}\) we have \[(k,\ell)\prec^{*}(i,j)\quad\text{ if }\quad[r,\xi_{k}]\cap[r,\xi_{\ell}]\subset[r,\xi_{i}]\cap[r,\xi_{j}].\] **Proposition A.5**.: _The above-defined random relations \((\mathbb{N},\sim^{*},\preceq^{*})\) have the same distribution as \((\mathbb{N},\sim,\preceq)\) of the dendritic system \(\mathcal{D}=(\mathbb{N},\sim,\preceq,p)\)._ Proof.: This is a combination of Theorem A.4 and Proposition A.3. This means we have almost proven Proposition 3.6, except for the representation of the planarity function \(p\). We will do this in the next section. ### Encoding the planar structure We complete the proof of Proposition 3.6 by encoding the planar structure, i.e. the planarity function \(p\). Proof of Proposition 3.6.: Recall Proposition A.3 and let \(\pi\) be the map that maps a leaf of **S** to the closest branchpoint. Set \(\xi_{i}=\pi(\iota(i))\). By Theorem A.4 and Proposition A.5, \((\xi_{i})_{i\in\mathbb{N}}\) is an exchangeable sequence of \(\mu\)-distributed random variables on **T**. Consider the array \[\{(\xi_{i},\xi_{j},p(i,j))\}_{i,j\in\mathbb{N},i\neq j}.\] This array takes values in sequences of \(\textbf{T}^{2}\times\{\pm 1\}\)-valued random variables and is jointly exchangeable. This implies that there is a measurable function \(F:(\textbf{T}\times[0,1])^{2}\times[0,1]\to\{\pm 1\}\) with \[\{\xi_{i},\xi_{j},p(i,j))\}_{i,j\in\mathbb{N},i\neq j}\stackrel{{ d}}{{=}}\{\xi_{i},\xi_{j},F(\xi_{i},U_{i},\xi_{j},U_{j},U_{ij})\}_{i,j\in \mathbb{N},i\neq j}.\] (A.1) where \((U_{i})_{i\in\mathbb{N}},(U_{ij})_{i,j\in\mathbb{N},i\neq j}\) are independent uniform random variables on \([0,1]\) with \(U_{ij}=U_{ji}\) which are independent of \((\xi_{i})_{i\in\mathbb{N}}\). This is a general result from the Aldous-Hoover-Kallenberg theory for exchangeable arrays that we state and deduce as Lemma A.6 later. The function \(F\) satisfies some consistency relations which we will state and prove here. Let \(Leb\) be the Lebesgue measure on \([0,1]\). For \(\mu\)-almost every \(x,y,z\in\textbf{T}\) and \(Leb\)-almost every \(u,v,w,a,b,c\) we the following consistency relations. 1. \(F(x,u,y,v,a)=-F(y,v,x,u,a)\), * if \(F(x,u,y,v,a)=F(y,v,z,w,b)\) then also \(F(x,u,z,w,c)=F(x,u,y,v,a)\), * if \([r,x]\cap[r,y]\notin\{[r,x],[r,y]\}\) and \([r,y]\subsetneq[r,z]\) then \(F(x,u,y,v,a)=F(x,u,z,w,b)\), * if \([r,x]\subsetneq[r,y]\subsetneq[r,z]\) then \(F(x,u,y,v,a)=F(x,u,z,w,c)\). Let us prove these claims. Recall the consistency relations of \(p\) as defined in Definition 2.6. Let \(\xi_{i},\xi_{j},\xi_{k}\) be \(i.i.d.\)\(\mu\)-random variables and let \(U_{i},U_{j},U_{k},U_{ik},U_{ij},U_{jk}\) be independent \(i.i.d.\) uniform random variables on \([0,1]\). By Skorokhod's representation theorem, we can work on a probability space where the (A.1) is an almost-sure equality. The statements in the new probability will translate back to the claimed distributional statements claimed above. * Firstly, by \((P1)\), we have \[p(i,j)=-p(i,j)\quad a.s.\implies F(\xi_{i},U_{i},\xi_{j},U_{j},U_{ij})=-F(\xi _{j},U_{j},\xi_{i},U_{i},U_{ij})\quad a.s..\] * Secondly, consider the event \(A_{ijk}=\{F(\xi_{i},U_{i},\xi_{j},U_{ij},U_{ij})=F(\xi_{j},U_{j},\xi_{k},U_{k },U_{jk})=1\}\). By \((P3)\) we have \[\begin{cases}p(i,j)=1&\xrightarrow{(P3)}\\ p(j,k)=1&\xrightarrow{}\end{cases}\quad p(i,k)=1\quad a.s.\implies\quad F(\xi _{i},U_{i},\xi_{k},U_{k},U_{ik})=1\quad\text{on }A_{ijk}.\] By \((P1)\) and \((F1)\), the same works if we replace \(1\) by \(-1\). * Thirdly, consider the event \(B_{ijk}=\{[r,\xi_{i}]\cap[r,\xi_{j}]\notin\{[r,\xi_{i}],[r,\xi_{j}]\}\) and \([r,\xi_{j}]\subsetneq[r,\xi_{k}]\}\). This implies that in the dendritic system we have \((i,j)\prec(j,k)\). Then on the intersection of \(B_{ijk}\) and \(\{p(i,j)=1,\)\(p((i,i),(j,k))=1\}\), we have by \((P4)\) that \(p(i,k)=1\) which in turn is equivalent to \(F(\xi_{i},U_{i},\xi_{k},U_{k},U_{ik})=1\) on these events. The same works if we replace \(1\) by \(-1\) by \((P1)\) and \((F1)\). * Fourthly, consider the event \(\{[r,\xi_{i}]\subsetneq[r,\xi_{j}]\subsetneq[r,\xi_{k}]\}\). On this event we have that \((i,j)\prec(j,k)\). In this case \((P4)\) states that \(p(i,j)=1\) implies that \(p((i,i),(j,k))=1\) which in turn implies \(p(i,k)=1=F(\xi_{i},U_{i},\xi_{k},U_{k},U_{ik})\). The same works if we replace \(1\) by \(-1\) by \((P1)\) and \((F1)\). Lastly, let us comment on why there is no consistency relation for \(F\) which is derived from \((P2)\). To apply \((P2)\), we need two vertices of our dendritic system \(x,y\in\mathcal{D}\) which satisfy \(x\prec y\). This will never be the case for leaves of \(\mathcal{D}\) and \(F\) is only used to determine the planar relation between leaves. Finally, we prove a lemma that we skipped earlier. Assume we work on the probability space \((\Omega,\mathcal{A},\mathbb{P})\). **Lemma A.6**.: _Assume we have a jointly exchangeable, ergodic array \(\{\xi_{i},\xi_{j},\zeta_{ij}\}_{i,j\in\mathbb{N};i\neq j}\) of random variables taking values in \(S_{1}\times S_{1}\times S_{2}\) where \(S_{1}\) and \(S_{2}\) are some Borel spaces. We can enlarge the probability space so that there exists an array \(\{U_{i},U_{ij}\}_{i,j\in\mathbb{N};i<j}\) of \(i.i.d.\) uniform \([0,1]\) random variables which is independent of \(\{\xi_{i}\}_{i\in\mathbb{N}}\). Set \(U_{ji}=U_{ij}\) for \(i<j\). We then have_ \[\{\xi_{i},\xi_{j},\zeta_{ij}\}_{i,j\in\mathbb{N};i\neq j}\overset{d}{=}\{F( \xi_{i},U_{i},\xi_{j},U_{j},U_{ij})\}_{i,j\in\mathbb{N};i\neq j},\] _for some measurable function \(F:S_{1}\times[0,1]\times S_{1}\times[0,1]\times[0,1]\to S_{2}\)._ Proof.: Without loss of generality we can assume that \(S_{1}=S_{2}=[0,1]\). We use the Aldous-Hoover-Kallenberg theory of exchangeable arrays. The representation theorem [18, Theorem 7.22] for arrays of exchangeable random variables yields the existence of a measureable function \(G^{\prime}:[0,1]^{4}\to[0,1]^{3}\) such that \[\{(\xi_{i},\xi_{j},\zeta_{ij})\}_{i,j\in\mathbb{N},i\neq j}\overset{d}{=}\{G ^{\prime}(V,V_{i},V_{j},V_{ij})\}_{i,j\in\mathbb{N},i\neq j},\] (A.2) where \(V,(V_{i})_{i\in\mathbb{N}},(V_{ij})_{i,j\in\mathbb{N},i<j}\) are independent uniform random variables on \([0,1]\) and we set \(V_{ij}=V_{ji}\). Recall our assumption of ergodicity of the exchangeable array \(\{(\xi_{i},\xi_{j},\zeta_{ij})\}\). [18, Lemma 7.35] now yields that our representation (A.2) does not depend on \(V\). More precisely, there is a measurable function \(G:[0,1]^{3}\to[0,1]^{3}\) such that \[\{(\xi_{i},\xi_{j},\zeta_{ij})\}_{i,j\in\mathbb{N},i\neq j}\stackrel{{ d}}{{=}}\{G(V_{i},V_{j},V_{ij})\}_{i,j\in\mathbb{N},i\neq j}.\] (A.3) We can work in a probability space where (A.3) is a \(\mathbb{P}\)-almost sure equality. We now condition on \(\{\xi_{i}\}_{i\in\mathbb{N}}=\{x_{i}\}_{i\in\mathbb{N}}\) for some sequence in \([0,1]\). Choose a family of regular conditional distributions \(\mathbb{P}^{x}\) under which the \(\{V_{i},V_{ij}\}\) are still all independent of each other, \(\{V_{ij}\}\) is still uniformly distributed but \(\{V_{i}\}\) are not necessarily uniformly distributed. For \(t\in[0,1]\), consider \[\Phi_{x_{i}}(t)=\mathbb{P}^{x}(V_{i}\leq t).\] Observe that \(\Phi_{x_{i}}(t)\) and \(\Phi_{x_{i},x_{j}}(t)\) depend measurably on \(x_{i}\) and \(x_{j}\) for any \(i,j\). Enlarge the probability space again so that there is \(\{U_{i}\}_{i\in\mathbb{N};i}\), an array of \(i.i.d.\) uniform \([0,1]\) random variables. We then have the distributional equality under \(\mathbb{P}^{x}\), \[\{\zeta_{ij}\}_{i,j\in\mathbb{N},i\neq j}\stackrel{{ d}}{{=}}\{G_{3}( \Phi_{x_{i}}^{-1}(U_{i}),\Phi_{x_{j}}^{-1}(U_{j}),V_{ij}\}_{i,j\in\mathbb{N},i \neq j}.\] Here \(G_{3}\) is the third coordinate of \(G\), i.e. \(G(\cdot)=(G_{1}(\cdot),G_{2}(\cdot),G_{3}(\cdot))\in[0,1]^{3}\). This means that there exists a measurable function \(F:[0,1]^{5}\to[0,1]\) so that \[\{\zeta_{ij}\}_{i,j\in\mathbb{N},i\neq j}\stackrel{{ d}}{{=}}\{F(x _{i},U_{i},x_{j},U_{j},U_{ij})\}_{i,j\in\mathbb{N},i\neq j}.\] Because we are using the same random variables \((U_{i},U_{ij})_{i,j}\) regardless of the choice of \(\{x_{i}\}_{i\in\mathbb{N}}\), we have that \(\{U_{i},U_{ij}\}_{i,j\in\mathbb{N},i\neq j}\) is independent of \(\{\xi_{i},\xi_{j}\}_{i,j\in\mathbb{N},i\neq j}\). **Acknowledgements** The author would like to thank his PhD-supervisor Matthias Winkel for many useful discussions and for reading countless drafts. He would also like to acknowledge the support of EPSRC grant EP/W523781/1.
2309.05028
SC-NeRF: Self-Correcting Neural Radiance Field with Sparse Views
In recent studies, the generalization of neural radiance fields for novel view synthesis task has been widely explored. However, existing methods are limited to objects and indoor scenes. In this work, we extend the generalization task to outdoor scenes, trained only on object-level datasets. This approach presents two challenges. Firstly, the significant distributional shift between training and testing scenes leads to black artifacts in rendering results. Secondly, viewpoint changes in outdoor scenes cause ghosting or missing regions in rendered images. To address these challenges, we propose a geometric correction module and an appearance correction module based on multi-head attention mechanisms. We normalize rendered depth and combine it with light direction as query in the attention mechanism. Our network effectively corrects varying scene structures and geometric features in outdoor scenes, generalizing well from object-level to unseen outdoor scenes. Additionally, we use appearance correction module to correct appearance features, preventing rendering artifacts like blank borders and ghosting due to viewpoint changes. By combining these modules, our approach successfully tackles the challenges of outdoor scene generalization, producing high-quality rendering results. When evaluated on four datasets (Blender, DTU, LLFF, Spaces), our network outperforms previous methods. Notably, compared to MVSNeRF, our network improves average PSNR from 19.369 to 25.989, SSIM from 0.838 to 0.889, and reduces LPIPS from 0.265 to 0.224 on Spaces outdoor scenes.
Liang Song, Guangming Wang, Jiuming Liu, Zhenyang Fu, Yanzi Miao, Hesheng
2023-09-10T13:55:41Z
http://arxiv.org/abs/2309.05028v1
# SC-NeRF: Self-Correcting Neural Radiance Field with Sparse Views ###### Abstract In recent studies, the generalization of neural radiance fields for novel view synthesis task has been widely explored. However, existing methods are limited to objects and indoor scenes. In this work, we extend the generalization task to outdoor scenes, trained only on object-level datasets. This approach presents two challenges. Firstly, the significant distributional shift between training and testing scenes leads to black artifacts in rendering results. Secondly, viewpoint changes in outdoor scenes cause ghosting or missing regions in rendered images. To address these challenges, we propose a geometric correction module and an appearance correction module based on multi-head attention mechanisms. We normalize rendered depth and combine it with light direction as query in the attention mechanism. Our network effectively corrects varying scene structures and geometric features in outdoor scenes, generalizing well from object-level to unseen outdoor scenes. Additionally, we use appearance correction module to correct appearance features, preventing rendering artifacts like blank borders and ghosting due to viewpoint changes. By combining these modules, our approach successfully tackles the challenges of outdoor scene generalization, producing high-quality rendering results. When evaluated on four datasets (Blender, DTU, LLFF, Spaces), our network outperforms previous methods. Notably, compared to MVSNeRF, our network improves average PSNR from 19.369 to 25.989, SSIM from 0.838 to 0.889, and reduces LPIPS from 0.265 to 0.224 on Spaces outdoor scenes. Novel view synthesis, Generalization, Multi-view stereo, Multi-head attention. ## I Introduction Novel view synthesis (NVS) is a promising and long-standing problem that plays a fundamental role in both the computer vision [39, 42], robotic [40] and graphics [1]. NVS aims to capture visual information from a sparse set of reference views to render an unseen target view. Early methods [2, 3] produce a target view by interpolating in the ray [2] or pixel plane [3]. Subsequent works [4, 38] have exploited dense input views or geometric constraints, such as epipolar consistency [4], for depth-aware warping of the input views [38]. However, these methods are susceptible to artifacts caused by occlusion, the density of input views, and inaccurate geometry. To solve this problem, the multiplane image (MPI) approachs [5, 6] offer real-time rendering and generalization capabilities by representing the scene using a set of parallel planes derived from several input images. Nevertheless, when the perspective difference between the input view and target view is significant, there may be occurrences of edge rendering overlap [37]. Recently, Neural radiance fields (NeRF) [7] and subsequent works [8, 9] have the strong ability to produce realistic new view synthesis results. However, there are two main drawbacks: 1) It requires densely captured images for each scene. 2) It needs to be trained from scratch to overfit the new scene, with no generalization to unknown scenes. To address the aforementioned shortcomings of NeRF, many methods [11, 12, 13, 14] usually build a large composite dataset to fit the network to different scenarios, including object, indoor, and outdoor scenarios. However, recent works [10, 12] can not effectively generalize to outdoor scenes when trained on only object-level datasets. When MVSNeRF [10] generalizes to an outdoor scene, black artifacts appear in the sky or border, as shown in the blue box in Figure 1. This is because the space scale and structure between the training scene and the test scene is extremely different and there maybe also exists reflective material in the outdoor scene. When the perspective gap between the input view and the target view increases, the result rendered by the IBRNet [12] method will appear blank at the boundary, as shown in the red box in Figure 1. To solve these problems, we propose SC-NeRF, a novel approach that can be well generalized to different scenes by reconstructing radiation fields from only three unstructured multi-view input images. The SC-NeRF is trained only in an Figure 1: Comparison with previous methods IBRNet [12] and MVSNeRF [10] on Spaces. We train both their and our networks on DTU and generalize to outdoor scenes in Spaces. The left, middle, and right images respectively show the rendering result of [12, 10], and ours. object-level dataset, while it can be generalized to a variety of different scenarios, especially outdoor scenarios. Due to the strong generalization ability, the SC-NeRF avoids time-consuming per-scene optimization and can directly regress realistic images from novel viewpoints of outdoor scenes. To be specific, a low-resolution 3D geometric cost volume is constructed from sparse multi-view input images. This geometric cost volume can provide continuous geometric priors, when there is a non-covisual region between the input and target perspectives. In order to solve the problem of artifacts in the rendered outdoor scene, the rendered features are corrected in terms of appearance and geometry. Specifically, a multi-head attention mechanism is leveraged to correct rendered characteristics using direction embedding as query, geometric or appearance features as key, and rendered features as value. Although it alleviates the shadow problem in the distance to some extent, it will cause shadow transfer in the render view. This is mainly because using only the direction as the query can not effectively get complete structure information of the scene. Therefore, we combine the rendered depth value with direction embedding as query, effectively solving the shadow transfer problem. Our approach is completely differentiable, which can be trained in end-to-end manner from sparse view inputs. Our experiments show that with just three input views, our network can synthesize photo-realistic images on DTU [15], Blender [7], LLFF [16], Spaces [6]. Overall, our contributions are as follows: * We propose a novel end-to-end network for synthesizing realistic images from sparse input views. We firstly propose a geometry correction module based on multi-head attention. It can address the issue of black artifacts in rendered views, caused by inconsistencies in scale and structure between training and testing scenes. * Building on the geometry correction module, we also design an appearance correction module to alleviate boundary blank and ghosting artifacts in rendered views caused by relatively large viewpoint changes. * We validate the effectiveness of our model on four datasets, including Blender, LLFF, DTU, and Spaces. Notably, on the outdoor scenes in the Spaces dataset, our model outperforms MVSNeRF by 34.17% in terms of PSNR, and IBRNet by 19.9%. ## II Related Work ### _Novel View Synthesis via NeRF_ In recent years, various neural scene representations have been proposed to implement view synthesis [17, 18, 7, 41, 43]. NeRF [7] has achieved very impressive results in novel view synthesis by optimizing the 5D neural radiation field of a scene. However, it must be optimized for each new scenario, which takes hours or days to converge. There are some methods proposed to extend NeRF's generalization capabilities [10, 11, 12, 13, 19]. GRF [11] projects the learned local image features onto three-dimensional points to obtain a general and rich point representation. MVSNeRF [10] utilizes plane sweep cost volume for neural radiation field reconstruction. NeuRay [14] enables the construction of radiation fields to focus on visible image features by modeling the visibility of 3D points in the input view. However, none of these methods consider how to train the network only on an object-level dataset and be generalized to outdoor scenes. ### _Multi-View Stereo_ Multi-view stereo (MVS) is a core problem in the field of computer vision. Multi-view stereo matching reconstruction can be regarded as the inverse process of taking pictures of a certain scene. Its purpose is to restore the real 3D scene through images taken from different viewpoints. A large number of traditional methods [20, 21, 22, 23, 24] use hand-crafted similarity metrics and regularization methods to calculate dense correspondence of scenes. These methods can achieve good results on non-Lambertian surfaces and scenes without weakly textured regions. However, the artificially designed similarity metrics become unreliable in weakly textured regions, thus leading to incomplete reconstruction results. Recently, deep learning techniques [25, 26, 27, 28] have been introduced. Among these, MVSNet [25] applies a 3D CNN for depth estimation on the plane scan cost of the reference view, and achieves high-quality 3D reconstruction. Subsequent works [26, 27, 28] extend this technique to recurrent planar sweeps [26], point-based densification [27], and cascaded cost volume [28] for improving the effect of reconstruction. We follow their ideas to build a geometrically consistent cost volume. This ensures that the network meets the consistency of multiple views, so that the network can focus on information from different views and also learn geometric priors when rendering the novel views. ### _Transformer in NeRF_ Recently, there have been some attempts to incorporate the transformer [29] architecture into the NeRF model. IBRNet [12] proposes ray transformers that dynamically correlate appearance information from multiple source views. NeFormer [30] proposes to use transformers to aggregate features between different views on the ray and learn radiance fields from the aggregated features. GNT [31] directly regresses colors to synthesize views without need for NeRF's volumetric rendering. GPNR [32] improves generalization by using several stacked "patch-based" transformers to aggregate global features. Different from the above methods, our method mainly uses transformers to correct the features for better generalization to different scenes. Specifically, we use the transformer to correct the geometric and appearance features, which make better use of information from different perspectives and improve the generalization ability of network and geometric reconstruction. ## III Method Given several sparse source views, our method uses volume rendering to synthesize a target view in a new camera pose. The core problem is how to obtain the density and colors of the continuous space by using the information from the input views, and how to make this representation generalize to other scenes, especially outdoor scenes. The overview of our SC-NeRF is shown in Fig. 2. For the sparse input \(M\) views \((M=3)\), we first warp the extracted image features into the reference perspective and construct a geometric encoding volume using 3DCNN (Sec. III-A). Then, we obtain the final radiance features through the geometric feature correction module and the appearance feature correction module based on the multi-head attention mechanism (Sec. III-B). Finally, we use an multi-layer perceptron (MLP) to regress the volume density and RGB radiance from the corrected radiance features. These volume properties are passed through the volume rendering formula to obtain the final rendered images (Sec. III-C). ### _Geometry Volume Encoding_ Inspired by the recent MVSNeRF [10], we construct the encoding volume V at the reference view, allowing for geometry-aware scene understanding. First of all, a 2D CNN \(G_{1}\) is used to extract the local appearance features of the input images. In our network, each input image \(I_{i}\in R^{H_{i}\times W_{i}\times 3}\) is converted into a 2D feature map \(F_{i}\in R^{H_{i}/4\times W_{i}/4\times C_{1}}\) by a down-sampled convolution operation: \[F_{i}=G_{1}(I_{i}), \tag{1}\] where \(H_{i}\) and \(W_{i}\) are the image height and width, and \(C_{1}\) is the number of image feature channels. Then, we transform the features of the source view into the reference view by the homographic warping operation. Given the camera intrinsic \([K]\) and extrinsic parameters \([R,T]\), we use the homographic warping: \[H_{i}(z)=K_{i}\cdot(R_{i}\cdot R_{r}^{T}+\frac{(t_{r}-t_{i})\cdot n_{r}}{z}) \cdot K_{r}^{-1}, \tag{2}\] where \(H_{i}\) is the matrix warping from the view \(i\) to the reference view \(r\) at depth \(z\). \(K_{i}\) and \(K_{r}\) are the intrinsic matrices. \(n_{r}\) denotes the unit normal vector. \(R\) and \(t\) are the camera rotation and translation matrices. Each feature map \(F_{i}\) can be warped to the reference view by: \[F_{i,z}(u,v)=Pad(F_{i})(H_{i}(z)\left[u,v,1\right]^{T}), \tag{3}\] where \(F_{i,z}\) is the warped feature map at depth \(z\), and \((u,v)\) represents a pixel location in the reference view. \(Pad\) indicates the feature image edge padding operation. In this work, we parameterize (u, v, z) using the normalized device coordinate (NDC) at the reference view. We leverage the variance-based method [10] to compute the cost from the warped feature maps on the \(D\) sweeping planes. In particular, for each position \(P\), its cost feature vector is computed by: \[P(u,v,z)=Var(F_{i,z}(u,v)), \tag{4}\] where \(Var\) is the variance operation. Finally, we use a 3D CNN network \(G_{2}\) with a U-Net structure to encode the cost volume mentioned above. This process is expressed by: \[V=G_{2}(P), \tag{5}\] where \(V\) is encoding geometry volume. This encoded volume contains the geometry feature of the scene, and is later continuously interpolated and converted into volume density. ### _Geometric and Appearance Features Rectification_ Black artifacts appear in the rendering result of outdoor view, which are caused by the following reasons: 1) The spatial range of the object-level training set is much smaller than that of the outdoor test scenes. 2) There are some non-Lambertian reflective objects in the outdoor scene, which lead to the deviation of the feature. At the same time, when the viewpoint changes drastically, there may be significant differences between multiple viewpoints, resulting in incomplete coverage of details and textures on the object surface in a single viewpoint image. These uncovered details and textures Figure 2: Overview of SC-NeRF. We first extract image features and warp them onto a plane sweep,then use 3DCNN to build a geometric Volume. Second, we use the geometric feature rectification and appearance feature rectification modules to obtain the final radiation features. Finally, we use an MLP to obtain the volume density and RGB radiation values of any sampling point in space, and use volume rendering to obtain the rendered view. can lead to visual artifacts, such as ghosting and boundary blanking. To overcome these challenges, we design geometric and appearance feature rectification modules based on multi-head attention mechanisms. #### Iii-B1 Geometric feature Rectification Given an arbitrary 3D location \(x\), an MLP \(M_{1}\) be used to obtain radiance features \(F_{r}\), \[F_{r}=M_{1}(E(x),s), \tag{6}\] where \(s\) is the neural feature trilinearly interpolated from the volume \(V\) at the location \(x\). \(E(.)\) indicates embedding operation. Then, the corresponding volume density \(\sigma\) is regressed by an MLP \(M_{2}\), \[\sigma=M_{2}(F_{r}). \tag{7}\] We obtain the rendered depth \(\hat{D}\) of the pixel corresponding to the \(k\) sampled points based on the volume rendering formula: \[\hat{D}=\sum_{k=1}^{N}T_{k}(1-exp(-\sigma_{k}))z_{k}, \tag{8}\] \[T_{k}=exp(-\sum_{j=1}^{k-1}\sigma_{j}). \tag{9}\] Our geometric feature correction module is shown in Fig.3 (a). First, we normalize the rendered depth and get depth embedding. At the same time, we do the same operation with the ray direction. An MLP \(M_{3}\) is used to process depth embedding and direction embedding to obtain query values \(Q\): \[Q=M_{3}(E(\hat{D})\oplus E(d)), \tag{10}\] where \(\oplus\) is concat operation, d is direction vector. It is worth noting that if we only use direction embedding as the query value, the rendered view will have black shadows and white holes. Then, the radiance features \(F_{r}\) serve as the value \(V\) and the volume features \(s\) as the key \(K\). The matching matrix of attention is calculated in the sampling point channel rather than the feature channel. The reason for doing this is to allow the model to independently learn which depth sample points contribute more to the rendering. This enables assigning higher weights to these depth sample points. By focusing on assigning weights to individual points rather than features for each sample point, we can better capture the informative points and improve the rendering quality. Therefore, we choose to pay attention in the sampling point dimension and calculate the attention weight for each sampling point to overcome the above problem. Finally, we generate corrected features \(F_{c}\) through multiple attention mechanisms: \[F_{c}=Multihead(Q,K,V), \tag{11}\] \[Multihead(Q,K,V)=(head_{1}\oplus...\oplus head_{h})W^{O} \tag{12}\] \[head_{i}=Attention(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}), \tag{13}\] \[Attention(Q,K,V)=softmax(\frac{QK^{T}}{\sqrt{d_{k}}})V, \tag{14}\] where \(QW_{i}^{Q}\) represents the weight matrix of the query (\(Q\)) for the i-th attention head, \(KW_{i}^{K}\) represents the weight matrix of the key (\(K\)) for the i-th attention head, \(VW_{i}^{V}\) represents the weight matrix of the value (\(V\)) for the i-th attention head. is the feature matrix, \(\oplus\) is contact operation. This corrected feature can adapt to the change of scene space. #### Iii-B2 Appearance feature Rectification Considering that large changes in viewpoint can lead to boundary blanking and artifacts in rendering, we also correct the appearance features through the multi-head attention mechanism. We reproject the pixel feature \(F_{i}\) back onto the sample points along the light. We preliminarily regard pixel features as the appearance features of each position in the three-dimensional space along the direction of light. With this formula, each 3D point can theoretically have the corresponding 2D appearance feature. Specifically, given a three-dimensional point \(x\), the observed 2D image \(I_{i}\) with camera intrinsics \(K\) and camera pose \(\xi\), the corresponding 2D appearance feature \(F_{a}\) can be retrieved through the following reprojection operation: \[F_{a}=\pi(\left\{\left(F_{i}\oplus I_{i}\right),\xi,K\right\},\left\{x,y,z \right\}), \tag{15}\] where the function \(\pi(.)\) follows the principle of multi-view geometry [33]. If the point is inside the image, we simply select the nearest pixel using bilinear interpolation and index its features for the 3D point. If the point is outside the image, we assign a zero vector to the 3D point, which means there is no information observed. Fig. 3: Geometry and appearance rectification modules. a) For the appearance features rectification module, direction embedding and depth embedding be used as query, sampled volume features \(F_{g}\) are as key, the radiance features \(F_{r}\) is taken as a value. b) For the appearance features rectification module, we also use the direction embedding and depth embedding as the query, the sampled image feature \(F_{a}\) as the key, and the corrected feature \(F_{c}\) as the value, so as to build a multi-head attention mechanism. As in the geometric rectification section, we also use direction embedding and depth embedding as query values \(Q\). In order to make better use of the appearance information of each source perspective, we calculate the mean of the appearance features of the three samples. We use the mean appearance feature as the key \(K\) and the correction feature \(F_{c}\) as the value \(V\). We use formula (11) to obtain the final corrected radiance feature \(F\). This process allows our network to use the input source view appearance feature to correct the appearance feature \(F_{a}\) of the render view. \(F\) can effectively adapt to both the appearance of the scene and the geometry of the scene. Finally we use an MLP \(M_{4}\) decoding structure similar to NeRF to get radiance value \(c\) : \[c=M_{4}(F,E(\hat{D}),E(d)). \tag{16}\] ### _Rendering and Training_ The method described in the previous sections generates corrected radiance \(c\) and density \(\sigma\) values. To render the color \(r\hat{gb}\) of ray through the scene, we first query the color and density of N samples on the ray, and then accumulate the color and density along them: \[\hat{rgb}=\sum_{k=1}^{N}T_{k}(1-exp(-\sigma_{k}))c_{k}, \tag{17}\] \[T_{k}=exp(-\sum_{j=1}^{k-1}\sigma_{j}), \tag{18}\] where \(\hat{rgb}\) is the final pixel color output, and \(T_{k}\) represents the volume transmittance. This volume rendering is completely differentiable, so SC-NeRF can regress the final pixel color at the target view point from the sparse input views in an end-to-end manner. We use the \(\ell_{2}\) norm of the rendered pixel versus the real pixel as a loss. \[L=\left\|rgb-1.0,0.0 #### Iii-A2 Implement Details The dimension of image features \(F_{i}\) extracted by 2D CNN is set as 32. The depth sampling planes for the homographic warping operation are 128. The volume feature \(V\) dimension of 3D CNN encoding is set as 8. The number of sampling points for each ray is 128. The dimensions of position embedding, orientation embedding \(E(\hat{d})\) and depth embedding \(E(\hat{D})\) are 63, 33 and 11, respectively. The number of heads of multi-head attention used for appearance correction and geometry correction is 4. All training and evaluation experiments are conducted on a single RTX 3090 GPU with PyTorch1.10.1. We randomly select 1024 pixels from a novel viewpoint as a batch and apply the Adam optimizer with an initial learning rate of 0.0005. ### _Comparison Results_ We compare with three recent NeRF-based works, PixelNeRF [13], IBRNet [12], and MVSNeRF [10] that also aim to improve generalization ability of NeRF. We input the three source views to retrain the three models on the DTU data for the fair comparison. We choose four groups of images in each scene from the three datasets [7, 16, 25] for testing, and finally evaluate the performance with the mean PSNR, SSIM, and LPIPS. We show the quantitative results in Tab. I. To further compare the generalization capability of our network in outdoor environments, we compare ours with the MVSNeRF and IBRNet in Tab. II. For a more intuitive comparison of experimental effects, we show visualization comparison in Fig. Fig. 4: Rendering quality comparison at object level and indoor data. We show the visual comparison results of our method and other NeRF-based generalization methods [10, 12, 13] on 3 different test sets [7, 15, 16]. For each data set, we select two sets of scenarios to show. From the red circle, it can be observed that PixelNeRF [13] has a poorer rendering effect. From the blue circle, it can be seen that IBRNet [12] lacks sufficient detail in handling edge details. From the green circle, it can be noticed that MVSNeRF [10] is slightly inferior in rendering background details. Fig. 5: Three levels of viewpoint settings with increasing difficulty. We build three levels of difficulty rendering based on the gap between the source view and the target view. The orange ones represent the input 3 source views, and the blue ones represent the target view to be rendered. We call the different difficulty levels as ”small”, ”medium” and ”large” respectively. 4 and Fig. 6. We also present the comparative results of the depth estimates at DTU dataset in Tab. III. #### Iv-B1 Comparisons of view synthesis at synthetic and indoor data Quantitative results in Tab. I show that our SC-NeRF performs the best in all datasets. Although our model is only trained on DTU, it can be well generalized to other two datasets with highly different distributions of scenes and views. On the Blender, DTU, and LLFF datasets, the PSNR evaluation values are higher than those of PixelNeRF by 16.363, 7.524, and 10.847 respectively. They are also higher than IBRNet by 1.313, 0.794, and 0.297 respectively, and higher than MVSNeRF by 0.133, 0.204, and 0.157 respectively. As shown in Fig. 4, PixelNeRF has obvious blurring and artifacts when generalized to other scenes. This is because they only consider introducing 2D image features into the NeRF model, but don't consider the scene geometry. IBRNet has achieved well generalization results due to the introduction of rays transformer, but some artifacts still appear in the details. The view rendered by MVSNeRF tends to contain artifacts around the background because its cost volume is built for a specific reference view where the camera setback may not cover the target view sufficiently. The main reason for the superiority of our model is that we not only consider the geometric features of the scene, but also its appearance features, and use the rectification-based strategy to make the two features mutually optimized. This correction mechanism can improve the rendering effect in indoor scenes for better generalization. #### Iv-B2 Comparisons of view synthesis at outdoor data To evaluate the ability to generalize to outdoor scenes, we test IBRNet, MVSNeRF, and our model on the Spaces dataset with three difficulty level settings. From Tab. II, it can be clearly found that the generalization to outdoor scenes has a significant decline in the performance of their method compared with the test scene in the object level and indoor scenes. This can be seen in the degradation of their method's performance when generalized to outdoor scenarios. However, our method can achieve the best performance. On the three difficulty levels (small, medium, large) of the Spaces dataset, the PSNR obtained from testing is higher than IBRNet by 20%, 18.9%, and 20.1% respectively. It is also higher than MVSNeRF by 34.2%, 30%, and 25.4% respectively. From Fig. 6, we can analyze the reasons for the bad generalization of their two methods to outdoor scenes. Due to the increasing gap between the input view of the scene and the rendered target view, it can be found that IBRNet can not render the non-common view regions. Therefore, the rendered view will have a rendering blank area in the border outline. From the second line of Fig. 6, it can be found that MVSNeRF will have artifacts Fig. 6: Rendering quality comparison at outdoor data. We show the results of a visual comparison of our method with two state-of-the-art methods [10, 12] on the Space dataset [6] for three settings of different difficulty levels. From the red box, it can be observed that as the viewing angle increases, IBRNet [12] produces blank spaces at the edges of the rendered image. From the blue box, it can be seen that MVSNeRF [10] exhibits black pseudo-shadows in areas such as the sky. Our method can effectively solve these issues in outdoor scenes. in the sky. This is because of a huge difference in scene depth between the training and testing set. Differently, we can effectively alleviate the depth inconsistency between the training and testing set by normalizing the rendered depth and embedding it as a part of the query value. At the same time, we use the appearance correction strategy to effectively use the appearance characteristics of different views to overcome the problem of rendering loss in non-common view areas. It can be seen from both qualitative results and quantitative metrics that our model outperforms the two state-of-the-art methods on the above comparison in outdoor scene rendering performance. ### _Comparisons of Depth Reconstruction_ In order to evaluate whether the model effectively learns the ability to model the geometry of the scene. We reconstruct the depth as [9] by weighting the depth values of the sampling points on the ray and the volume density. We compare our approach with three NeRF-based methods [10, 12, 13]. It can be seen from the Tab. III that our method achieves the best results of the estimation of the depth of the new view. it can be observed that the absolute error obtained from the tests is reduced by 90.8% compared to PixelNeRF, by 98% compared to IBRNet, and by 37% compared to MVSNeRF. Since only the local features of the image are used and the geometric structure of the scene is not considered, the rendering depth of PixelNeRF has 20 times larger errors. It is worth noting that although IBRNet can render the target view well and has strong generalization ability, it does not learn the 3D model of the scene in essence, but only an interpolation synthesis in appearance. So it suffers from extremely poor depth estimations. Due to the geometric correction strategy adopted by our method, our method can outperform MVSNeRF's depth estimation metrics. ### _Ablations and Analysis_ Tab. IV and Fig. 7 summarize the quantitative and qualitative results of SC-NeRF for different architecture choices at different difficulty levels on the Spaces dataset. We take MVSNeRF as our baseline. We will only add the appearance correction module with orientation embedding as query value on the baseline called "Appearance-V". "Appearance-VD" means adding an appearance correction module with orientation embedding and rendering depth embedding as query values. Similarly, for the geometry correction module, we distinguish them by different query values, namely "Geometry-V" and "Geometry-VD". From Tab. IV, we find that when only using direction embedding as the query, the geometric rectification will make the indicators decrease, while the appearance rectification can effectively improve the PSNR. This shows that direction embedding as a query can effectively correct appearance characteristics. When we combine depth embedding with direction embedding as a multi-head attention query, we find that it can greatly improve the performance of geometric rectification, and also promote the appearance rectification. This shows that providing geometric depth information of the scene can effectively improve the performance of novel view rendering. It can be seen from Fig. 7, when only the direction is used as the query value, white holes will appear in appearance rectification, and black shadow transfer will appear in geometric rectification. However, when the depth embedding is also used as the query value, it can solve the two problems of the above. Only using the appearance rectification module can improve the PSNR metrics very well, but the structural index SSIM will be lower than the Baseline. On the contrary, if only the geometric correction module is used, the SSIM and LPIPS indicators can be improved very well. By comparing the sequence of geometric correction and appearance correction, we can find that the feature planning Figure 7: Qualitative ablation study. We show the visualization results of adding geometry correction module and appearance correction module on Baseline. We mark the area of our concern with a box to highlight problems such as rendering holes and artifact transfer. and correction before the appearance correction of the feature can obtain better rendering performance. Therefore, our final model firstly corrects geometric features, and then appearance features. ## V Conclusions We propose a new generalizable approach for neural rendering. It provides a more practical neural rendering technique using a small number of images as input. Through the proposed geometric feature correction module and appearance feature correction module, our network can be trained on only object-level scenes to effectively generalize to outdoor scenes. We show that our rectification strategy can provide valuable geometry and appearance cues, leading to state-of-the-art performance under several challenging settings on four benchmark datasets.
2309.04450
Density of $3$-critical signed graphs
We say that a signed graph is $k$-critical if it is not $k$-colorable but every one of its proper subgraphs is $k$-colorable. Using the definition of colorability due to Naserasr, Wang, and Zhu that extends the notion of circular colorability, we prove that every $3$-critical signed graph on $n$ vertices has at least $\frac{3n-1}{2}$ edges, and that this bound is asymptotically tight. It follows that every signed planar or projective-planar graph of girth at least $6$ is (circular) $3$-colorable, and for the projective-planar case, this girth condition is best possible. To prove our main result, we reformulate it in terms of the existence of a homomorphism to the signed graph $C_{3}^*$, which is the positive triangle augmented with a negative loop on each vertex.
Laurent Beaudou, Penny Haxell, Kathryn Nurse, Sagnik Sen, Zhouningxin Wang
2023-09-08T17:15:43Z
http://arxiv.org/abs/2309.04450v1
# Density of \(3\)-critical signed graphs ###### Abstract We say that a signed graph is _\(k\)-critical_ if it is not \(k\)-colorable but every one of its proper subgraphs is \(k\)-colorable. Using the definition of colorability due to Naserasr, Wang, and Zhu [20] that extends the notion of circular colorability, we prove that every \(3\)-critical signed graph on \(n\) vertices has at least \(\frac{3n-1}{2}\) edges, and that this bound is asymptotically tight. It follows that every signed planar or projective-planar graph of girth at least \(6\) is (circular) \(3\)-colorable, and for the projective-planar case, this girth condition is best possible. To prove our main result, we reformulate it in terms of the existence of a homomorphism to the signed graph \(C_{3}^{*}\), which is the positive triangle augmented with a negative loop on each vertex. **Keywords:** Homomorphism, critical signed graphs, edge-density, circular coloring. ## 1 Introduction For a graph property \(P\), we say that a graph \(G\) is _critical_ for \(P\) if every proper subgraph of \(G\) satisfies \(P\) but \(G\) itself does not. Thus in particular, every graph that is not \(k\)-colorable contains a critical subgraph for \(k\)-colorability, and hence the study of critical graphs for coloring has been of key importance in the study of chromatic numbers. In 2014, Kostochka and Yancey [10, 11] proved the following precise lower bound on the density of graphs that are critical for \(3\)-colorability. Note that the term \(4\)-_critical_ is used instead of "critical for \(3\)-colorability" in [10, 11] but we avoid it here for consistency. **Theorem 1.1**.: [10, 11] _If a graph \(G\) is critical for \(3\)-colorability, then \(|E(G)|\geq\frac{5|V(G)|-2}{3}\)._ Their short proof of this theorem in [11], coupled with a standard argument about the density of planar graphs, provided a new and elegant proof of the classical theorem of Grotzsch that every triangle-free planar graph is \(3\)-colorable. In addition, Theorem 1.1 resolved the first open case of a well-known and decades-old conjecture of Ore [21] on the density of critical graphs for \(k\)-colorability for every \(k\). In [10], Kostochka and Yancey proved a corresponding density bound for general \(k\), thus solving Ore's Conjecture exactly or almost exactly in every case. Our aim in this paper is to address the analogous density question in the setting of signed graphs. A _signed graph_\((G,\sigma)\) is a graph \(G\) together with a signature \(\sigma:E(G)\to\{+,-\}\). Thus a graph \(G\) can be regarded as a signed graph \((G,+)\) with all edges being positive (i.e., assigned with \(+\)). One of the main notions distinguishing signed graphs from \(2\)-edge-colored graphs (whose edges are simply partitioned into two distinct types) is the operation of vertex switching. A _switching_ at a vertex \(v\) of a signed graph corresponds to multiplying the signs of all (non-loop) edges incident to \(v\) by \(-\). Thus the sign of a loop is invariant under switching. Two signed graphs are said to be _switching equivalent_ if one can be obtained from the other by a sequence of vertex switchings. Accordingly, meaningful parameters of signed graphs should be invariant under vertex switching. In the seminal paper [25], Zaslavsky introduced a natural definition of coloring of signed graphs with an even number of colors, half of which are positive, the other half negative. This notion has been extended to odd numbers by Macajova, Raspaud, and Skoviera [14] by introducing the color \(0\), which has a special status, and in a different and more symmetric way by Naserasr, Wang, and Zhu [20] who generalized the definition of circular coloring of graphs to the case of signed graphs. In this paper, we use the latter definition, which we now describe. In the graph setting, a _circular \(\frac{p}{q}\)-coloring_ of a graph \(G\) is a mapping \(f:V(G)\to\{0,1,\ldots,p-1\}\) such that for each edge \(uv\), \(q\leq|f(u)-f(v)|\leq p-q\). We can think of this as assigning to each vertex a color chosen from a circular arrangement of \(p\) colors, such that adjacent vertices receive colors that are at distance at least \(q\) on the circle. This well-studied concept refines the usual definition of coloring, coinciding with the definition of \(k\)-coloring when \(k=\frac{p}{q}\) is an integer. For signed graphs, given positive integers \(p,q\) with \(p\geq 2q\) and with \(p\) even, a _circular \(\frac{p}{q}\)-coloring_ of a signed graph \((G,\sigma)\) is a mapping \(\varphi:V(G)\to\{0,1,\ldots,p-1\}\) such that * for each positive edge \(uv\), \(q\leq|\varphi(u)-\varphi(v)|\leq p-q\) and * for each negative edge \(uv\), either \(|\varphi(u)-\varphi(v)|\leq\frac{p}{2}-q\) or \(|\varphi(u)-\varphi(v)|\geq\frac{p}{2}+q\). Intuitively, vertices adjacent via a positive edge should have colors at distance at least \(q\) on the \(p\)-cycle of colors as in the graph case, while for vertices \(u\) and \(v\) adjacent via a negative edge, the color of \(u\) should be at distance at least \(q\) from the _antipodal color_\(\frac{p}{2}+\varphi(v)\pmod{p}\) of \(v\). The _circular chromatic number_ of \((G,\sigma)\) is defined to be \[\chi_{c}(G,\sigma)=\min\Big{\{}\frac{p}{q}\mid(G,\sigma)\text{ admits a circular }\frac{p}{q}\text{-coloring}\Big{\}}.\] It is not difficult to see that these definitions are invariant under vertex switching. The notion of criticality then extends in a natural way, and in particular, we say that a signed graph \((G,\sigma)\) is _(circular) \(3\)-critical_ if \(\chi_{c}(G,\sigma)>3\) but \(\chi_{c}(H,\sigma)\leq 3\) for every proper subgraph \(H\) of \(G\). Our main result gives a signed graphs analogue of Theorem 1.1. **Theorem 1.2**.: _If \((G,\sigma)\) is a signed graph that is (circular) 3-critical, then_ \[|E(G)|\geq\frac{3|V(G)|-1}{2}.\] Moreover, we show in Section 4 that there is an infinite sequence of such signed graphs whose edge density is precisely \(\frac{3}{2}\). Hence our density bound is asymptotically tight. One essentially immediate corollary of Theorem 1.2 is the following result (see Subsection 1.1), which is analogous to the simple derivation of Grotzsch's theorem from Theorem 1.1. **Corollary 1.3**.: _Let \(G\) be a planar or projective-planar graph of girth at least \(6\). Then for every signature \(\sigma\) on \(G\), the signed graph \((G,\sigma)\) is (circular) \(3\)-colorable._ We show in Section 4 that this girth bound is best possible for the class of signed projective-planar graphs. For the planar case, it improves the previous best known bound of \(7\), proved in [18], but at present, we do not know whether the bound of \(6\) is tight. Constructions given in [20, 9] show that the correct bound cannot be smaller than \(5\). Homomorphisms.In fact, it will be more natural for us to formulate and prove Theorem 1.2 in terms of homomorphisms. Recall that in the graph setting, a _homomorphism_ of a graph \(G\) to a graph \(H\) is a vertex mapping \(\varphi:V(G)\to V(H)\) such that adjacency is preserved. This is a generalization of the definition of coloring, for example, any proper vertex \(k\)-coloring of \(G\) can be viewed as a homomorphism of \(G\) to the complete graph \(K_{k}\), and it is well known that a graph admits a circular \(\frac{2k+1}{k}\)-coloring if and only if it admits a homomorphism to the odd cycle \(C_{2k+1}\). For a given graph \(H\), a graph \(G\) is called _\(H\)-critical_ if \(G\) does not admit a homomorphism to \(H\), but every proper subgraph of \(G\) does. The definition of homomorphism extends to signed graphs \((G,\sigma)\) as follows. For a closed walk \(W\) in \(G\), the _sign_ of \(W\) is the product of the signs of all the edges in \(W\) (allowing repetition). A _homomorphism_ of \((G,\sigma)\) to another signed graph \((H,\pi)\) is a mapping of \(V(G)\) to \(V(H)\) such that both the adjacency and the signs of all closed walks are preserved. If there exists a homomorphism of \((G,\sigma)\) to \((H,\pi)\), then we write \((G,\sigma)\rightarrow(H,\pi)\). Again it is easy to see that the existence of a homomorphism is invariant under switching. The definition of criticality also extends in the natural way: given a signed graph \((H,\pi)\), a signed graph \((G,\sigma)\) is said to be _\((H,\pi)\)-critical_ if \((G,\sigma)\) does not admit a homomorphism to \((H,\pi)\), but every proper subgraph of \((G,\sigma)\) does. (More accurately, with certain girth conditions, see Definition 2.4.) Our main interest in this paper is in (circular) 3-coloring of signed graphs, which by definition is the case \(\ell=3\) of circular \(\frac{2\ell}{\ell-1}\)-coloring. This sequence of rationals turns out to be of special interest and importance, in that (analogously to the graph case) it is closely related to the existence of homomorphisms of signed graphs to signed cycles. We write \(C_{\ell}^{*}\) for a signed cycle of length \(\ell\) with an odd number of positive edges, together with negative loops at each vertex, see Figure 1. (Note that for fixed \(\ell\), all such cycles are switching-equivalent.) The following fact from [19] gives a characterization of circular \(\frac{2\ell}{\ell-1}\)-colorable signed graphs in terms of homomorphisms. **Proposition 1.4**.: [19] _A signed graph admits a circular \(\frac{2\ell}{\ell-1}\)-coloring if and only if it admits a homomorphism to \(C_{\ell}^{*}\)._ Proposition 1.4 implies the reformulation of Theorem 1.2 that will be our main focus from now on. **Theorem 1.5**.: _Every \(C_{3}^{*}\)-critical signed graph \((G,\sigma)\) satisfies \(|E(G)|\geq\frac{3|V(G)|-1}{2}\)._ The rest of the paper is organized as follows. In the next subsection, we give the (simple) proof of Corollary 1.3, and also outline how our main results relate to other previous work on graphs and signed graphs. While this material is not essential to the understanding of this paper, it provides further motivation and places our results in a broader context. In Section 2, we give preliminary background on signed graphs and homomorphisms of signed graphs. In particular, in Section 2.1, we provide some basic properties of \(C_{3}^{*}\)-critical signed graphs. In Section 3, we use the potential method employed in [10], adapted to our setting, to find more forbidden configurations in the minimum counterexample of our main theorem (Theorem 1.5) and use the discharging technique to complete the proof. In Section 4, we show that the edge density bound of Theorem 1.5 is asymptotically tight, and the girth bound of Corollary 1.3 for the class of signed projective-planar graphs is tight. We also pose some open questions there. ### Further Context As previously noted, results such as Theorem 1.1 and Theorem 1.5 have direct implications for colorings (or more generally homomorphisms) of graphs whose densities are bounded above, for example, graphs embedded on surfaces or graphs with large girth. We see this explicitly in the following proof of Corollary 1.3. _Proof of Corollary 1.3._ Let \(G=(V,E)\) be a planar or projective-planar graph of girth at least \(6\), and \(\sigma\) be a signature on \(G\). If \((G,\sigma)\) is not \(3\)-colorable, then by Proposition 1.4 we may assume without loss of generality that it is \(C_{3}^{*}\)-critical. Consider a plane or projective-plane embedding of \(G\), and let its set of faces be denoted by \(F\). Euler's formula Figure 1: Signed graphs \(C_{\ell}^{*}\). Solid blue edges are positive, dashed red edges are negative. states that \(|V|-|E|+|F|=2-g\) where \(g\) is the genus of the surface in which the graph is embedded (0 for the plane and 1 for the projective plane). The girth condition applied to the embedding gives that \(|E|\geq 3|F|\). Hence we obtain that \(|E|\leq\frac{3|V|-3(2-g)}{2}\), which contradicts Theorem 1.5. \(\Box\) In classical graph theory, one major motivation for proving lower bounds on the density of critical graphs was Jaeger's famous Circular Flow Conjecture [6, 7], which was recently disproved for \(k\geq 3\) by Han, Li, Wu, and Zhang [5]. However, its planar restriction remains open and can be stated as follows. **Conjecture 1.6**.: [6] _For any integer \(k\geq 1\), every planar graph of girth at least \(4k\) admits a homomorphism to \(C_{2k+1}\)._ For general \(k\), the best result is due to Lovasz, Thomassen, Wu, and Zhang [13], that the girth condition \(6k\) is sufficient. For small values of \(k\), tighter results are known. The case \(k=1\) is simply Grotzsch's theorem [4]; for \(k=2\), it has been verified by Dvorak and Postle [3] for the girth condition 10; for \(k=3\), the best-known girth bound of 16 has very recently been achieved by Postle and Smith-Roberge [22]. The same results for \(k=2,3\) were independently obtained by Cranston and Li [2] using the notion of flows. The results of [3] and [22] are each proved by establishing lower bounds on the density of \(C_{2k+1}\)-critical graphs, as follows. **Theorem 1.7**.: [3] _Every \(C_{5}\)-critical graph \(G\) except \(C_{3}\) satisfies \(|E(G)|\geq\frac{5|V(G)|-2}{4}\)._ **Theorem 1.8**.: [22] _Every \(C_{7}\)-critical graph \(G\) except \(C_{3}\) and \(C_{5}\) satisfies \(|E(G)|\geq\frac{17|V(G)|-2}{15}\)._ The general problem of finding the best possible lower bound on the edge density of \(C_{2k+1}\)-critical graphs has been studied extensively in the literature, and we refer to the two papers above and the references therein. In the setting of signed graphs, the _girth_ of a signed graph \((G,\sigma)\) is defined as the length of a shortest cycle in \(G\), and its _negative-girth_ as the length of its shortest negative cycle. Parallel to the graph case, the following natural questions have been addressed in the literature. 1. What is the edge density of \(C_{\ell}^{*}\)-critical signed graphs? 2. What is the smallest integer \(f(\ell)\) such that every signed planar graph of girth at least \(f(\ell)\) admits a homomorphism to \(C_{\ell}^{*}\)? Naserasr, Wang, and Zhu [20] have proved that every signed planar graph of girth at least 4 admits a homomorphism to \(C_{2}^{*}\), and the girth bound is best possible due to a result of Kardos and Narboni [8]. Moreover, by Proposition 1.4, the circular chromatic number bound 4 of such signed graphs is asymptotically tight, as there is a sequence of signed bipartite planar simple graphs whose circular chromatic number is approaching 4 [9]. Regarding negative cycles as homomorphism targets, the signed cycle of length \(\ell\) with an odd number of negative edges, written \(C_{-\ell}\), has also been studied. For \(\ell=4\), when restricted to bipartite graphs, the following results have been established. **Theorem 1.9**.: [15]__ * _Every_ \(C_{-4}\)_-critical signed graph_ \((G,\sigma)\) _except one signed graph on_ \(7\) _vertices and with_ \(9\) _edges satisfies that_ \(|E(G)|\geq\frac{4|V(G)|}{3}\)_._ * _Every signed bipartite planar graph of negative-girth at least_ \(8\) _admits a homomorphism to_ \(C_{-4}\)_. Moreover, the negative-girth bound is the best possible._ Thus our Theorem 1.5 and Corollary 1.3 further contribute to this line of investigation. In particular, Corollary 1.3 can be viewed as addressing the most basic case of a signed graph analogue of Conjecture 1.6. ## 2 Preliminaries In this paper, all graphs are finite and may have multiple edges or loops. If the signature of a signed graph \((G,\sigma)\) is understood from the context, or its particular knowledge is irrelevant, we use the simplified notation \(\hat{G}\) to denote it. We denote the _underlying graph_ of a signed graph \(\hat{G}=(G,\sigma)\) by \(G\). For the figures, we use a blue solid line to represent a positive edge, a red dashed line to represent a negative edge, and a gray line to represent an unsigned edge. A _digon_ is two parallel edges with different signs. We say \((H,\pi)\) is a _subgraph_ of \((G,\sigma)\) if \(H\) is a subgraph of \(G\) and \(\pi=\sigma|_{H}\). We use \(v(G)\) to denote the number of vertices of \(G\) and \(e(G)\) to denote the number of edges of \(G\). We say a vertex is a _distance-two neighbor_ of another vertex if they are connected by a path of length \(2\) whose internal vertex is of degree \(2\). A \(k\)_-vertex_ is a vertex having degree \(k\) and a \(k^{+}\)_-vertex_ is a vertex of degree at least \(k\). A \(k_{\geq\ell}\)_-vertex_ (or, \(k_{\leq\ell}\)_-vertex_) is a \(k\)-vertex with at least (respectively, at most) \(\ell\) neighbors of degree \(2\) and a \(k_{\ell}\)_-vertex_ is a \(k\)-vertex with exactly \(\ell\) neighbors of degree \(2\). Other standard notions follow [23]. ### Homomorphisms of signed graphs Switching a subset \(S\) of vertices of \((G,\sigma)\) amounts to toggling the sign of all the edges of the edge-cut \([S,V(G)\setminus S]\). Two signed graphs \((G,\sigma)\) and \((G,\sigma^{\prime})\), or alternatively, the two signatures \(\sigma\) and \(\sigma^{\prime}\) on \(G\), are switching equivalent if we can obtain \((G,\sigma^{\prime})\) from \((G,\sigma)\) by switching at an edge-cut. Note that switching at an edge-cut does not change the signs of any closed walk (or cycle). One of the earliest results [24] proved in the theory of signed graphs characterizes equivalent signed graphs using the sign of their cycles (or closed walks). **Lemma 2.1**.: [24] _Two signed graphs \((G,\sigma)\) and \((G,\sigma^{\prime})\) are switching equivalent if and only if each cycle has the same sign in both signed graphs._ Recall that a homomorphism of \((G,\sigma)\) to \((H,\pi)\) is a mapping \(f:V(G)\to V(H)\) such that the adjacency and the signs of closed walks are preserved. A homomorphism of \((G,\sigma)\) to \((H,\pi)\) is said to be _edge-sign preserving_ if, furthermore, it preserves the signs of edges. **Proposition 2.2**.: [16] _A signed graph \((G,\sigma)\) admits a homomorphism to \((H,\pi)\) if and only if there exists a switching-equivalent signature \(\sigma^{\prime}\) such that \((G,\sigma^{\prime})\) admits an edge-sign preserving homomorphism to \((H,\pi)\)._ We have noted before that based on the sign of the cycles and the parity of their lengths, there are four types of closed walks in signed graphs: positive odd closed walk (type 01), negative odd closed walk (type 11), positive even closed walk (type 00) and negative even closed walk (type 10). We denote by \(g_{{}_{ij}}(G,\sigma)\) for \(ij\in\mathbb{Z}_{2}^{2}\) the length of a shortest closed walk of type \(ij\) in a signed graph \((G,\sigma)\). The next lemma provides a necessary condition for a signed graph to admit a homomorphism to another. **Lemma 2.3**.: [17] _If \((G,\sigma)\rightarrow(H,\pi)\), then \(g_{{}_{ij}}(G,\sigma)\geq g_{{}_{ij}}(H,\pi)\) for \(ij\in\mathbb{Z}_{2}^{2}\)._ It is easy to observe that if a signed graph \((G,\sigma)\) is \((H,\pi)\)-critical and there exists \(ij\in\mathbb{Z}_{2}^{2}\) such that \(g_{{}_{ij}}(G,\sigma)\leq g_{{}_{ij}}(H,\pi)\), then \((G,\sigma)\) is just a signed cycle of type \(ij\). To eliminate the trivial case, we use the notion of \((H,\pi)\)-critical signed graph defined as follows: **Definition 2.4**.: [15] _A signed graph \(\hat{G}\) is \(\hat{H}\)-critical if for \(ij\in\mathbb{Z}_{2}^{2}\), \(g_{{}_{ij}}(\hat{G})\geq g_{{}_{ij}}(\hat{H})\), \(\hat{G}\) admits no homomorphism to \(\hat{H}\) but any proper subgraph of \(\hat{G}\) does._ In particular, this means any \(C_{3}^{*}\)-critical signed graph has no digon and no positive loop. ### Circular colorings of signed graphs The notion of the circular coloring of signed graphs is a refinement of both the notions of 0-free colorings of signed graphs and circular colorings of simple graphs. Now we are going to show how homomorphism captures the notion of circular \(\frac{p}{q}\)-coloring of signed graphs for any rational number \(\frac{p}{q}\). To do so, we need a special family of signed graphs. **Definition 2.5**.: [20] _Given two positive integers \(p\) and \(q\) with \(p\) being even, the circular \(\frac{p}{q}\)-clique, denoted \(K_{p;q}^{s}\), is a signed graph having the set of vertices \(\{0,1,\cdots,p-1\}\), and edges and signature as follows: (1) \(ij\) is a positive edge if \(q\leq|i-j|\leq p-q\), (2) \(ij\) is a negative edge if either \(|i-j|\leq\frac{p}{2}-q\) or \(|i-j|\geq\frac{p}{2}+q\)._ The signed graph \(K_{p;q}^{s}\) contains a negative loop at each vertex, and, moreover, it contains a digon if and only if \(\frac{p}{q}\geq 4\). Note that in \(K_{p;q}^{s}\), each vertex \(i\) and its antipodal vertex \(\bar{i}=i+\frac{p}{2}\) (taken modulo \(p\)) has exactly an opposite neighborhood, that is to say, a vertex is adjacent to \(i\) by a positive edge while it is adjacent to \(\bar{i}\) by a negative edge. We may switch at \(\{\frac{p}{2},\ldots,p-1\}\) and identify each of them with their antipodes, and the resulting signed graph is denoted by \(\hat{K}_{p;q}^{s}\). Such \(\hat{K}_{p;q}^{s}\) has exactly \(\frac{p}{2}\) vertices. Given any positive rational number \(\frac{p}{q}\), the circular \(\frac{p}{q}\)-clique \(K_{p;q}^{s}\) and its switching core \(\hat{K}_{p;q}^{s}\) are put into our context in the following proposition. **Proposition 2.6**.: [20] _Given a signed graph \((G,\sigma)\), the following statements are equivalent:_ * \((G,\sigma)\) _admits a circular_ \(\frac{p}{q}\)_-coloring;_ * \((G,\sigma)\) _admits an edge-sign preserving homomorphism to_ \(K^{s}_{p;q}\)_;_ * \((G,\sigma)\) _admits a homomorphism to_ \(\hat{K}^{s}_{p;q}\)_._ We note that \(\hat{K}^{s}_{2\ell;\ell-1}\) is switching isomorphic to \(C^{*}_{\ell}\). ### Properties of \(C^{*}_{3}\)-critical signed graphs We say a triangle is a graph that is a cycle \(C_{3}\) of length three. We denote by \(C^{*}_{3}\) the signed graph in Figure 2 which is a positive triangle with a negative loop at each vertex. Recall that a signed graph \(\hat{G}\) is _\(C^{*}_{3}\)-critical_ if the following three conditions are satisfied: there is no digon or positive loop in \(\hat{G}\), \(\hat{G}\not\to C^{*}_{3}\), and \(\hat{G}^{\prime}\to C^{*}_{3}\) for any proper subgraph \(\hat{G}^{\prime}\subsetneq\hat{G}\). First we give an example, depicted in Figure 3, which is \(C^{*}_{3}\)-critical and satisfies the condition \(e(G)=\frac{3v(G)-1}{2}\). **Lemma 2.7**.: _The signed graph \(\hat{W}\) is \(C^{*}_{3}\)-critical._ Proof.: Suppose for contradiction that \(\hat{W}\to C^{*}_{3}\). By Proposition 2.2, there is a switching-equivalent signature \(\sigma^{\prime}\) of \(\hat{W}\) such that \((W,\sigma^{\prime})\) admits an edge-sign preserving homomorphism to \(C^{*}_{3}\). We first observe that under \(\sigma^{\prime}\) any negative 4-cycle contains only one negative edge and any positive cycle contains no negative edges. Subject to these two conditions, \(\sigma^{\prime}\) is unique as drawn in Figure 3. Hence, by examining the subgraph \(\hat{W}-v_{4}\), in any edge-sign preserving homomorphism \(\varphi\) of \((W,\sigma^{\prime})\) to \(C^{*}_{3}\), \(\varphi(v_{1})=\varphi(v_{2})\). But then together with \(\varphi(v_{4})\), it would form a digon, which does not exist in \(C^{*}_{3}\), a contradiction. Therefore, \(\hat{W}\not\to C^{*}_{3}\). Finally, it is easy to see that any proper subgraph of \(\hat{W}\) admits a homomorphism to \(C^{*}_{3}\). Thus \(\hat{W}\) is \(C^{*}_{3}\)-critical. In the arguments that follow we will employ a general technique to "color" a signed graph \(\hat{G}\) by extending a "pre-coloring" of its subgraph \(\hat{H}\). First, we assume that there exists an edge-sign preserving homomorphism of \(\hat{H}\) to \(C^{*}_{3}\) under the signature of \(\hat{G}\). Once we fix the homomorphism of \(\hat{H}\) to \(C^{*}_{3}\), we never again switch at the vertices of \(\hat{H}\). To extend this homomorphism, we may switch at the vertices in \(V(G)\setminus V(H)\). In this setting, it makes sense to speak of the sign of a path if both ends of a path are fixed in \(\hat{H}\). The _sign_ of a path is then the product of the signs of all of its edges. Motivated by this, in the sequel, we use figures with round or square vertices to denote properties of the coloring and structure: we use a round vertex to denote a vertex that is not pre-colored, at which we allow switching, and whose degree is shown in the figure; We use a square vertex to denote a vertex which is pre-colored, at which we do not allow switching, which may have neighbors not drawn in the figure, and, moreover, which may not be distinct from other square vertices. **Observation 2.8**.: _Let \(\hat{P}\) be a signed path with the endpoints \(x\) and \(y\), which contains at most one negative edge. Let \(S_{x},S_{y}\subseteq V(C_{3}^{*})\). Let \(\varphi:\{x,y\}\to V(C_{3}^{*})\) be such that \(\varphi(x)\in S_{x}\) and \(\varphi(y)\in S_{y}\). The mapping \(\varphi\) can be extended to an edge-sign preserving homomorphism of \(\hat{P}\) to \(C_{3}^{*}\) unless one of the following conditions is satisfied:_ 1. \(\hat{P}\) _is either a positive edge or a negative path of length_ \(2\)_,_ \(S_{x}=S_{y}\) _and_ \(|S_{x}|=1\)_;_ 2. \(\hat{P}\) _is a negative edge and_ \(S_{x}\cap S_{y}=\emptyset\)_._ To justify this observation, note that when the above conditions are not satisfied \(\hat{P}\) has at least two positive edges, which affords enough flexibility in the mapping. A _theta graph_ is a simple graph that is the union of three internally disjoint paths that have the same two end vertices. **Lemma 2.9**.: _Every signed theta graph \(\hat{\Theta}\) admits a homomorphism to \(C_{3}^{*}\) and is therefore not \(C_{3}^{*}\)-critical._ Proof.: Among the three paths of a signed theta graph \(\hat{\Theta}\), two of them, say \(\hat{P}_{1},\hat{P}_{2}\) with \(v(\hat{P}_{1})\geq v(\hat{P}_{2})\), have the same parity of the number of positive edges. For each negative edge \(e=uv\) of those paths, identify \(u\) and \(v\). Now it is easy to see that \(\hat{P}_{1}\to\hat{P}_{2}\). Therefore, \(\hat{\Theta}\to C_{3}^{*}\) if and only if \(\hat{\Theta}-E(\hat{P}_{1})\to C_{3}^{*}\). But \(\hat{\Theta}-E(\hat{P}_{1})\) is a signed cycle (which is not a digon), and therefore admits a homomorphism to \(C_{3}^{*}\). In fact, Lemma 2.9 holds more generally: no signed theta graph \(\hat{\Theta}\) is \(C_{\ell}^{*}\)-critical, and every signed theta graph which does not violate the girth conditions admits a homomorphism to \(C_{\ell}^{*}\). **Lemma 2.10**.: _No \(C_{3}^{*}\)-critical signed graph contains an edge of the following type: loop-edge, parallel-edge, or cut-edge._ Proof.: Let \(\hat{G}\) be a \(C_{3}^{*}\)-critical signed graph, and let \(e\in E(\hat{G})\). By definition of criticality, \(\hat{G}\not\to C_{3}^{*}\) but \(\hat{G}-e\to C_{3}^{*}\). We will show, in order, that \(e\) cannot be a type of edge listed in the lemma. First, suppose \(e\) is a loop. The signed graph \(C_{3}^{*}\) has \(g_{{}_{01}}(C_{3}^{*})=3\), and so \(e\) must be a negative loop. Since \(C_{3}^{*}\) has negative loops at each vertex, \(\hat{G}-e\to C_{3}^{*}\) if and only if \(\hat{G}\to C_{3}^{*}\), a contradiction. Next, suppose edge \(f\) is parallel to \(e\). The signed graph \(C_{3}^{*}\) has \(g_{{}_{10}}(C_{3}^{*})=4\), which means that \(e\) and \(f\) must have the same sign. But since edges \(e\) and \(f\) have the same endpoints and sign, \(\hat{G}-e\to C_{3}^{*}\) if and only if \(\hat{G}\to C_{3}^{*}\), a contradiction. Finally, suppose that \(e\) is a cut-edge with ends \(u\) and \(v\). Let \(\hat{G}_{u}\) and \(\hat{G}_{v}\) be the components of \(\hat{G}-e\) containing \(u\) and \(v\) respectively. Since \(\hat{G}\) is \(C_{3}^{*}\)-critical, there exist homomorphisms \(\psi_{1}:\hat{G}_{u}\to C_{3}^{*}\) and \(\psi_{2}:\hat{G}_{v}\to C_{3}^{*}\). By the vertex-transitivity of \(C_{3}^{*}\), we may assume \(\psi_{1}(u)=\psi_{2}(v)\) if \(e\) is negative and \(\psi_{1}(u)\neq\psi_{2}(v)\) if \(e\) is positive. But, by Observation 2.8, \(\psi_{1}\cup\psi_{2}:\hat{G}\to C_{3}^{*}\), a contradiction. **Lemma 2.11**.: _No \(C_{3}^{*}\)-critical signed graph contains a vertex of the following type: \(1\)-vertex, \(2_{1}\)-vertex, \(4_{4}\)-vertex, or \(5_{5}\)-vertex._ Proof.: Let \(\hat{G}\) be a \(C_{3}^{*}\)-critical signed graph. Suppose to the contrary that there is a vertex \(v\) in \(\hat{G}\) of a type listed in the lemma. By Lemma 2.10, \(v\) cannot be a \(1\)-vertex. If \(v\) is a \(2_{1}\)-vertex, let \(\hat{G}^{\prime}\) be the signed graph obtained from \(\hat{G}\) by deleting \(v\) and its \(2\)-neighbor. By criticality, there is a homomorphism \(\varphi:\hat{G}^{\prime}\to C_{3}^{*}\). But by Observation 2.8, \(\varphi\) can be extended to a homomorphism of \(\hat{G}\) to \(C_{3}^{*}\). This contradicts that \(\hat{G}\) is \(C_{3}^{*}\)-critical. Suppose \(v\) is a \(4_{4}\)-vertex. Let \(v_{1},v_{2},v_{3},v_{4}\) be the distance-two neighbors of \(v\), see Figure 4. Let \(\hat{H}\) be the signed graph obtained from \(\hat{G}\) by deleting \(v\) and its \(2\)-neighbors. Since \(\hat{H}\) is a proper subgraph of a \(C_{3}^{*}\)-critical signed graph, there is a homomorphism \(\varphi\) of \(\hat{H}\) to \(C_{3}^{*}\). Assume that \(\varphi\) is edge-sign preserving under the signature \(\sigma\). By possibly switching at \(v\), we may assume that among four \(vv_{i}\)-paths at most two of them are negative, say \(vv_{1}\)-path and \(vv_{2}\)-path if there exists two. Let \(\varphi(v)\in V(C_{3}^{*})\setminus\{\varphi(v_{1}),\varphi(v_{2})\}\). By Observation 2.8, such a mapping can be extended to those \(2\)-neighbors of \(v\), a contradiction. The case when \(v\) is a \(5_{5}\)-vertex is similar to the \(4_{4}\)-vertex case, where again we may assume that there are at most two negative paths, and we omit the proof. **Lemma 2.12**.: _No \(C_{3}^{*}\)-critical signed graph contains a \(3_{2}\)-vertex._ Proof.: Let \(\hat{G}\) be a \(C_{3}^{*}\)-critical signed graph. Suppose to the contrary that there is a \(3_{2}\)-vertex \(v\) in \(\hat{G}\). Let \(x\) and \(y\) be its distance-two neighbors, and \(w\) be the remaining neighbor of \(v\). See Figure 5. Let \(\hat{H}\) be the signed graph obtained from \(\hat{G}\) by deleting the vertex \(v\) and its two \(2\)-neighbors. By the criticality of \(\hat{G}\), there is a homomorphism \(\psi:\hat{H}\to C_{3}^{*}\). We claim that \(\psi\) can be extended to a homomorphism of \(\hat{G}\) to \(C_{3}^{*}\). Assume that \(\sigma\) of \(\hat{G}\) is the signature under which \(\hat{H}\) admits an edge-sign preserving homomorphism to \(C_{3}^{*}\). By possibly switching at Figure 4: A \(4_{4}\)-vertex \(v\) and its distance-two neighbors. Figure 5: A \(3_{2}\)-vertex \(v\) and surrounding graph. \(v\), we may assume that at most one of the three paths \(vw\)-path, \(vx\)-path, and \(vy\)-path is negative under \(\sigma\). Moreover, we may assume each of the three paths has at most one negative edge. If none of the three paths are negative, let \(\psi(v)\in V(C_{3}^{*})\setminus\psi(w)\). If \(vw\)-path is the sole negative path, let \(\psi(v)=\psi(w)\). If, without loss of generality, \(vx\)-path is negative, then let \(\psi(v)\in V(C_{3}^{*})\setminus\{\psi(x),\psi(w)\}\). In each case, \(\psi\) can be extended to a homomorphism from \(G\) to \(C_{3}^{*}\) by Observation 2.8. **Lemma 2.13**.: _Let \(\hat{T}\) be a signed triangle with vertex set \(\{v_{1},v_{2},v_{3}\}\) and contains at most one negative edge. Let \(S_{i}\subseteq V(C_{3}^{*})\) for \(i\in\{1,2,3\}\). There exists an edge-sign preserving homomorphism \(\varphi:V(\hat{T})\to V(C_{3}^{*})\) such that \(\varphi(v_{i})\in S_{i}\) for each \(i\in\{1,2,3\}\) if one of the following conditions are satisfied:_ 1. \(|S_{1}|\geq 2,|S_{2}|\geq 2,\) _and_ \(|S_{3}|=3\)_;_ 2. \(|S_{1}|\geq 1,|S_{2}|=3,\) _and_ \(|S_{3}|=3\)_;_ 3. \(|S_{1}|\geq 1,|S_{2}|\geq 2,|S_{3}|=3,\) _and_ \(v_{1}v_{2}\) _is not the negative edge if_ \(\hat{T}\) _is negative;_ 4. \(|S_{1}|=|S_{2}|=|S_{3}|=2\)_, and_ \(S_{2}\cup S_{3}=V(C_{3}^{*})\)_._ Proof.: For Cases (1) and (2), the argument is the same. In both cases, if \(\hat{T}\) contains three positive edges, then we can choose \(\varphi(v_{1})\in S_{1}\), \(\varphi(v_{2})\in S_{2}\setminus\{\varphi(v_{1})\}\) and \(\varphi(v_{3})\in S_{3}\setminus\{\varphi(v_{1}),\varphi(v_{2})\}\) in this order. If \(\hat{T}\) contains one negative edge \(v_{i}v_{j}\), then we choose \(\varphi(v_{i})=\varphi(v_{j})\in S_{i}\cap S_{j}\) and then for \(k\in\{1,2,3\}\setminus\{i,j\}\), \(\varphi(v_{k})\in S_{k}\setminus\{\varphi(v_{i})\}\). Just be careful in Case (2), if the only negative edge is \(v_{2}v_{3}\), then we can always choose \(\varphi(v_{2})=\varphi(v_{3})\in S_{2}\cap S_{3}\setminus S_{1}\) to guarantee that \(S_{1}\setminus\{\varphi(v_{2})\}\) is not empty. For Case (3), if \(\hat{T}\) is positive, then the same argument above works. If \(v_{2}v_{3}\) is the only negative edge, then we choose \(\varphi(v_{2})=\varphi(v_{3})\in S_{2}\cap S_{3}\setminus S_{1}\) and choose \(\varphi(v_{1})\in S_{1}\); if \(v_{1}v_{3}\) is the only negative edge, then we choose \(\varphi(v_{1})=\varphi(v_{3})\in S_{1}\) and set \(\varphi(v_{2})\in S_{2}\setminus S_{1}\). For Case (4), if \(\hat{T}\) is negative then the argument for Case (1) works. Otherwise \(\hat{T}\) is positive and we may choose \(\varphi(v_{1})\in S_{1}\). Because \(S_{2}\cup S_{3}=V(C_{3}^{*})\), it follows that \(S_{2}\setminus\{\varphi(v_{1})\}\neq S_{3}\setminus\{\varphi(v_{1})\}\). Hence we may choose \(\varphi(v_{2})\in S_{2}\setminus\{\varphi(v_{1})\}\) and \(\varphi(v_{3})\in S_{3}\setminus\{\varphi(v_{1})\}\) so that \(\varphi(v_{2})\neq\varphi(v_{3})\). **Lemma 2.14**.: _No signed triangle \(\hat{T}\) of the following type is contained in a \(C_{3}^{*}\)-critical signed graph:_ 1. _two_ \(3_{1}\)_-vertices and a_ \(5_{3}\)_-vertex;_ 2. \(a\) \(3_{1}\)_-vertex and two_ \(4_{2}\)_-vertices;_ 3. \(a\) \(3_{1}\)_-vertex, a_ \(3\)_-vertex, and a_ \(4_{2}\)_-vertex._ Proof.: Let \(\hat{G}\) be a \(C_{3}^{*}\)-critical signed graph. Let the vertices at distance at most two from \(\hat{T}\) be labeled as in Figure 6. We proceed by cases. In each case, suppose for contradiction that the described signed triangle \(\hat{T}\) does exist in \(\hat{G}\). Let \(\mathcal{P}\) denote the set of paths in \(\hat{G}\setminus E(\hat{T})\) drawn in Figure 6 that join \(v_{i}\) and \(x_{j}\) for some \(i,j\). Let \(N\) be the internal vertices of the paths of \(\mathcal{P}\). Let \(\hat{H}\) denote the signed graph obtained from \(\hat{G}\) by deleting \(V(T)\cup N\) By the criticality of \(\hat{G}\), there is a homomorphism \(\psi:\hat{H}\to C_{3}^{*}\). Let \(\sigma\) be a signature of \(\hat{G}\) such that \((H,\sigma|_{H})\) admits an edge-sign preserving homomorphism to \(C_{3}^{*}\). By possibly switching at some subset of \(V(\hat{T})\), we may assume that there is at most one negative edge in \(\hat{T}\) with respect to \(\sigma\). **Cases (i) and (ii).** By possibly switching on the set \(V(T)\), we may assume at most two of the five paths in \(\mathcal{P}\) are negative under \(\sigma\). Further, by possibly switching on the vertices in \(N\), we may assume each path of \(\mathcal{P}\) has at most one negative edge. We proceed in two sub-cases. First, suppose that there is at most one negative path in \(\mathcal{P}\), and denote the end point of that path in \(\hat{T}\) by \(v_{k}\) and the other endpoint by \(x_{k}\). We define \(S_{k}=V(C_{3}^{*})\setminus\psi(x_{k})\) and \(S_{i}=S_{j}=V(C_{3}^{*})\) for \(v_{i},v_{j}\in V(\hat{T})\setminus\{v_{k}\}\). Noting \(|S_{k}|=2,|S_{i}|=|S_{j}|=3\), by Lemma 2.13 (1), we can choose \(\psi(v_{\ell})\in S_{\ell}\) for \(\ell\in\{1,2,3\}\) such that \(\psi:V(\hat{G})\setminus N\to V(C_{3}^{*})\) is an edge-sign preserving homomorphism of \(\hat{G}-N\) to \(C_{3}^{*}\). Then by Observation 2.8, we can extend this homomorphism to the remaining vertices of \(\hat{G}\). Second, if exactly two paths are negative, then let \(v_{a},v_{b}\) be the ends of those paths on \(T\) and let \(x_{i},x_{j}\) be the other ends of these two paths, respectively. If \(v_{a}\neq v_{b}\), then set \(S_{a}=V(C_{3}^{*})\setminus\{\psi(x_{i})\}\), \(S_{b}=V(C_{3}^{*})\setminus\{\psi(x_{j})\}\) and \(S_{k}=V(C_{3}^{*})\) where \(k\in[3]\setminus\{a,b\}\). Since \(|S_{a}|=|S_{b}|=2\) and \(|S_{k}|=3\), by Lemma 2.13 (1), we can choose \(\psi(v_{\ell})\in S_{\ell}\) for \(\ell\in\{1,2,3\}\) such that \(\psi:V(\hat{G})\setminus N\to V(C_{3}^{*})\) is an edge-sign preserving homomorphism of \(\hat{G}-N\) to \(C_{3}^{*}\). Then by Observation 2.8, we can extend this homomorphism to the remaining vertices of \(\hat{G}\). Similarly, if \(v_{a}=v_{b}\), then we set \(S_{a}=V(C_{3}^{*})\setminus\{\psi(x_{i}),\psi(x_{j})\}\) and set \(S_{\alpha}=S_{\beta}=V(C_{3}^{*})\) where \(\alpha,\beta\in[3]\setminus\{a\}\). Again, by Lemma 2.13 (2) and Observation 2.8 we can extend this homomorphism to the remaining vertices of \(\hat{G}\). In both cases, it is a contradiction as \(\hat{G}\to C_{3}^{*}\). **Case (iii).** We may assume \(\sigma\) satisfies the following: If \(\hat{T}\) has a negative edge, then it is \(v_{1}v_{3}\); At most two of the four paths in \(\mathcal{P}\) are negative; If exactly two such paths are negative, then \(v_{2}x_{2}\) is negative. The first is accomplished by switching on some subset of \(V(\hat{T})\), and the last two by possibly switching on the set \(V(\hat{T})\). As before, we assume each path of \(\mathcal{P}\) has at most one negative edge. No matter the sign of \(v_{2}x_{2}\), there is at most one negative path in \(\mathcal{P}\setminus\{v_{2}x_{2}\}\). Let \(v_{k}\in\{v_{1},v_{3}\}\) so that \(v_{k}\) is an endpoint of the negative path (and \(x_{k}\) is the other end) if it Figure 6: The three cases in Lemma 2.14 exists. We proceed in two sub-cases based on the sign of \(v_{2}x_{2}\). If \(v_{2}x_{2}\) is positive, then we set \(S_{2}=V(C_{3}^{*})\setminus\{\psi(x_{2})\},S_{k}=V(C_{3}^{*})\setminus\{\psi(x_{k })\}\) and \(S_{\ell}=V(C_{3}^{*})\) where \(\ell\in[3]\setminus\{2,k\}\). By a similar argument as above and by Lemma 2.13 (1), we are done. If \(v_{2}x_{2}\) is negative, then we set \(S_{2}=\{\psi(x_{2})\},S_{k}=V(C_{3}^{*})\setminus\{\psi(x_{k})\}\) and \(S_{\ell}=V(C_{3}^{*})\) where \(\ell\in[3]\setminus\{2,k\}\). By a similar argument as above and by Lemma 2.13 (3), we are done. ## 3 Density of \(C_{3}^{*}\)-critical signed graphs The _potential_ of a graph \(G\) is defined as \[\rho(G)=3v(G)-2e(G).\] The potential of a signed graph is the potential of its underlying graph. We first give the potential of some simple graphs. **Observation 3.1**.: _We have \(\rho(K_{1})=3\), \(\rho(K_{2})=4\), \(\rho(K_{3})=3\), and \(\rho(K_{4})=0\)._ In this section, we shall prove the following alternative formulation of Theorem 1.5. **Theorem 3.2**.: _If \(\hat{G}\) is \(C_{3}^{*}\)-critical, then \(\rho(G)\leq 1\)._ To prove it, we assume to the contrary that there exists a \(C_{3}^{*}\)-critical signed graph \(\hat{G}\) with \(\rho(G)\geq 2\) such that among all such counterexamples to Theorem 3.2 it has the minimum number of vertices. This means any \(C_{3}^{*}\)-critical signed graph \(\hat{G}^{\prime}\) with \(|V(G^{\prime})|<|V(G)|\) satisfies that \(\rho(G^{\prime})\leq 1\). We fix the minimum counterexample \(\hat{G}\) for the rest of this section and finally arrive at a contradiction to prove Theorem 3.2. Initially, we will develop several structural properties of \(\hat{G}\), after which we apply a discharging argument to force a contradiction. Given a (signed) graph \(\hat{H}\), let \(P_{2}(\hat{H})\) denote a graph obtained from (the underlying graph of) \(\hat{H}\) by adding a new 2-vertex incident to two new edges whose other ends are distinct vertices in \(\hat{H}\). By a slight abuse of notation, we sometimes treat \(P_{2}(\hat{H})\) as a signed graph. **Lemma 3.3**.: _Let \(\hat{H}\) be a subgraph of \(\hat{G}\). Then_ 1. \(\rho(H)\geq 2\)_, if_ \(\hat{G}=\hat{H}\)_;_ 2. \(\rho(H)\geq 3\)_, if_ \(\hat{G}=P_{2}(\hat{H})\)_;_ 3. \(\rho(H)=3\)_, if_ \(H=K_{1}\) _or_ \(K_{3}\)_;_ 4. \(\rho(H)\geq 4\)_, otherwise._ Proof.: It is straightforward to verify (i), (ii), and (iii). Indeed, if \(\hat{H}=\hat{G}\) then the lemma is satisfied by our assumption that \(\rho(G)\geq 2\); if \(P_{2}(\hat{H})=\hat{G}\), then \(\rho(H)=\rho(G)-3+4\geq 3\); if \(H=K_{1}\) or \(K_{3}\), then the lemma is satisfied by Observation 3.1. Suppose for contradiction that (iv) is false. Let \(\hat{H}\subseteq\hat{G}\) be a subgraph which does not satisfy (iv), chosen so that among all such subgraphs \(v(\hat{H})+e(\hat{H})\) is maximum. Note that \(\hat{H}\neq\hat{G}\), \(P_{2}(\hat{H})\neq\hat{G}\), \(\hat{H}\not\in\{K_{1},K_{3}\}\), and \(\rho(\hat{H})\leq 3\). We first claim that \(\hat{H}\) is an induced subgraph of \(\hat{G}\). Otherwise, assume that \(e\not\in E(\hat{H})\) is an edge connecting two vertices of \(\hat{H}\). Note that \(\rho(\hat{H}+e)=\rho(\hat{H})-2\leq 1<\rho(\hat{H})\), and \(v(\hat{H}+e)+e(\hat{H}+e)>v(\hat{H})+e(\hat{H})\). By (i), (ii), and (iii), it follows that \(\hat{H}+e\neq\hat{G}\), \(P_{2}(\hat{H}+e)\neq\hat{G}\), \(\hat{H}+e\not\in\{K_{1},K_{3}\}\). Thus the existence of \(\hat{H}+e\) contradicts the maximality of \(\hat{H}\). Therefore, \(\hat{H}\) is an induced subgraph. By Observation 3.1, we know that \(v(\hat{H})\geq 4\). Since \(\hat{H}\) is a proper subgraph of \(\hat{G}\) and \(\hat{G}\) is \(C_{3}^{*}\)-critical, there exists a homomorphism \(\psi:\hat{H}\to C_{3}^{*}\). We may assume that \(\sigma\) is a signature of \(\hat{G}\) such that \(\psi\) is an edge-sign preserving homomorphism of \((H,\sigma|_{H})\) to \(C_{3}^{*}\). We build a signed graph \(\hat{G}_{1}\) from \(\hat{G}\) by identifying any \(u,v\in V(\hat{H})\) whenever \(\psi(u)=\psi(v)\) and deleting resulting parallel edges of the same sign so that only one representative remains. We proceed with four observations about \(\hat{G}_{1}\). First, \(\hat{G}_{1}\) has no positive loops because \(\psi(u)=\psi(v)\) only if \(uv\) is not a positive edge in \(\hat{G}\). Second, \(v(\hat{G}_{1})+e(\hat{G}_{1})<v(\hat{G})+e(\hat{G})\) as \(v(\hat{H})\geq 4>v(C_{3}^{*})\) which means at least two vertices were identified while forming \(\hat{G}_{1}\). Third, \(\hat{G}_{1}\not\to C_{3}^{*}\) because otherwise \(\hat{G}\to C_{3}^{*}\) by the transitivity of homomorphisms, which would contradict that \(\hat{G}\) is \(C_{3}^{*}\)-critical. Fourth and finally, \(\hat{G}_{1}\) has no digon. Suppose to the contrary that there is a digon in \(\hat{G}_{1}\). This means that in \(\hat{G}\) there is a negative path \(\hat{P}\) of length \(2\) (with respect to the signature \(\sigma\)) so that the ends of \(\hat{P}\) are in \(V(\hat{H})\) while the internal vertex of \(\hat{P}\) is in \(V(\hat{G}-\hat{H})\). Then \(\rho(\hat{P}+\hat{H})=\rho(\hat{H})+3-4\leq 2\). Note that \(\hat{P}+\hat{H}\) is also a subgraph of \(\hat{G}\) but has more vertices plus edges than \(\hat{H}\) and moreover, \(\rho(\hat{P}+\hat{H})\leq 2<\rho(\hat{H})\). By the choice of \(\hat{H}\) and claim (i), it must be that \(\hat{G}=\hat{P}+\hat{H}\), contradicting that \(\hat{G}\neq P_{2}(\hat{H})\). Thus \(\hat{G}_{1}\) has no digon. By the first, third, and fourth observations above, \(\hat{G}_{1}\) contains a \(C_{3}^{*}\)-critical subgraph \(\hat{G}_{2}\). Let \(X\subseteq V(\hat{G}_{1})\) be the identified vertices of \(\hat{H}\) (including trivial identification if \(\psi^{-1}(u)\) is a singleton set for some \(u\in V(C_{3}^{*})\)). First, observe that \(X\cap V(\hat{G}_{2})\neq\emptyset\) because otherwise \(\hat{G}_{2}\subsetneq\hat{G}\) but both of them are \(C_{3}^{*}\)-critical. Also, \(V(\hat{G}_{2})\setminus X\neq\emptyset\) because otherwise \(\hat{G}_{2}\) is a subgraph of \(C_{3}^{*}\) which would mean \(\hat{G}_{2}\to C_{3}^{*}\), a contradiction. Since \(v(\hat{G}_{2})+e(\hat{G}_{2})\leq v(\hat{G}_{1})+e(\hat{G}_{1})<v(\hat{G})+e( \hat{G})\), by the choice of the minimum counterexample \(\hat{G}\) (to Theorem 3.2), we know that \(\rho(\hat{G}_{2})\leq 1\). We now construct a signed graph \(\hat{G}_{3}\) from the disjoint union of \(\hat{G}_{2}-X\) and \(\hat{H}\) by adding the following edges. For each vertex \(v\in V(\hat{G}_{2})\setminus X\) and each vertex \(u\in X\), if \(vu\in E(\hat{G}_{2})\), then choose a representative edge \(vw\in E(\hat{G})\) for some \(w\in\psi^{-1}(u)\) to be included in \(E(\hat{G}_{3})\). In this way, \(\hat{G}_{3}\subseteq\hat{G}\), and because \(V(\hat{G}_{2})\setminus X\neq\emptyset\), \(\hat{H}\subsetneq\hat{G}_{3}\). Now we consider the vertices and edges in \(\hat{G}_{3}\). It is straightforward to see that \(V(\hat{G}_{3})=V(\hat{G}_{2})\cup V(\hat{H})\setminus V(X)\), and so \[v(\hat{G}_{3})=v(\hat{G}_{2})+v(\hat{H})-|X|. \tag{1}\] An edge of \(\hat{G}_{3}\) is of one of three types: (a) edges in \(\hat{G}_{2}\setminus X\); (b) edges in \(\hat{H}\); and (c) edges with one end in \(\hat{G}_{2}\setminus X\), and the other end in \(\hat{H}\). Similarly, edges in \(\hat{G}_{2}\) have one of three types: (a) edges in \(\hat{G}_{2}\setminus X\); (d) edges with both ends in \(X\); and (e) edges with one end in \(\hat{G}_{2}\setminus X\), and the other end in \(X\). But, by construction, there is a one-to-one correspondence between edges of type (c) and (e). Therefore, \[e(\hat{G}_{3})=e(\hat{G}_{2})+e(\hat{H})-e(G_{2}[X]). \tag{2}\] Hence, noting \(\rho(\hat{G}_{2})\leq 1\), \(\rho(\hat{H})\leq 3\), and \(\rho(G_{2}[X])\geq 3\) (by Observation 3.1), by Equations (1) and (2), it follows that \[\rho(\hat{G}_{3})=\rho(\hat{G}_{2})+\rho(\hat{H})-\rho(G_{2}[X])\leq 1+3-3\leq 1.\] Since \(\hat{H}\subsetneq\hat{G}_{3}\subseteq\hat{G}\) and \(\rho(\hat{G}_{3})<\rho(\hat{H})\), by claims (i), (ii), and (iii), \(\hat{G}_{3}\) is a larger subgraph of \(\hat{G}\) than \(\hat{H}\) which doesn't satisfy claim (iv). This contradicts our choice of \(\hat{H}\) and completes the proof. The following observation will aid in the proof of Lemma 3.5. **Observation 3.4**.: _Let \(C\) be a cycle of a \(C_{3}^{*}\)-critical signed graph with a vertex of degree \(2\)._ 1. _If_ \(C\) _is a_ \(4\)_-cycle, then_ \(C\) _is negative. If, additionally,_ \(C\) _has a chord then the two triangles formed by this chord have different signs._ 2. _If_ \(C\) _is a_ \(3\)_-cycle, then_ \(C\) _is positive._ To justify the observation, note that if \(C\) does not have the prescribed sign, then the subgraph formed by deleting the \(2\)-vertex maps to \(C_{3}^{*}\) if and only if \(\hat{H}\) does. Let \(\Theta_{1},\Theta_{2},\Theta_{3}\), and \(X\) be the graphs depicted in Figure 7a 7b, 7c, and 7d, respectively. **Lemma 3.5**.: \(\hat{G}\) _has no subgraph whose underlying graph is isomorphic to any of \(\Theta_{1},\Theta_{2},\Theta_{3}\), or \(X\)._ Proof.: We can easily compute that \(\rho(\Theta_{1})=2\) and \(\rho(\Theta_{2})=\rho(\Theta_{3})=\rho(X)=3\). Let \(\hat{H}\) be a subgraph of \(\hat{G}\) and \(H\in\{\Theta_{1},\Theta_{2},\Theta_{3},X\}\). Suppose that \(H=\Theta_{1}\). By Lemma 3.3 (i), we conclude that \(G=\Theta_{1}\). By Lemma 3.3 (i) and (ii), we conclude that either \(G=H\) or \(G=P_{2}(H)\). If \(H\in\{\Theta_{2},\Theta_{3}\}\), then the former case is impossible by Lemma 2.9. If \(H=X\), then the former case would contradict that \(\hat{G}\) is \(C_{3}^{*}\)-critical. Indeed, if \(X\) could be signed in a way that made it \(C_{3}^{*}\)-critical, then by Observation 3.4 (ii), it must have two triangles \(v_{1}v_{2}v_{3}\) and \(v_{3}v_{4}v_{5}\) both being positive, which is switching equivalent to the signature that all the edges of \(\hat{X}\) are positive, and it is easy to see that when signed this way \(\hat{X}\to C_{3}^{*}\), a contradiction. Figure 7: Graphs that are not a subgraph of \(\hat{G}\). We are left to show that \(P_{2}(\Theta_{2})\), \(P_{2}(\Theta_{3})\), and \(P_{2}(X)\) each have no signature which makes them \(C_{3}^{*}\)-critical. We consider them in order. In each case, let \(v\) be the new vertex and let the remaining vertices be labeled as in Figure 7. Suppose for contradiction that \(\sigma\) is a signature of each respective graph so that it is \(C_{3}^{*}\)-critical. * For \(P_{2}(\Theta_{2})\), \(v\) must be adjacent to two of \(\{v_{1},v_{3},v_{5}\}\). Otherwise, the \(4\)-cycles \(v_{1}v_{2}v_{5}v_{4}\), \(v_{1}v_{2}v_{3}v_{4}\) and \(v_{5}v_{2}v_{3}v_{4}\), each containing a \(2\)-vertex, would need to be negative by Observation 3.4 (i), but it is impossible by the handshake lemma (i.e., the number of negative facial cycles of a signed plane graph is even). Thus, without loss of generality, it must be that \(vv_{1},vv_{3}\in E(P_{2}(\Theta_{2}))\). Again by Observation 3.4 (i), we need \(P_{2}(\Theta_{2})\) to be assigned such that every \(4\)-cycle with a \(2\)-vertex is negative and, by Lemma 2.1, it determines a unique (switching-equivalent) signature \(\sigma\). Note that up to switching we may assume that \(v_{1}v_{4}\) and \(v_{2}v_{3}\) are the only negative edges in \(\sigma\). But now it is straightforward to verify that \((P_{2}(\Theta_{2}),\sigma)\to C_{3}^{*}\), a contradiction. * For \(P_{2}(\Theta_{3})\), \(v\) must be adjacent to one of \(v_{3}\) or \(v_{4}\) by Lemma 2.11 which forbids a \(2_{1}\)-vertex in any \(C_{3}^{*}\)-critical signed graph. By symmetry, say \(v\) is adjacent to \(v_{3}\). If \(v\) is also adjacent to \(v_{5}\), then the underlying graph is isomorphic to a \(P_{2}(\Theta_{2})\), and this case is complete. If \(v\) is also adjacent to \(v_{1}\), then the \(4\)-cycles \(v_{1}v_{2}v_{3}v\) and \(v_{5}v_{2}v_{3}v_{4}\) must be negative by Observation 3.4 (i). But then by assuming that three edges of \(v_{1}v_{2}v_{3}\) are all negative or all positive depending on the sign of the triangle, it is straightforward to see the signed graph maps to \(C_{3}^{*}\). The cases for \(v\) being adjacent to \(v_{2}\) or \(v_{4}\) are omitted because they proceed as in the previous case, where Observation 3.4 reduces the number of signatures to be considered to one or two, and then, in a straightforward way, verifying that a homomorphism to \(C_{3}^{*}\) does exist. * For \(P_{2}(X)\), it must be that \(v\) is adjacent to a \(2\)-vertex in each of the triangles \(v_{1}v_{2}v_{3}\) and \(v_{3}v_{4}v_{5}\) by Lemma 2.11 as there are no two \(2\)-vertices adjacent to each other in any \(C_{3}^{*}\)-critical signed graph.By symmetry, we may assume that \(v\) is adjacent to \(v_{2}\) and \(v_{5}\). But this graph is isomorphic to a \(P_{2}(\Theta_{3})\) (i.e., \(v\) is adjacent to \(v_{2}\) and \(v_{3}\) in \(P_{2}(\Theta_{3})\)) and, by the previous case, it cannot be the underlying graph of a \(C_{3}^{*}\)-critical signed graph. This completes the proof. **Corollary 3.6**.: _Every vertex of \(\hat{G}\) is in at most one triangle._ Proof.: If a vertex is contained in two triangles, then, as a \(C_{3}^{*}\)-critical graph has no parallel edges or digon, those two triangles must either (1) share exactly one vertex, or (2) share exactly one edge. But both contradict Lemma 3.5. **Lemma 3.7**.: _Let \(v\) be a \(3_{1}\)-vertex of \(\hat{G}\) and \(u\) be its \(2\)-neighbor. Assume that \(x\) and \(y\) are the \(3^{+}\)-neighbors of \(v\). Then the path \(xvy\) must be in a positive triangle._ Proof.: Let \(w\) be the other neighbor of \(u\) which is not \(v\). See Figure 8. Suppose, for a contradiction, that either the path \(xvy\) is in a negative triangle or \(xy\not\in E(\hat{G})\). We consider these two possibilities: * Suppose that \(xvy\) is a negative triangle. Let \(\hat{G}_{1}=\hat{G}-\{u,v\}\). It follows from the criticality of \(\hat{G}\) that there is a homomorphism \(\psi:\hat{G}_{1}\to C_{3}^{*}\). Let \(\sigma\) be a signature of \(\hat{G}\) such that \(\psi:(\hat{G}_{1},\sigma|_{G_{1}})\to C_{3}^{*}\) is an edge-sign preserving homomorphism with respect to \(\sigma\). We will arrive at a contradiction by showing that \(\psi\) can be extended to \(\hat{G}\). By possibly switching on \(u\) and \(v\), we may assume \(\sigma(uv)=\sigma(uw)=+\). In the edge-sign preserving homomorphism \(\psi:\hat{G}_{1}\to C_{3}^{*}\), based on the sign of the edge \(xy\), to determine \(\psi(v)\) we have two cases to consider: (1) If \(\sigma(xy)=-\), then \(\psi(x)=\psi(y)\). Because \(xvy\) is a negative triangle, it must be that \(\sigma(xv)=\sigma(yv)\). If both are negative, then set \(\psi(v)=\psi(x)\), otherwise choose \(\psi(v)\in V(C_{3}^{*})\setminus\{\psi(x)\}\). (2) If \(\sigma(xy)=+\), then \(\psi(x)\neq\psi(y)\) and \(\sigma(xv)\neq\sigma(yv)\). Suppose without loss of generality that \(\sigma(xv)=-\) and then we set \(\psi(v)=\psi(x)\). Now we determine \(\psi(u)\). Since \(\sigma(uv)=\sigma(uw)=+\) we may extend \(\psi\) by setting \(\psi(u)\in V(C_{3}^{*})\setminus\{\psi(v),\psi(w)\}\). Therefore, \(\hat{G}\to C_{3}^{*}\), a contradiction. * Suppose that \(xy\not\in E(\hat{G})\). Let \(\hat{G}_{1}=\hat{G}-\{u,v\}+xy\) and assign a sign to \(xy\) such that \(xvy\) is a negative triangle in \(\hat{G}+xy\). By the above reasoning, if \(\hat{G}_{1}\to C_{3}^{*}\), then also \(\hat{G}\to C_{3}^{*}\), a contradiction. Thus \(\hat{G}_{1}\not\to C_{3}^{*}\), and so there exists \(\hat{G}_{2}\subseteq\hat{G}_{1}\) that is \(C_{3}^{*}\)-critical. Observe that in constructing \(\hat{G}_{2}\) we do not create any digon. Clearly, \(xy\in E(\hat{G}_{2})\) because otherwise \(\hat{G}_{2}\subsetneq\hat{G}\) but both are \(C_{3}^{*}\)-critical, a contradiction. Since \(v(\hat{G}_{2})<v(\hat{G})-1\), by the choice of \(\hat{G}\), it follows that \(\rho(\hat{G}_{2})\leq 1\). We then define \(\hat{G}_{3}=\hat{G}_{2}-xy\). Note that \(\hat{G}_{3}\subsetneq\hat{G}\) and \(\rho(\hat{G}_{3})=\rho(\hat{G}_{2})+2\leq 3\). Since \(v(\hat{G}_{3})\leq v(\hat{G})-2\), it follows that \(\hat{G}\neq\hat{G}_{3}\) and \(\hat{G}\neq P_{2}(\hat{G}_{3})\). As \(xy\not\in E(\hat{G}_{3})\), \(\hat{G}_{3}\not\in\{K_{1},K_{3}\}\). The existence of \(\hat{G}_{3}\) is a contradiction to Lemma 3.3. This completes the proof. **Lemma 3.8**.: \(\hat{G}\) _contains no triangle \(T\) of the following type._ 1. _two_ \(3_{1}\)_-vertices and a_ \(4\)_-vertex;_ 2. \(a\) \(3_{1}\)_-vertex and two_ \(3\)_-vertices._ Figure 8: A \(3_{1}\)-vertex with its neighbors. Figure 9: The two cases in Lemma 3.8 Proof.: We proceed by cases. In each case, suppose for contradiction that the described triangle \(T\) does exist in \(\hat{G}\). Let the vertices of \(T\) be labeled as is Figure 9. Note that as each \(T\) contains a \(3_{1}\)-vertex, by Lemma 3.7, \(T\) is a positive triangle. Let \(\mathcal{P}\) denote the set of paths in \(\hat{G}\) drawn in Figure 9 that join \(v_{i}\) and one of \(x_{1},x_{2},x,y\) in \(\hat{G}-E(T)\). By Lemma 3.5, there is no edge connecting vertices \(x\) and \(y\) in \(\hat{G}\). **(i).** We add one edge \(xy\) and assign it a signature such that \(xv_{3}y\) is a negative triangle. The resulting signed graph is denoted by \(\hat{G}^{\prime}\), i.e., \(\hat{G}^{\prime}=\hat{G}+xy\). Let \(\hat{H}\) denote the signed graph obtained from \(\hat{G}^{\prime}\) by deleting \(V(T)\), and \(v_{1}\) and \(v_{2}\)'s \(2\)-neighbors. First, we claim that \(\hat{H}\not\to C_{3}^{*}\). Suppose for contradiction there is a homomorphism \(\psi:\hat{H}\to C_{3}^{*}\). Let \(\sigma\) be a signature of \(\hat{G}^{\prime}\) such that \(\psi\) is edge-sign preserving. As in the proof of Lemma 2.14, we may assume the edges of \(T\) are positive, and that at most two of the paths in \(\mathcal{P}\) are negative under \(\sigma\). Furthermore, by possibly switching on the set \(V(T)\), we may assume that the two paths of length \(2\) in \(\mathcal{P}\) are not both negative. We consider two possibilities based on the sign of \(xy\): * If the edge \(xy\) is positive under \(\sigma\), then because \(v_{3}yx\) is a negative triangle we may assume without loss of generality that \(v_{3}x\) is negative and \(v_{3}y\) is positive. Moreover, if there is another negative path in \(\mathcal{P}\) (except \(v_{3}x\)), then by symmetry it is the \(v_{1}x_{1}\)-path. Define \(S_{1}=V(C_{3}^{*})\setminus\psi(x_{1})\), \(S_{2}=V(C_{3}^{*})\), and \(S_{3}=\{\psi(x)\}\). By Lemma 2.13, we can choose \(\psi(v_{i})\in S_{i}\), for \(i\in\{1,2,3\}\), such that \(\psi:V(\hat{H})\cup V(\hat{T})\to V(C_{3}^{*})\) is an edge-sign preserving homomorphism and then by Observation 2.8 we may extend \(\psi\) to a homomorphism of \(\hat{G}\) to \(C_{3}^{*}\). Note that this is possible because \(\psi(x)\neq\psi(y)\) since \(\sigma(xy)=+\), and so the ends of the edge \(v_{3}y\) map to different vertices of \(C_{3}^{*}\). This contradicts that \(\hat{G}\) is \(C_{3}^{*}\)-critical. * If \(xy\) is negative under \(\sigma\), then \(\sigma(v_{3}x)=\sigma(v_{3}y)\) and \(\psi(x)=\psi(y)\). If \(\sigma(v_{3}x)=-\), then \(x_{1}v_{1}\)-path and \(x_{2}v_{2}\)-path are both positive, and in this case we define \(S_{1}=S_{2}=V(C_{3}^{*})\) and \(S_{3}=\{\psi(x)\}\). If \(\sigma(v_{3}x)=+\), then by our assumptions there is at most one negative path in \(\mathcal{P}\), by symmetry say it is \(x_{1}v_{1}\)-path. In this case we define \(S_{1}=V(C_{3}^{*})\setminus\psi(x_{1})\), \(S_{2}=V(C_{3}^{*})\), and \(S_{3}=V(C_{3}^{*})\setminus\psi(x)\). In either case, by Lemma 2.13, we may choose \(\psi(v_{i})\in S_{i}\) such that \(\psi:V(\hat{H})\cup V(\hat{T})\to V(C_{3}^{*})\) is an edge-sign preserving homomorphism and then, again, by Observation 2.8 we may extend \(\psi\) to \(\hat{G}\), a contradiction. Therefore, \(\hat{H}\not\to C_{3}^{*}\). Since \(\hat{H}\not\to C_{3}^{*}\), we know that \(\hat{H}\) contains a \(C_{3}^{*}\)-critical subgraph \(\hat{H}_{1}\). Moreover, \(\hat{H}_{1}\) must contain the edge \(xy\) because otherwise \(\hat{H}_{1}\subsetneq G\), contradicting that they are both \(C_{3}^{*}\)-critical. Noting that \(v(\hat{H}_{1})<v(\hat{G})-1\), by the minimality of \(\hat{G}\), \(\rho(\hat{H}_{1})\leq 1\). Let \(\hat{H}_{2}=\hat{H}_{1}-xy\) and note that \(\rho(\hat{H}_{2})=\rho(\hat{H}_{1})+2\leq 3\). As \(\hat{H}_{2}\) is a subgraph of \(\hat{G}\), by Lemma 3.3, either \(\hat{H}_{2}=\hat{G}\), or \(\hat{G}=P_{2}(\hat{H}_{2})\), or \(\hat{H}_{2}\in\{K_{1},K_{3}\}\), but all of these are impossible. The first two cannot be because five vertices were deleted in \(\hat{G}\) to form \(\hat{H}\), and the last because of the deleted edge \(xy\). **(ii).** As in Case (i), add one edge \(xy\) but assign it a signature so that \(v_{2}v_{3}yx\) is a positive \(4\)-cycle. Denote the resulting signed graph by \(\hat{G}^{\prime}\). Let \(\hat{H}\) be the signed graph obtained from \(\hat{G}^{\prime}\) by deleting the vertices of \(T\) and the \(2\)-vertex adjacent to \(v_{1}\). First, we claim that \(\hat{H}\not\to C_{3}^{*}\). Suppose for contradiction there is a homomorphism \(\psi:\hat{H}\to C_{3}^{*}\). Let \(\sigma\) be a signature of \(\hat{G^{\prime}}\) which admits \(\psi\) as edge-sign preserving. Once again, we may assume that all the edges of \(T\) are positive and that at most one of the paths in \(\mathcal{P}\) is negative under \(\sigma\). We consider two cases based on the sign of \(xy\). * If \(xy\) is negative under \(\sigma\), then since \(v_{2}v_{3}yx\) is a positive \(4\)-cycle we may assume without loss of generality that \(v_{2}x\) is the only negative path in \(\mathcal{P}\). Define \(S_{1}=V(C_{3}^{*})\), \(S_{2}=\{\psi(x_{2})\}\), and \(S_{3}=V(C_{3}^{*})\setminus\{\phi(y)\}\). By Lemma 2.13, we may choose \(\psi(v_{i})\in S_{i}\) such that \(\psi:V(\hat{H})\cup V(\hat{T})\to V(C_{3}^{*})\) is an edge-sign preserving homomorphism and then, again, by Observation 2.8 we may extend \(\psi\) to \(\hat{G}\), a contradiction. * If \(xy\) is positive under \(\sigma\), then \(v_{2}x\) and \(v_{3}y\) are also positive, and \(\psi(x)\neq\psi(y)\). Define \(S_{i}=V(C_{3}^{*})\setminus\phi(v_{i})\) for \(i=1,2,3\). Then \(|S_{i}|=2\) for each \(i\), but \(S_{2}\cup S_{3}=V(C_{3}^{*})\). Hence by Lemma 2.13 we can choose \(\psi(v_{i})\in S_{i}\) such that \(\psi:V(\hat{H})\cup V(\hat{T})\to V(C_{3}^{*})\) is an edge-sign preserving homomorphism and then extend \(\psi\) to \(\hat{G}\) by Observation 2.8, a contradiction. Therefore, \(\hat{H}\not\to C_{3}^{*}\). Since \(\hat{H}\not\to C_{3}^{*}\), this means \(\hat{H}\) has a \(C_{3}^{*}\)-critical subgraph \(\hat{H_{1}}\), which contains the edge \(xy\). This leads to a contradiction in the same manner as Case (i), and we do not repeat the details. This completes the proof of the lemma. **Lemma 3.9**.: _Let \(v\) be a \(4\)-vertex of \(\hat{G}\) with two \(2\)-neighbors \(u\) and \(w\). Suppose that \(x\) and \(y\) are the other neighbors of \(v\). Then either \(xvy\) is a positive triangle, or \(xvy\) is a path in a negative \(4\)-cycle._ Proof.: Let \(u^{\prime}\) and \(w^{\prime}\) be the neighbors of \(u\) and \(w\) respectively that are not \(v\). See Figure 10. Let \(\sigma=\sigma(\hat{G})\). Suppose for contradiction that \(xvy\) is neither in a positive triangle nor a negative \(4\)-cycle. By possibly switching, we may assume that \(\sigma(vx)=\sigma(vy)\) and \(\sigma(xy)=-\) if there exists one such edge. If \(v^{\prime}\neq v\) is a common neighbor of \(x\) and \(y\), then it must be that also \(\sigma(v^{\prime}x)=\sigma(v^{\prime}y)\) because otherwise, \(xvyv^{\prime}\) is a negative \(4\)-cycle. Let \(\hat{G}_{1}\) be the signed graph obtained from \(\hat{G}\) by identifying \(x\) and \(y\) to a new vertex \(z\) and deleting one of the parallel edges connecting \(z\) and \(v\). Since any common neighbor \(v^{\prime}\) of \(x\) and \(y\) has \(\sigma(v^{\prime}x)=\sigma(v^{\prime}y)\), the graph \(\hat{G}_{1}\) has no digon. Since \(\sigma(xy)=-\), if such an edge exists, then \(\hat{G}_{1}\) may have a negative loop but no positive loop. If there is a homomorphism \(\psi:\hat{G}_{1}\to C_{3}^{*}\), then \(\psi\) can be extended to \(\hat{G}\) by setting \(\psi(x)=\psi(y)=\psi(z)\). This contradicts that \(\hat{G}\) is \(C_{3}^{*}\)-critical. Thus \(\hat{G}_{1}\not\to C_{3}^{*}\), and hence \(\hat{G}_{1}\) contains a \(C_{3}^{*}\)-critical subgraph \(\hat{G}_{2}\). Note that \(z\in V(\hat{G}_{2})\) because otherwise \(\hat{G}_{2}\subsetneq\hat{G}\), but both are \(C_{3}^{*}\)-critical, a contradiction. Figure 10: A \(4\)-vertex with two \(2\)-neighbors. Now we claim that the vertex \(v\) is not in \(\hat{G}_{2}\). Otherwise, \(v\) is a vertex of one of the following types: \(1\)-vertex, \(2_{1}\)-vertex, or \(3_{2}\)-vertex, contradicting Lemma 2.11 or 2.12. Note that \(u\) and \(w\) are also not in \(\hat{G}_{2}\) since \(\hat{G}_{2}\) must be connected and contains no cut-edge by Lemma 2.10. Clearly, \(v(\hat{G}_{2})<v(\hat{G})-1\), so \(\rho(\hat{G}_{2})\leq 1\) by the minimality of \(\hat{G}\). We now construct a signed graph \(\hat{G}_{3}\) from \(\hat{G}_{2}\) as follows: firstly, undo the identification at \(z\), without putting back the negative edge \(xy\) if such an edge exists; secondly, add the vertex \(v\) and edges \(vx\) and \(vy\). Note that \(\hat{G}_{3}\) is a proper subgraph of \(\hat{G}\) and \(\rho(\hat{G}_{3})=\rho(\hat{G}_{2})+6-4\leq 3\). Since \(\hat{G}_{3}\subsetneq\hat{G}\), by Lemma 3.3, one of the following conditions is satisfied: (1) \(\hat{G}=\hat{G}_{3}\), (2) \(\hat{G}=P_{2}(\hat{G}_{3})\), (3) \(G_{3}=K_{1}\), or (4) \(G_{3}=K_{3}\). But (1) and (2) are not possible because \(u,w\in V(\hat{G})\setminus V(\hat{G}_{3})\), and (3) and (4) are not satisfied because \(x,y\in V(\hat{G}_{3})\) but are not adjacent. This contradiction completes the proof. The next lemma shows us that every \(4_{3}\)-vertex of \(\hat{G}\) is in exactly three \(4\)-cycles, all of which share exactly one common edge. **Lemma 3.10**.: _Let \(v\) be a \(4_{3}\)-vertex of \(\hat{G}\), and let \(w\) be its neighbor which is not of degree \(2\). Then the distance-two neighbors of \(v\) are distinct, and \(w\) is adjacent to each of them._ Proof.: Let \(v\in V(\hat{G})\) be a \(4_{3}\)-vertex whose neighborhood is labeled as in Figure 11. Since \(\hat{G}\) contains no copy of \(\Theta_{2}\), by Lemma 3.5, \(x_{1},x_{2}\), and \(x_{3}\) are not identified. By Lemma 3.9, for each \(i\), either \(w=x_{i}\) or \(wx_{i}\) is an edge. First, we show that \(w\neq x_{i}\) for any \(i\). If there are distinct \(i\) and \(j\) such that \(w=x_{i}=x_{j}\), then \(\{w,v,v_{i},v_{j}\}\) induces a copy of \(\Theta_{1}\), contradicting Lemma 3.5. If there are \(i\) and \(j\) such that \(w=x_{i}\) and \(wx_{j}\) is an edge, then \(\{w,v,v_{i},v_{j},x_{j}\}\) induces a copy of \(\Theta_{3}\), again contradicting Lemma 3.5. Therefore, \(w\neq x_{i}\) for any \(i\), which means \(wx_{1},wx_{2}\) and \(wx_{3}\in E(\hat{G})\). Finally, if there exist distinct \(i,j\) such that \(x_{i}=x_{j}\), then \(\{w,v,v_{i},v_{j},x_{i}\}\) induces a copy of \(\Theta_{2}\), a contradiction. **Lemma 3.11**.: 1. _No_ \(5_{\geq 2}\)_-vertex is adjacent to a_ \(4_{3}\)_-vertex in_ \(\hat{G}\)_._ 2. _No_ \(4_{0}\)_-vertex is adjacent to two_ \(4_{3}\)_-vertices in_ \(\hat{G}\)_._ 3. _No_ \(5\)_-vertex is adjacent to four_ \(4_{3}\)_-vertices in_ \(\hat{G}\)_._ Proof.: (1) Let \(v\) be the \(4_{3}\)-vertex and its neighborhood be labeled as in Figure 11. Suppose for contradiction \(w\) is a \(5_{\geq 2}\)-vertex. Then some \(x_{i}\) must be a \(2\)-vertex, but this contradicts Lemma 2.11, which forbids a \(2_{1}\)-vertex in \(\hat{G}\). (2) Suppose for contradiction such a \(4_{0}\)-vertex exists. Let \(v\) be one of its \(4_{3}\)-neighbors, and let the neighborhood of \(v\) be labeled as in Figure 11. The \(4_{0}\)-vertex is clearly \(w\), and Figure 11: A \(4_{3}\)-vertex and its neighborhood. by Lemma 3.10, the other \(4_{3}\)-vertex must be some \(x_{i}\), say \(x_{3}\). Applying Lemma 3.10 to the \(4_{3}\)-vertex \(x_{3}\), it follows that \(x_{1}\) and \(x_{2}\) are at distance \(2\) from \(x_{3}\), as in Figure 12. But then the signed graph \(\hat{H}\) induced by \(v,w,x_{1},x_{2},x_{3}\) and the five \(2\)-neighbors of \(v\) and \(x_{3}\) has \(10\) vertices and \(14\) edges. This means \(\rho(H)=2\). By Lemma 3.3, it must be that \(\hat{H}=\hat{G}\). But this is impossible because \(\hat{H}\) has a \(3_{2}\)-vertex \(x_{1}\), contradicting Lemma 2.12. (3) Suppose for contradiction that such a \(5\)-vertex \(w\) exists. Let \(x_{1},x_{2},x_{3}\), and \(x_{4}\) be its \(4_{3}\)-neighbors, and let \(w^{\prime}\) be the remaining neighbor of \(w\). By Lemma 3.10, for each \(i\), the \(3\) distance-two neighbors of \(x_{i}\) are among \(\{w^{\prime},x_{j}:j\neq i\}\). Now, by parity, \(w^{\prime}\) is adjacent to \(0,2\), or \(4\) of the \(x_{i}\)'s. If \(w^{\prime}\) is adjacent to \(2\) (or \(4\)) of them, then the subgraph \(\hat{H}\) induced by \(\{w,w^{\prime},x_{1},x_{2},x_{3},x_{4}\}\) and the \(2\)-neighbors of each \(x_{i}\) has \(\rho(\hat{H})=1\) (or \(0\), respectively). Both contradict Lemma 3.3. This means each \(x_{i}\) is pairwise connected by a path of length two, where the internal vertex of each path has degree \(2\), and \(w^{\prime}\) is adjacent to none of \(w_{i}\)'s, as depicted in Figure 13. But then the edge \(w^{\prime}w\) is a cut-edge in \(\hat{G}\), contradicting Lemma 2.10. ### Discharging part We define a _wealthy vertex_ to be a vertex of one of the following types: a \(4_{0}\)-vertex, a \(5_{\leq 2}\)-vertex, or a \(6^{+}\)-vertex, and a _rich vertex_ to be a vertex of one of the following types: a \(4_{1}\)-vertex, a \(5_{3}\)-vertex, or a wealthy vertex. In the next lemma, we show that every \(3_{1}\)-vertex has a "good" neighbor. **Lemma 3.12**.: _Every pair of adjacent \(3_{1}\)-vertices has a common wealthy neighbor and every \(3_{1}\)-vertex has a rich neighbor._ Proof.: Let \(v\) be a \(3_{1}\)-vertex, and \(w,u\) be its neighbors of degree at least \(3\). By Lemma 3.7, \(w\) and \(u\) are adjacent. First, suppose that \(v\) has a \(3_{1}\)-neighbor, say \(u\). In this case, we shall show that \(w\) is wealthy. By Lemma 3.8, \(w\) has degree at least \(5\). If \(w\) has degree exactly \(5\), then \(w\) is a \(5_{\leq 3}\)-vertex because of its neighbors \(v\) and \(u\). But by Lemma 2.14 (i), \(w\) is not a \(5_{3}\)-vertex. This means \(w\) is either a \(5_{\leq 2}\)-vertex or a \(6^{+}\)-vertex, so \(w\) is wealthy by definition. Now suppose that \(v\) does not have a \(3_{1}\)-neighbor. In this case, we shall show that one of \(w\) or \(u\) is rich. If either is of degree at least \(5\), then it is rich. So we may assume each of \(w\) and \(u\) is of degree at most \(4\). Suppose that one of them is of degree \(3\), say \(w\). By Lemma 3.8 (ii), \(u\) has degree \(4\). By Lemma 2.14 (iii), \(u\) is a \(4_{\leq 1}\)-vertex, so \(u\) is rich. So we may assume each of \(w\) and \(u\) is of degree exactly \(4\). Then by Lemma 2.14 (ii), one of them is a \(4_{\leq 1}\)-vertex, which is rich. Then we show that every \(4_{3}\)-vertex also has a "good" neighbor. **Lemma 3.13**.: _Every \(4_{3}\)-vertex is adjacent to a (unique) wealthy vertex \(w\) in \(\hat{G}\). Moreover, if \(w\) is in a triangle, then it is of degree at least \(6\)._ Proof.: Let \(v\) be a \(4_{3}\)-vertex, \(w\) its non-\(2\)-neighbor, and whose neighborhood is otherwise labeled as in Figure 11. Since \(2_{1}\)-vertices are forbidden by Lemma 2.11, for \(i\in\{1,2,3\}\) each \(x_{i}\) has degree at least \(3\). By Lemma 3.10, \(w\) is adjacent to each distinct \(x_{i}\). This means \(w\) is of degree at least \(4\) and moreover, none of its four labeled neighbors is of degree \(2\). By the definition, \(w\) is a wealthy vertex. For the moreover part, observe that if \(w\) is in a triangle and is of degree at most \(5\), then there must be a copy of \(\Theta_{3}\), contradicting Lemma 3.5. Now we are ready to apply the discharging method. We begin with every vertex having a charge equal to its degree. Thus \[\sum_{v\in V(\hat{G})}c(v)=\sum_{v\in V(\hat{G})}d(v)=2e(\hat{G})\leq 3v(\hat{G} )-2.\] We then apply the following three discharging rules to each vertex \(v\) in \(\hat{G}\). 1. _Every_ \(3^{+}\)_-vertex gives a charge of_ \(\frac{1}{2}\) _to each of its_ \(2\)_-neighbors._ 2. _Every rich vertex gives a charge of_ \(\frac{1}{2}\) _to each of its_ \(3_{1}\)_-neighbors._ 3. _Every wealthy vertex gives a charge of_ \(\frac{1}{2}\) _to each of its_ \(4_{3}\)_-neighbors._ We claim that after discharging, each vertex has a charge of at least \(3\), thus \[\sum_{v\in V(\hat{G})}c(v)=\sum_{v\in V(\hat{G})}c^{\prime}(v)\geq 3v(\hat{G}),\] a contradiction. Let \(v\in V(\hat{G})\). We consider three cases based on the type of \(v\). **Case (1)**. Assume that \(v\) is not rich. This means \(v\) is one of the following types: a \(2\)-vertex, a \(3\)-vertex, a \(4_{\geq 2}\)-vertex, or a \(5_{\geq 4}\)-vertex. The vertex \(v\) may give charge only to its \(2\)-neighbors (if exists) via Rule 1, and may receive charge via Rule 2 or 3. If \(v\) has degree \(2\), then by Lemma 2.11, there is no \(2_{1}\)-vertex and \(v\) has two neighbors of degree at least \(3\). By Rule 1, \(v\) receives a charge of \(\frac{1}{2}\) from each of them. And since \(v\) gives no charge to any neighbor, after discharging \(v\) has a charge of \(3\). If \(v\) is one of \(3_{0}\)-vertex, \(4_{2}\)-vertex, or \(5_{4}\)-vertex, then it has sufficient charge to give to each of its \(2\)-neighbors (if exists) such that it ends with a charge of \(3\). By Lemma 2.12, there exists no \(3_{2}\)-vertex and by lemma 2.11, there exists no \(4_{4}\)-vertex and no \(5_{5}\)-vertex. Hence, the only remaining cases are when \(v\) is a \(3_{1}\)-vertex or \(4_{3}\)-vertex. By Lemmas 3.12 and 3.13, via Rule 2 or 3, each of these vertices receives a charge of at least \(\frac{1}{2}\) from some rich or wealthy neighbor. Therefore, after giving a charge \(\frac{1}{2}\) to each of its \(2\)-neighbors and receiving a charge \(\frac{1}{2}\) from its rich or wealthy neighbor, it ends up with a charge of \(3\). **Case (2)**. Assume that \(v\) is rich but not wealthy. This means \(v\) is either a \(4_{1}\)-vertex or \(5_{3}\)-vertex, and thus it may only give charge to its neighbors via Rules 1 and 2. By Lemma 3.7, if \(v\) is adjacent to a \(3_{1}\)-vertex \(x\), then it is in a triangle with \(x\). By Corollary 3.6, \(v\) is in at most one triangle. If \(v\) is adjacent to two \(3_{1}\)-vertices, say \(x,x^{\prime}\), then \(v,x,x^{\prime}\) are in a triangle. But by the moreover part of Lemma 3.12, this means \(v\) is wealthy, a contradiction. Hence, \(v\) is adjacent to at most one \(3_{1}\)-vertex. This means \(v\) gives a charge of at most \(\frac{1}{2}\) via Rule 2. Thus after giving a charge \(\frac{1}{2}\) to its \(2\)-neighbors via Rule 1, it still has a charge of at least \(3\). **Case (3)**. Assume that \(v\) is wealthy. This means \(v\) is one of the following types: a \(6^{+}\)-vertex, a \(4_{0}\)-vertex, or a \(5_{\leq 2}\)-vertex. If \(v\) is a \(6^{+}\)-vertex, then by Rules 1, 2, and 3, after discharging it always has a charge of \(d(v)-\frac{1}{2}d(v)\geq 3\). Hence, we may assume \(v\) is either a \(4_{0}\)-vertex, or a \(5_{\leq 2}\)-vertex. Note that by Lemma 3.13, Rules 2 and 3 cannot apply at the same time to \(v\). By Corollary 3.6, \(v\) is in at most one triangle and can therefore give a charge of at most \(1\) via Rule 2. Thus after applying Rules 1 and 2, \(v\) is left with a charge of at least \(3\). It remains only to consider when Rule 3 is applied. By Lemma 3.11 (1), if \(v\) is a \(5_{2}\)-vertex, it has no \(4_{3}\)-neighbor, and so gives no charge under Rule 3. By Lemma 3.11 (2), \(v\) is adjacent to at most one \(4_{3}\)-vertex if it is a \(4_{0}\)-vertex, and by Lemma 3.11 (3), \(v\) is adjacent to at most three \(4_{3}\)-vertices if it is a \(5_{\leq 1}\)-vertex. This means after applying Rules 1 and 3, \(v\) still has a charge of at least \(3\). We are done. ## 4 Tightness and discussion We have seen in Lemma 2.7 that \(\hat{W}\) is a \(C_{3}^{*}\)-critical signed graph satisfying that \(e(\hat{W})=\frac{3v(\hat{W})-1}{2}\). Now we provide a sequence of \(C_{3}^{*}\)-critical signed graphs with edge density \(\frac{3}{2}\), showing that the bound in Theorem 1.5 is asymptotically tight. Let \(\hat{S}_{k}\) be the connected signed graph obtained from a negative cycle of length \(k\) by adding a set \(S\) of \(k\) vertices and \(2k\) edges so that each edge of the negative cycle is contained in a positive triangle with a distinct vertex in \(S\), as in Figure 14. It is easy to observe that \(e(\hat{S}_{k})=\frac{3v(\hat{S}_{k})}{2}.\) Note that in any edge-sign preserving homomorphism of a switching-equivalent \(\hat{S}_{k}\) to \(C_{3}^{*}\), every edge of each triangle needs to be positive. Hence, such a signature does not exist because \(\hat{S}_{k}\) has a negative \(k\)-cycle which requires at least one negative edge appearing in one triangle. Therefore \(\hat{S}_{k}\not\to C_{3}^{*}\). Furthermore, there is a signature of \(\hat{S}_{k}\) with exactly two negative edges (both in the same triangle). If any edge \(e\) is deleted, then by symmetry we may assume it is one of those two negative edges, and with this signature \(\hat{S}_{k}-e\to C_{3}^{*}\). So \(\hat{S}_{k}\) is \(C_{3}^{*}\)-critical. Next, we shall prove that the girth bound in Corollary 1.3 for the class of signed projective-planar graphs is tight. We show that there is a signed projective-planar graph of girth \(5\) which does not admit a homomorphism to \(C_{3}^{*}\) in the next lemma. Let \(\hat{P}\) be a signed Petersen graph with a signature such that the edges of one 5-cycle are negative and all the other edges are positive, depicted in Figure 15. We note that \(\hat{P}\) is a signed projective-planar graph of girth 5. **Lemma 4.1**.: _The signed graph \(\hat{P}\) is \(C_{3}^{*}\)-critical._ Proof.: It is easy to see that for any edge \(e\), \(\hat{P}-e\) satisfies that \(\rho(\hat{P}-e)=2\). Hence, by Theorem 3.2 and because of the degrees of the vertices of \(\hat{P}-e\), it does not contain any \(C_{3}^{*}\)-critical signed graph as a subgraph and thus it admits a homomorphism to \(C_{3}^{*}\). Proving that \(\hat{P}\) does not admit a homomorphism to \(C_{3}^{*}\) is a bit more technical. For a contradiction, assume that there exists such a mapping. First, observe that any mapping of a positive 5-cycle to \(C_{3}^{*}\) covers all positive edges of the target while it is the opposite for negative 5-cycles (at least one positive edge of \(C_{3}^{*}\) is not used as the homomorphic image of an edge of a negative 5-cycle). By the high symmetry of the signed Petersen graph, observe also that any negative 5-cycle can be isomorphically seen as the central cycle, denoted by \(S=u_{1}u_{3}u_{5}u_{2}u_{4}\), depicted in Figure 15. Let us study how a negative 5-cycle can be mapped to \(C_{3}^{*}\) and let \(a,b\) and \(c\) denote the vertices of \(C_{3}^{*}\). 1. All vertices \(u_{1}\) up to \(u_{5}\) of the negative cycle \(S\) are mapped to one vertex of \(C_{3}^{*}\), say \(a\). Since \(v_{1}u_{1}u_{4}u_{2}v_{2}\) is a positive 5-cycle, it should cover all the positive edges of the target \(C_{3}^{*}\) so \(v_{1}\) and \(v_{2}\) should be mapped to \(b\) and \(c\) respectively. The same argument stands for the pairs \(v_{2}v_{3}\), \(v_{3}v_{4}\), \(v_{4}v_{5}\), and \(v_{5}v_{1}\). This is impossible since the 5-cycle is not bipartite. So no negative 5-cycle has all its vertices mapped to a single vertex of \(C_{3}^{*}\). 2. Four vertices of \(S\), say \(u_{1}\) up to \(u_{4}\), are mapped to one vertex of \(C_{3}^{*}\), say \(a\) and the vertex \(u_{5}\) is mapped to \(b\). Then, considering the positive cycles \(u_{1}u_{3}v_{3}v_{4}u_{4}\) and \(u_{1}v_{1}v_{2}u_{2}u_{4}\) whose homomorphic images should cover all the positive edges of the target signed graph \(C_{3}^{*}\), we may conclude that \(\{v_{1},v_{2}\}\) is mapped to \(\{b,c\}\) and so does \(\{v_{3},v_{4}\}\). Now if \(v_{1}\) is mapped to \(b\) and \(v_{2}\) to \(c\), then the homomorphism image of the negative 5-cycle \(v_{1}v_{2}u_{2}u_{5}v_{5}\) already covers all positive edges of \(C_{3}^{*}\) which is a contradiction with our first observation. The same result holds for the pair \(\{v_{3},v_{4}\}\). Thus it implies that both \(v_{1}\) and \(v_{4}\) are mapped to \(c\) while both \(v_{2}\) and \(v_{3}\) are mapped to \(b\). This prevents the positive (outer) 5-cycle \(v_{1}v_{2}v_{3}v_{4}v_{5}\) to be mapped to \(C_{3}^{*}\), as wherever \(v_{5}\) is mapped it will not cover the edge \(ab\). So no negative 5-cycle has four of its vertices mapped to a single vertex of \(C_{3}^{*}\). 3. Three consecutive of \(S\), say \(u_{3},u_{1}\) and \(u_{4}\), are mapped to \(a\). In this case, both other vertices \(u_{2}\) and \(u_{5}\) must be mapped to the same vertex, say \(b\), to preserve the sign of the cycle. Since \(v_{3}u_{3}u_{1}u_{4}v_{4}\) is a positive 5-cycle, we may assume without loss of generality that \(v_{3}\) is mapped to \(b\) while \(v_{4}\) is mapped to \(c\). But then the edges of the negative 5-cycle \(u_{2}v_{2}v_{3}v_{4}u_{4}\) already cover all positive edges of the target \(C_{3}^{*}\) which is a contradiction. So no three consecutive vertices of a negative 5-cycle can be mapped to a single vertex of \(C_{3}^{*}\). Most of the technical part is done. Now observe that any three consecutive vertices (i.e., vertices that induce a path) of \(\hat{P}\) are involved in a negative 5-cycle. Together with our previous case study, this implies that no three vertices of a 3-path can be mapped to a single vertex of \(C_{3}^{*}\). In other words, any switching-equivalent signature \(\pi\) of \(\hat{P}\) that allows an edge-sign preserving homomorphism of \((P,\pi)\) to \(C_{3}^{*}\) does not have two incident negative edges. Up to isomorphism, only one switching-equivalent signature \(\pi\) of \(\hat{P}\) achieves this and it is on the right of Figure 15. If such an edge-sign preserving homomorphism was possible, then \(v_{2}\) and \(v_{3}\) are mapped to \(a\), \(v_{4}\) and \(v_{5}\) are mapped to \(b\), and \(u_{2}\) and \(u_{5}\) are mapped to \(c\). In order to prevent the vertices of any 3-path being mapped to a single vertex, it enforces \(u_{3}\) to be mapped to \(b\), \(u_{4}\) to \(a\), and \(v_{1}\) to \(c\). But then \(u_{1}\) cannot be mapped anywhere while preserving the sign of its incident edges. This concludes the proof that \(\hat{P}\) does not admit a homomorphism to \(C_{3}^{*}\). Actually, in a personal communication of the fifth author with R. Naserasr and S. Mishra, it has been shown that \(\chi_{c}(\hat{P})=\frac{10}{3}\) which also implies that \(\hat{P}\not\to C_{3}^{*}\). Here we give the shortest proof that we could achieve for the sake of completeness. However, we don't know if the girth bound of 6 in Corollary 1.3 is tight for the class of signed planar graphs. We have seen in Theorem 1.1 that if a graph \(G\) satisfies that \(e(G)<\frac{5v(G)-2}{3}\), then \(G\to C_{3}\). Hence, any graph \(G\) with average degree less than \(\frac{10}{3}-\frac{4}{3v(G)}\) is 3-colorable. It implies Brook's theorem when \(\Delta=3\), i.e., except \(K_{4}\) (where \(v(K_{4})=4\)), any graph on more than 4 vertices with maximum degree 3 is 3-colorable. The previous argument means that the \(C_{3}\)-critical graph with \(\Delta=3\) is unique and it is \(K_{4}\). We may consider the analogous problem for \(C_{3}^{*}\)-critical signed graphs. It has been proved in Theorem 1.5 that if a signed graph \(\hat{G}\) satisfies that \(e(\hat{G})<\frac{3v(\hat{G})-1}{2}\), then \(\hat{G}\to C_{3}^{*}\). Thus any signed graph with its average degree less than \(3-\frac{1}{v(G)}\) is circular 3-colorable. We pose the following question: **Problem 4.2**.: _Are there finitely many \(C_{3}^{*}\)-critical signed graphs with \(\Delta=3\)?_ Acknowledgement.This work was initiated as a group project at the ANR-HOSIGRA Workshop, held at CAES du CNRS La Villa Clythia in Frejus, France in May 2022. The authors wish to thank ANR project HOSIGRA (ANR-17-CE40-0022) for providing this support. The second and third authors were partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). The fourth author was partially funded by IFCAM project "Applications of graph homomorphisms" (MA/IFCAM/18/39) for this work. The fifth author was partially supported by European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 754362.
2309.10425
Computation of Ultra-Short-Term Prediction Intervals of the Power Prosumption in Active Distribution Networks
Microgrids and, in general, active distribution networks require ultra-short-term prediction, i.e., for sub-second time scales, for specific control decisions. Conventional forecasting methodologies are not effective at such time scales. To address this issue, we propose a non-parametric method for computing ultra short-term prediction intervals (PIs) of the power prosumption of generic electrical-distribution networks. The method groups historical observations into clusters according to the values of influential variables. It is applied either to the original or to the differentiated power-prosumption time series. The clusters are considered statistically representative pools of future realizations of power prosumption (or its derivative). They are used to determine empirical PDFs and, by extracting the quantiles, to deliver PIs for respective arbitrary confidence levels. The models are validated a posteriori by carrying out a performance analysis that uses experimentally observed power-prosumption for different building types, thus allowing the identification of the dominant model.
Plouton Grammatikos, Fabrizio Sossan, Jean-Yves Le Boudec, Mario Paolone
2023-09-19T08:39:41Z
http://arxiv.org/abs/2309.10425v1
Computation of Ultra-Short-Term Prediction Intervals of the Power Prosumption in Active Distribution Networks ###### Abstract Microgrids and, in general, active distribution networks require ultra-short-term prediction, i.e., for sub-second time scales, for specific control decisions. Conventional forecasting methodologies are not effective at such time scales. To address this issue, we propose a non-parametric method for computing ultra short-term prediction intervals (PIs) of the power prosumption of generic electrical-distribution networks. The method groups historical observations into clusters according to the values of influential variables. It is applied either to the original or to the differentiated power-prosumption time series. The clusters are considered statistically representative pools of future realizations of power prosumption (or its derivative). They are used to determine empirical PDFs and, by extracting the quantiles, to deliver PIs for respective arbitrary confidence levels. The models are validated a posteriori by carrying out a performance analysis that uses experimentally observed power-prosumption for different building types, thus allowing the identification of the dominant model. prosumption, forecast, prediction intervals, electrical load, microgrids. ## I Introduction After being the mainstream framework for the integration and coordination of distributed generation, the concepts of an active distribution network (ADN) and a microgrid recently came to prominence to tackle the challenges caused by the large-scale integration of variable renewable generation. ADNs comprise low-voltage (LV) or medium-voltage (MV) electrical grids with systems in place to control a combination of distributed energy resources (DERs), such as generators, loads, and storage devices [1]. Due to the low level of aggregation, the ADN requirements for electrical-power prosumption1 forecasting are different than for conventional large interconnected grids. One example relates to the possible violation of the ampacity rating of transformers, power converters, and lines due to sudden changes in the prosumption associated with the highly stochastic nature of prosumers2. A significant change in the prosumers' renewable-power generation, or a spike in load, can create power-flow variations that can exceed the rating of transformers. Whereas, a spike in current could cause the line relays to trip. This example is particularly relevant in the presence of photovoltaics (PV) and electrical-vehicle (EV) charging stations (CS) in the grid (e.g., [2]). The former can exhibit power variations of even 60% of their capacity in under a second [3], whereas the latter can cause significant load changes of hundreds of kW within a few seconds [4]. The prediction of the prosumption can be integrated into various real-time (RT) (e.g., [5]) and model-predictive control (MPC) (e.g., [6]) frameworks to ensure the safe operation of the ADNs and the optimal usage of their resources. Footnote 1: In power systems, the prosumption indicates the aggregated power providor or consumed by users that have the capability to generate electricity by means of user-owned distributed generation locally. Footnote 2: A node that can both absorb or inject power due to prosumption. A further example is based on the capability of microgrids to operate autonomously. When connected to the external grid, microgrids can provide ancillary services to the upper grid layer [7]. Whereas, in case of contingencies, they can operate islanded to the main grid and can enhance the resiliency of the supply to the local load. The islanding maneuver (i.e., the operation sequence for bringing a microgrid from connected to off-grid) requires a prediction of the prosumption in the range of the fundamental frequency period (20 ms) in order to correctly set the gains of the slack resource droop control (e.g., [8]). A third example, which is of high importance both in distribution and transmission networks, is that of voltage sags [9, 10]. Voltage sags are defined as a sudden reduction of the voltage between 90% and 10% of the nominal value and can last from 10 ms up to 1 minute. They are caused primarily by power-system faults, such as short circuits, or by the start-up of large motors and can cause system outages if not treated in time. For distribution systems, where voltage sags typically last between 90-2000 ms [11], in order to mitigate the voltage drop, a control framework equipped with a forecasting tool acting in the sub-second range could have a timely reaction to the voltage sag by injecting an optimally computed active/reactive power into the grid. As ultra-short-term power-prosumption forecasts are actionable for fundamental decisions in the context of ADN/microgrid operation and their RT control, we note that well-established forecasting methodologies (e.g., developed using several techniques, such as regression-based model, artificial neural network [12, 13, 14, 15, 16, 17, 18, 19]) are not suited to this purpose because, besides referring to point predictions, they were developed considering a high level of aggregation and forecasting horizon from 15 minutes and up. Moreover, in certain applications, such as robust optimization (e.g., [20]), worst-case analysis is required; therefore, point predictions are inadequate. As stated in [21] and further supported by the experimental measurements of this paper, when decreasing the aggregation level and measurements sampling time, the power-prosumption volatility and noise level become prominent because consumer behaviors tend not to cancel out. As the current state of the art appears to be inadequate to deliver ultra-short-term power-prosumption prediction intervals (PIs) of ADN prosumers, we propose an adaptive non-parametric method based on pattern recognition. The algorithm is designed to be computationally efficient, thus allowing for the delivery of high-time resolution probabilistic PIs in RT and at a high sampling rate with low computational overhead. The model is initially trained using a time series of the aggregated-power prosumption without requiring any knowledge of the nature and number of loads/generators present in the network. With an efficient updating and aging procedure, it is then continuously updated as new measurements become available. The paper is organized as follows. In Section II, the problem of estimating the power prosumption PI is stated, along with a review of methods already developed in the context of power system applications. In Section III, we describe the proposed PI models whose performance is analyzed in Section IV by using experimental data for different building types. Finally, in Section V, we summarize the findings. ## II Problem Statement As stated in the previous section, most of the existing literature on power-prosumption forecasting is concerned with point predictions, specifically the problem of estimating the expected realization of the power prosumption for a given look-ahead time. Whereas, we target the computation of PIs; in other words, we predict, with a given confidence level, the interval where the future power-prosumption realization is expected to lie. Denoting the PI at the target confidence level \(\alpha\) as the couple \((P^{\downarrow\alpha},P^{\uparrow\alpha})\) composed by the lower and upper bound of the interval, we address the problem that consists in finding the one-step-ahead PI as a function of a sequence of \(n\) historical power-prosumption measurements until the time instant \(i\), specifically: \[\left(P^{\downarrow\alpha}_{i+1|i},P^{\uparrow\alpha}_{i+1|i}\right)=f(P_{i}, \ldots,P_{i+1-n}) \tag{1}\] where \(i\) is the current time interval, and \(f\) is a PI estimation model. When using parametric point predictors (such as autoregressive integrated moving average (ARIMA) models), we can determine the PIs by estimating the variance of the model residuals and computing the quantiles for the prescribed confidence level. This procedure can be performed under the hypothesis of Gaussian _iid_ (independent and identically distributed) model residuals. In cases where this hypothesis does not hold, non-parametric methods could be considered. For example, in order to determine the PIs of the power output of a wind farm, the authors of [22] apply quantile regression to characterize the historical residuals of a state-of-the-art point-prediction model. The same concept, but developed using fuzzy inference instead of quantile regression, is described in [23]. As far as forecasting the electrical-power prosumption is concerned, non-parametric methods have been proposed lately in [24] and [25]. In the former work, an artificial neural network (ANN) with an empirically chosen number of layers was trained using historical data to provide a 30-minute-ahead PI for a given confidence level according to the values of selected data features. The latter work is concerned with predicting the minimum and maximum bounds of the power consumption by applying empirical-mode decomposition and support-vector regression to an interval-valued time signal obtained from a one-hour historical sample of power consumption. Both methods target a prediction horizon that is too long for the requirements of RT ADN/microgrid operation discussed in the introduction. ## III Computation of Prediction Intervals As it will be exhaustively described in the rest of this section, the estimation methodology for PIs consists in grouping historical power-prosumption measurements into clusters according to the value of selected influential variables. At the time of delivering a PI, the values of the influential variables are determined, thus allowing for the selection of the appropriate cluster that is finally used as the empirical PDF (probability distribution function) of the next realization. The algorithm is designed to deliver a PI in rolling RT with a minimum report rate of 20ms (longer values are also analyzed). The algorithm operation sequence is sketched in Fig. 1. The first phase, called batch training, consists of the off-line training of the estimation model that uses historical data. In the second phase, the one-step-ahead PI is delivered. And finally, in the third phase, the new progressively available measurements are used for the on-line model training. This methodology is applied in two flavors, specifically on the original and differentiated time-series, as discussed in the following two sections. ### _PI Model A_ #### Iii-A1 Off-Line Batch Training We consider \(n\) historical power-prosumption measurements \(P_{i+1-n},\ldots,P_{i}\) and the re Fig. 1: Operation sequence of the PI estimation models. The batch training is performed off-line, whereas the PIs computation and on-line training are performed in rolling RT. Ellipses denote the input and output of each phase. -spective time stamp \(t_{i+1-n},\ldots,t_{i}\). Power-prosumption measurements are discretized in time and amplitude with user-defined discretization steps \(\Delta T\) and \(\Delta P\), respectively. We define \(c\) as the operator that determines a non-negative integer \(l\), said _label_, according to the power-prosumption value and timestamp in the following way: \[l_{j}=c(P_{j},t_{j}),\;\;l_{j}\in\{0,1,\ldots,L-1\}. \tag{2}\] where \(j\) is a generic time slot, and \(L\) is the total number of labels. The process used to operate the classification \(c\) is described in detail in Section III-C. The label in (2) is utilized to group the historical power-prosumption measurements \(\mathcal{P}_{i,n}\) into \(L\) clusters, denoted as \(\mathcal{G}_{i}^{0},\ldots,\mathcal{G}_{i}^{L-1}\). Each cluster contains all the historical power-prosumption measurements for which the previous observation was of the respective given label. For example, the cluster \(\mathcal{G}_{i}^{0}\) contains the measurements until the time slot \(i\) for which the respective previous realization was with label \(0\), \(\mathcal{G}_{i}^{1}\) those for which the respective previous realization was with label \(1\), and so on. Formally, the clusters are defined as: \[\begin{split}\mathbb{G}_{i}^{l}=\{P_{j+1}:c(P_{j},t_{j})=l,\;j=i- n,\ldots,i\},\\ l=0,\ldots,L-1.\end{split} \tag{3}\] Let \(\mathcal{G}_{i}^{0},\ldots,\mathcal{G}_{i}^{L-1}\) be the normalized histogram of each cluster computed as \[\mathcal{G}_{i}^{l}(x)=\frac{1}{\left|\mathbb{G}_{i}^{l}\right|}\sum_{p\in \mathcal{G}_{i}^{l}}\delta(x-p),\;\;l=0,\ldots,L-1 \tag{4}\] where \(|\cdot|\) denotes the set cardinality (i.e., the number of elements it contains) and \(\delta\) is the Dirac measure: \[\delta(x)=\begin{cases}1&x=0\\ 0&\text{otherwise}.\end{cases} \tag{5}\] As the power prosumption is bounded, say between \(P_{\text{min}}\) and \(P_{\text{max}}\), histograms in (4) are defined over a finite domain. Specifically, the domain is as: \[x\in\mathbb{X}=\{P_{\text{min}},P_{\text{min}}+\Delta P,P_{\text{min}}+2\Delta P,\ldots,P_{\text{max}}\}. \tag{6}\] The value of \(\Delta P\) is chosen as a trade-off between accuracy and computational efficiency. Indeed, the smaller the \(\Delta P\) is, the more accurate the prediction of the PI is. This aspect is made clear below. However, choosing a very small step will require more memory to store all the measurements, thus resulting in a slower computation of the PI. #### Ii-C2 On-Line PI Estimation At time \(i\), the objective of the PI estimator is to determine the PI for the time slot \(i+1\) at a given arbitrary target confidence level, said \(\alpha\). The underlying idea is to assume the clusters (3) as a statistically representative pool of possible realizations of the one-step-ahead power-prosumption realization. Therefore, the normalized histograms (4) are assumed to be discrete PDFs and used to extract the symmetric quantiles corresponding to the \(\alpha\) confidence level. Let \[l_{i}=c(P_{i},t_{i}) \tag{7}\] be the label calculated with the information at the current time instant. The PI lower and upper bounds are determined as \[P_{i+1|i}^{\downarrow\alpha}=(1-\alpha)/2\;\;\text{ quantile of}\;\;\mathcal{G}_{i}^{l_{i}}, \tag{8}\] \[P_{i+1|i}^{\uparrow\alpha}=(1+\alpha)/2\;\;\text{ quantile of}\;\; \mathcal{G}_{i}^{l_{i}}. \tag{9}\] For the sake of clarity, the lower and upper quantiles in the expressions above are approximated by, respectively, (also see Fig. 2): \[\inf_{x\in\mathbb{X}}\left\{x:F(x)\geq(1-\alpha)/2\right\} \tag{10}\] and \[\sup_{x\in\mathbb{X}}\left\{x:F(x)\leq(1-\alpha)/2\right\}, \tag{11}\] where \(F\) denotes the discrete CDF (cumulative distribution function) of \(\mathcal{G}_{i}^{l}\) calculated by computing its cumulative sum. There are two advantages to this approach. First, histograms carry the complete information over the empirical PDFs, thus allowing for computing PIs at arbitrary confidence levels by training only one model. Second, it overcomes the problem of quantile crossing that arises, for example, in [22], from treating the bounds at a given confidence level as two different time series. #### Ii-C3 On-Line Training as time passes, new measurements become available and can be included to improve future PI estimates. Once the outcome \(P_{i+1}\) is known, the normalized histogram associated with the label \(l_{i}\) is updated with the new information, and the other histograms stay the same. Formally, the training procedure is as follows: \[\mathcal{G}_{i+1}^{l}(x)=\begin{cases}\phi\mathcal{G}_{i}^{l}(x)+(1-\phi) \delta(x-P_{i+1})&l=l_{i}\\ \mathcal{G}_{i}^{l}(x)&l\neq l_{i},\end{cases} \tag{12}\] where \[\phi=\frac{T_{\phi}/T}{T_{\phi}/T+1} \tag{13}\] is called the _forgetting factor_, \(T\) is the measurement period and \(T_{\phi}\) is called the _forgetting time constant_. The forgetting factor controls how much past measurements influence the computation of PIs. Specifically, each new measurement has the same weight in the computation as all the measurements in the past \(T_{\phi}\) seconds. The adoption of such a forgetting factor is important in order to track changes in the composition of prosumers' load/generation patterns. Fig. 2: Exemplification of the procedure (10)-(11) to approximate the quantiles for (9)-(8). In this case, the target confidence level \(\alpha\) is 80%. ### _PI Model B_ #### Ii-B1 Off-Line Batch Training we apply the same principles described for Model A but on the once differentiated power-prosumption training data-set. The differentiated time series is denoted as: \[B_{j}=P_{j}-P_{j-1},\;\;j=i-n+1,\ldots,i. \tag{14}\] The observations clusters are now calculated as follows: \[\mathbb{E}_{i}^{l}=\{B_{j+1}:c(P_{j},t_{j})=l,\;j=i-n+1,\ldots,i\}, \tag{15}\] \[l=0,\ldots,L-1,\] and are used to determine the normalized histograms of the differentiated power prosumption time series, which are denoted as \(\mathcal{H}_{i}^{0},\ldots,\mathcal{H}_{i}^{L-1}\). #### Ii-B2 On-Line PI Estimation The one-step-ahead PI bounds are computed as: \[P_{i+1|}^{l\alpha} =P_{i}+(1-\alpha)/2\text{ quantile of }\mathcal{H}_{i}^{l_{i}}, \tag{16}\] \[P_{i+1|i}^{\dagger\alpha} =P_{i}+(1+\alpha)/2\text{ quantile of }\mathcal{H}_{i}^{l_{i}}, \tag{17}\] i.e., the current power-prosumption plus two back-off terms representing the expected power prosumption variation with respect to the current realization \(P_{i}\). #### Ii-B3 On-Line Training Once the prosumption \(P_{i+1}\) is known and the power difference \(B_{i+1}\) is computed, the normalized histogram corresponding to the current label \(l_{i}\) is updated by adding the new differenced value, whereas the others stay the same, i.e. \[\mathcal{H}_{i+1}^{l}(x)=\begin{cases}\phi\mathcal{H}_{i}^{l}(x)+(1-\phi) \delta(x-B_{i+1})&l=l_{i}\\ \mathcal{H}_{i}^{l}(x)&l\neq l_{i}.\end{cases} \tag{18}\] ### _Classification According to Influential Variables_ The assignment performed through the function \(c(\cdot)\) in (2) is realized by first clustering the historical measurements into groups based on similarities with respect to chosen influential variables and, then, assigning a cluster label to each new measurement. Needless to say, influential (or explanatory) variables are quantities that have an influence on the power prosumption. In general, they can be discovered using numerical methods (like analysis of variance, correlation analysis, or other procedures [26]) or identified by exploiting any empirical knowledge on the observed process (in our case the structure of prosumption of a given node). In this work, influential variables are chosen by adopting the latter approach. The chosen variables are (i) power magnitudes and (ii) time of day (in seconds). The first variable accounts that, in a limited capacity feeder, prosumption variations depend on the same power-prosumption magnitude. For example, when the consumption is large, a load disconnection is more likely than a load insertion because many loads are already active, and vice-versa. The second variable is supposed to capture the different power-prosumption patterns that might occur during the day. The historical measurements are grouped, by using a k-means approach, into \(L\) clusters, according to the values of the influential variables. An example of the clustering method is shown on Fig. 3 for a node with a level-3 EV charging station. The k-means algorithm was trained with two weeks of historical measurements by using both the power level and the time of day as influential variables and by grouping the measurements into \(L=8\) clusters, each of which is labeled with a different color in the figure. The value of \(L\) is chosen by the user and is fixed a priori. Its influence on the performance of the algorithm is studied in Section IV. During the off-line and on-line training phases of the algorithm, the label of measurement \((P_{i},t_{i})\) is computed as follows: \[l_{i}=\operatorname*{arg\,min}_{l=0,\ldots,L-1}d(e(P_{i},t_{i}),c_{l}) \tag{19}\] where \(e(P_{i},t_{i})\) is the point in the \(m-d\) space whose coordinates are the \(m\) influential variables chosen in the classification scheme, \(c_{l}\) is the center of cluster \(l\) and \(d(\cdot,\cdot)\) is the Euclidean distance between two points. The different classification schemes are introduced with the objective of performing an a-posteriori validation of the selection process of influential variables and the number of clusters. Indeed, as stated earlier in this section, influential variables are assigned by exploiting the empirical knowledge of the process: By comparing the performance of different classification schemes (in Section IV), it is possible to infer whether the progressively more complex classification schemes are meaningful or not. ### _Implementation Aspects and Complexity_ The main design requirement of the proposed algorithm is to deliver PIs in RT in order to, for example, activate enough capacity in inertia-less microgrids or to assist in the decision process of setting the droop controller of slack generators in the islanding maneuver [8]. Given the large PIs reporting rate, computational complexity is a central aspect and hence is addressed in this section. The batch-training phase does not have RT requirements because it is performed off-line. This consists in labeling each observation of the training data-set by Fig. 3: Visualization of k-means clustering for a node with an EV charging station with 8 clusters and using the power level and the time of day as influential variables. Positive power is consumption. applying the discussed classification algorithm. As the number of labels \(L\) is fixed by design, the overall complexity of the classification procedure, given by Eq. (19), for one observation is constant time, or \(O(1)\). Iterating it over a set of \(N\) training data is an operation with linear time complexity, or \(O(N)\). The computation of the PIs and the on-line training are performed in rolling RT. The former operation requires computing the label in (7), which is \(O(1)\), and the PI bounds by (8)-(11), which involve a minimum and maximum search over the set \(\mathbb{X}\), a problem with log time complexity with respect to the set cardinality, \(O(\log_{2}|\mathbb{X}|)\), e.g., using a binary search that can exploit the monotonicity of the discrete CDFs. However, as the cardinality of \(\mathbb{X}\) is fixed by design in (6), the complexity of the problem can be regarded as constant time. Therefore, delivering PIs and performing on-line training are procedures whose complexities do not scale with the size of the problem. The training data and progressively incoming measurements are encoded in \(L\) normalized histograms. Each of them is stored using \(2\times|\mathbb{X}|\) doubles, specifically the height and value of each bin. For example, assuming a discretization of 1024 levels (10 bit), preserving the information for a 1 year at 20 ms of resolution with the proposed method requires 128 kB per label (considering a double representation of 64-bit), while storing the individual values would require approximately 14 Gb. ## IV Performance Evaluation ### _A real case application: university buildings_ To test the performance of the proposed models, we consider three sequences of power-prosumption measurements that were recorded from different points inside the MV distribution network of the EPFL campus. The first one consists of an office building with a maximum consumption of 80 kW and that is equipped with a 30 kVA roof PV; the second one includes an office load with a maximum consumption of 30 kW and a 150 kW level 3 EV charging station; and the third one is a heat pump with a maximum consumption of 1.5 MW. The measurements are with a resolution of 20 ms and are provided by a PMU-based metering infrastructure that has been deployed on the university campus (see [27]). We consider 45 days of historical power-prosumption measurements that span the period of September-October 2022. In each case, we consider two weeks of training data; they are used to construct the clusters (as explained in Section III-C) and to perform the off-line training of the algorithm. Then, the proposed PI estimation models are operated for a month (with on-line training), and the estimated PIs are validated against the latter data-set, at 20 ms resolution. Each month consists of approximately 130 million data points. The evaluation is performed in a simulated environment coded in C++. The simulations are executed in a Windows Server with 128GB RAM and an Intel Xeon Gold 6130 CPU at 2.10GHz. ### _Performance Metrics_ We introduce the following metrics to allow for a quantitative comparison between the performance of models and classification schemes. The first is the PI normalized averaged width (PINAW), which is as follows: \[\text{PINAW}=\frac{1}{N}\sum_{j=1}^{N}(P_{j}^{\dagger\alpha}-P_{j}^{\downarrow \alpha})/P_{\text{nom}}. \tag{20}\] The second metric is the PI coverage probability (PICP), i.e. the percentage of power-prosumption realization that falls inside the predicted PI. It is as follows: \[\text{PICP}=\frac{1}{N}\sum_{j=1}^{N}b_{j}^{\alpha} \tag{21}\] where \[b_{j}^{\alpha}=\begin{cases}1,&P_{j}^{\downarrow\alpha}\leq P_{j}\leq P_{j}^{ \dagger\alpha}\\ 0,&\text{otherwise}.\end{cases} \tag{22}\] Because there is a trade-off between the width of the PI and the accuracy of the model, it is imperative that we define a third metric to quantify it. The metric we chose for this purpose is a modification of the coverage width-based criterion (CWC) proposed in [28] and is defined as follows: \[\text{CWC}=\text{PINAW}\max(1,e^{-\mu\frac{\text{CWC}-\mu}{1-\alpha}}) \tag{23}\] where \(\mu\) is a user-defined parameter that quantifies the trade-off between PICP and PINAW. For our experiments, we chose \(\mu=\frac{log(10)}{10}\). This means that a deviation of one order of magnitude in the error rate penalizes the width of the interval by \(10\) times. The same result can be achieved with a linear penalty with slope \(0.9\), as can be seen in Fig. 4. The \(x\)-axis is the percentage deviation of the error rate from the target error rate, whereas the \(y\)-axis is the ratio \(\frac{\text{CWC}}{\text{PINAW}}\). We observe that the exponential function penalizes less severely the error rates within one order of magnitude in comparison to the linear function. However, with our choice, higher error rates are more penalized. Fig. 4: Visual representation of the CWC. \(\text{error}=1-\text{PICP}\) is the error rate of the algorithm, and target \(=1-\alpha\) is the target error rate. The red line indicates Eq. (23) with the chosen value of \(\mu\), and the blue line indicates a linear penalty. ### _Clustering Using Power Levels_ To evaluate the algorithm, we consider four target confidence levels \(\alpha\), namely 99, 99.9, 99.99, and 99.999%. The parameters to be analyzed are (i) the model A or B, (ii) the number of clusters \(L\) and (iii) the forgetting factor \(\phi\) or, equivalently, the forgetting constant \(T_{\phi}\). Each combination of model, \(L\) and \(T_{\phi}\) is called a _configuration_. To choose the best configuration for each building, we run the algorithm for both models A and B, with all possible combinations of values of \(L\) in the set \(\{1,8,64,256,512,1024\}\) and \(T_{\phi}\) in the set \(\{1,60,3600,21600,86400,604800\}\) seconds3. For all the experiments, the value of the quantization step \(\Delta P\) was chosen such that the size of the domain \(\mathbb{X}\) (see Eq. (6)) is equal to 2000 points. It was experimentally observed that the values of PIs computed using larger domains did not change within three significant digits. Also, the average time needed to compute the PI and do one cycle of on-line training was around \(50\mu s\); thus, the method meets the RT requirements. Footnote 3: The values correspond to 1 second, 1 minute, 1 hour, 6 hours, 1 day and 1 week respectively. For each combination of models, of a number of clusters, and time constants, and for each confidence level \(\alpha\), we compute the three metrics PINAW, PICP, and CWC. Tables I, II, and III show the top \(5\) configurations, i.e., those that achieve the smallest CWC, for each target confidence level, for the three buildings, respectively. Concerning the office building and the one hosting the EV charging station, model B outperforms model A for every confidence level. Also, for confidence levels up to \(99.9\%\) the algorithm is able to achieve the target confidence. For higher confidence levels, however, the algorithm misses the target confidence by at most one order of magnitude. Given our choice of the parameter \(\mu\), this means that the computed CI is penalized by a maximum factor of \(10\) in the computation of CWC. Regarding the optimal configuration, fewer clusters, together with larger forgetting time constants, are better choices. More specifically, the optimal number of clusters \(L\) does not need to be larger than 8, whereas a value of \(T_{\phi}\) between one day and one week is the optimal choice. By taking a closer look at Tables I and II, we observe a trade-off between the values of \(L\) and \(T_{\phi}\) in the performance of the algorithm. If we focus on Table Ia, for example, we notice that we could choose, without affecting the value of CWC by more than \(1\%\), either a combination of \(8\) clusters and a forgetting time constant of one week, or \(256\) clusters and one day, or \(1024\) clusters and \(6\) hours. In fact, this table showcases that the choice of \(L\) and \(T_{\phi}\) has a minor effect on the computation of PIs for the office building. The performance of the algorithm is mainly influenced by the differentiation of the measurements performed by model B. For the charging station, however, more clusters perform up to \(30\%\) worse compared to the optimal case of only \(8\) clusters. The results are quite different for the heat pump in Table III. First of all, the algorithm can only approach the target confidence levels within one order of magnitude. Unlike the other two buildings, the optimal results are achieved by a combination of few clusters (ideally only one) and a small forgetting time constant (less than an hour). This implies that the consumption of the heat pump changes more rapidly than that of the office building and the charging station, and that the measurements older than one hour do not influence the computation of PIs. We also observe that model A outperforms model B for large confidence levels. Even though the value of PINAW computed by model A is up to \(5\) times larger than the one computed by model B, model B fails to achieve the target confidence level by up to two orders of magnitude, which results in an exponential increase in the metric CWC. Overall, we conclude that the optimal choice of model, \(L\) and \(T_{\phi}\) depends on the characteristics of the prosumer. Nodes with low volatility, such as an office building, benefit from a long memory of up to one week, whereas nodes characterized by rapid power changes require the use of short memory. The clustering of power measurements seems to benefit mainly nodes with clearly distinct power-levels, such as those at a charging station. But in any case, the number of clusters does not need to be more than \(8\). Finally, the differentiation of power (i.e., model B) improves the computation of PIs, provided that the power differentials have low volatility, as is the case for the office building and the charging station. ### _Clustering Using Power Levels and Time of Day_ In this section, we consider whether adding the time of day (TOD) as a feature of the algorithm, in addition to the power level (P), would improve the performance of the algorithm. The idea is that the clustering based on TOD will capture the patterns that the prosumption exhibits during the day and that this might affect the computation of the PIs. We re-run the experiments of the previous section, with the addition of the TOD. The best configuration is again the one that achieves the smallest value of CWC for each target confidence level. The comparison between the two cases, specifically (i) using only power level (P) and (ii) using power level and the time of day (P+TOD), is shown on Table IV. We observe that the best configuration for the two clustering strategies is the same in most cases. For the office building, the introduction of the TOD does not affect the value of CWC by more than \(1\%\), which confirms once again that the clustering method does not affect the performance of the algorithm. For the charging station, the value of CWC differs by \(5-10\%\) between the two clustering methods. However, the introduction of the new feature could either improve or worsen the performance, depending on the confidence level. Therefore, we cannot conclude whether it consistently performs better. As far as the heat pump is concerned, the usage of the TOD in the clustering has no effect on the performance, because the algorithm performs better when all measurements are put in one cluster. The results indicate that this feature does not play a critical role in the computation of PIs. Perhaps other features, apart from the power level and the time of day, could influence the performance of the algorithm. The effect of additional features could be studied in future research. ### _Effect of the Measurement Period_ A crucial objective of this work is to find out how the proposed algorithm scales as the measurement period increases. To test the algorithm on different measurement periods, we integrate the available data-set. In particular, given the measurements \(P_{j}^{20},j=1..N\) at \(20ms\) resolution, the measurements at resolution \(T\)\((ms)\) are recomputed as: \[P_{i}^{T}=\frac{20}{T}\sum_{j=(i-1)\frac{T}{20}+1}^{i\frac{20}{T}}P_{j}^{20}, i=1..\frac{20}{T}N \tag{24}\] where \(T\) is assumed to be a multiple of \(20ms\). The forgetting factor \(\phi\) is scaled according to \(T\), as in Eq. (13). For this and the following section, we do clustering based only on the power level. We perform again simulations with varying cluster numbers \(L\) and forgetting time constants \(T_{\phi}\) for different confidence levels. In Fig. 5, we plot the values of PINAW and PICP as a function of the measurement period from \(20ms\) up to \(300s\) for the three buildings considered. Each point on the graphs corresponds to a different configuration (the one that achieves the smallest CWC), as shown in Table V. The graph for the office building showcases that the algorithm performs well even for larger measurement periods, provided that the target confidence level is less than \(99.99\%\). Indeed, for low confidence levels, the error rate is kept below the target, and the average width of the PI is less than \(20\%\) of the nominal value. For larger confidence levels, however, the algorithm fails because either the target level is not achieved or the width of the PI is so high that it becomes useless for grid control. It is worth noting that at a resolution of five minutes, the result of the algorithm is identical for all target confidence levels above \(99.99\%\). This implies that, when the measurements are sparse, we might need a larger history to meet high confidence levels. Similar observations can be made for the charging station. The main difference is that the maximum measurement period with an acceptable performance depends on the target confidence level. With target confidence of \(99\%\), the algorithm computes a low PI width for measurement periods up to \(10sec\); whereas, with \(99.9-99.999\%\), it has acceptable performance but only when the resolution is sub-second. For higher confidence, the algorithm cannot predict accurate PIs for periods larger than \(20ms\). Looking at the results for the heat pump on Fig. 5, we notice a break in the trend of increasing PINAW. To understand why this happens, we look at Table Vc. The best configuration for confidence levels \(99.99-99.999\%\) changes from using model A at \(20ms\) resolution to model B at higher resolutions. Model A has been shown in Section IV-C to generate, in general, larger PIs than model B. This increase in the width results in a lower error rate, which in turn might reduce the value of CWC, which is the sole metric used to compare configurations to one another. The value of CWC ultimately depends on the choice of the parameter \(\mu\), which might affect the trends in the graphs of Fig. 5. Fig. 5: Performance evaluation as a function of the measurement period ### _Confidence Level Uncertainty_ The metrics of Section IV-B evaluate, over the full one-month period, the average performance of the algorithm. However, it would be interesting to see how the algorithm performs over time. For this purpose, we split the one-month evaluation period into six-hour windows and compute the error rate achieved by the algorithm in each window. Hence, for each confidence level, we compute a histogram of \(120\) estimations of the error rate. We then depict the statistical measures of the histograms using the box plots of Fig. 6. The blue box indicates the 25th and 75th percentiles, the red line is the median, the red crosses are outliers, and the small square indicates the target error rate. We computed the box plots only for the best configuration for each building and for confidence level at \(20ms\) resolution, which are shown in Table V. If the algorithm is consistent in predicting accurate PIs, then the median of the box plot should be close to the target error-rate. We see that this is indeed the case for the office building. For the largest confidence level, our algorithm performs even better than expected, albeit with many outliers. For the charging station, the performance is sometimes worse than expected, but the target error-rate is nevertheless contained within the 25th and the 75th percentile of the box plot. This is not always the case for the heat pump, hence the algorithm cannot consistently estimate accurate PIs if there is high volatility in the measurements. Fig. 6: Box plot of model accuracy versus target confidence level at \(20ms\) resolution The results showcase that our algorithm computes accurate PIs, irrespective of the building, provided that the target confidence level is less than \(99.9\%\). For higher confidence levels, the performance is still acceptable, but the consistency of the predictions depends on the dynamics of the node. ## V Conclusion Motivated by the requirements of ADNs real-time control, we have presented a non-parametric method for computing ultra-short-term PIs (prediction intervals) of the power prosumption in generic ADNs (e.g. buildings). The method consists in grouping historical measurements into clusters, according to the value of selected influential variables. The clusters are considered statistically representative pools of future power-prosumption realizations and are used to extract PIs at arbitrary confidence levels by calculating the quantiles from the respective PDF. The proposed method has been applied to the original and once-differentiated power-prosumption time series, and different influential variables have been considered. The performance of the method was tested for different types of prosumers by using experimental measurements from an MV distribution network. The performance analysis enabled us to make an a-posteriori selection of the parameters of the algorithm. The algorithm was shown to compute relatively narrow PIs for the studied prosumers, for time resolutions from \(20ms\) up to a few minutes in some cases, provided that the target confidence level is below \(99.9\%\). A final statement concerns the computational complexity, which becomes a relevant concern especially when considering densely sampled time series and the high reporting rate for the predictions. We have shown that the proposed algorithm performs the PI computation and on-line training in constant time hence is scalable.
2309.16054
Helium Enhanced Planets Along the Upper Edge of the Radius Valley
The low mean densities of sub-Neptunes imply that they formed within a few million years and accreted primordial envelopes. Because these planets receive a total X-ray and extreme ultra-violet flux that is comparable to the gravitational binding energy of their envelopes, their primordial hydrogen-helium atmospheres are susceptible to mass loss. Models of photoevaporating sub-Neptunes have so far assumed that envelope compositions remain constant over time. However, preferential loss of atmospheric hydrogen has the potential to change their compositions. Here, by modeling the thermal and compositional evolution of sub-Neptunes undergoing atmospheric escape with diffusive separation between hydrogen and helium, we show that planets with radii between 1.6 and 2.5 that of Earth can become helium-enhanced from billions of years of photoevaporation, obtaining helium mass fractions in excess of 40%. Atmospheric helium enhancement can be detected through transmission spectra, providing a novel observational test for whether atmospheric escape creates the radius valley.
Isaac Malsky, Leslie Rogers, Eliza M. R. Kempton, Nadejda Marounina
2023-09-27T22:29:04Z
http://arxiv.org/abs/2309.16054v1
# Helium Enhanced Planets Along the Upper Edge of the Radius Valley ###### Abstract The low mean densities of sub-Neptunes imply that they formed within a few million years and accreted primordial envelopes[1]. Because these planets receive a total X-ray and extreme ultra-violet flux that is comparable to the gravitational binding energy of their envelopes, their primordial hydrogen-helium atmospheres are susceptible to mass loss[2]. Models of photoevaporating sub-Neptunes have so far assumed that envelope compositions remain constant over time. However, preferential loss of atmospheric hydrogen has the potential to change their compositions. Here, by modeling the thermal and compositional evolution of sub-Neptunes undergoing atmospheric escape with diffusive separation between hydrogen and helium, we show that planets with radii between 1.6 and 2.5 R\({}_{\oplus}\) can become helium-enhanced from billions of years of photoevaporation, obtaining helium mass fractions in excess of 40%. Atmospheric helium enhancement can be detected through transmission spectra, providing a novel observational test for whether atmospheric escape creates the radius valley[3]. Planets with orbital periods shorter than 100 days and radii smaller than Neptune outnumber planets larger than Neptune by a factor of ten[4]. The _Kepler_ survey has revealed that the radius distribution of this population is bimodal: there is a scarcity of planets between 1.5 and 2.0 \(R_{\oplus}\) and peaks in the occurrence rate at \(\sim\)1.3 \(R_{\oplus}\) (super-Earths) and \(\sim\)2.4 \(R_{\oplus}\) (sub-Neptunes)[3]. Planet mass-radius measurements show that most planets larger than 1.6 R\({}_{\oplus}\) have low mean densities, requiring voluminous volatile envelopes[1]. Although exoplanet surveys have begun to constrain the structure of this population, much of the formation and compositional evolution of sub-Neptunes remain a mystery, and a large variety of bulk compositions are _a priori_ possible[5]. Primordial hydrogen and helium accreted from the protoplanetary disk, H\({}_{2}\)O accreted in the form of solid icy material, and atmospheric H\({}_{2}\)O created through magma-atmosphere interactions and volcanic outgassing[6] may all contribute to the volatile envelopes of sub-Neptunes. The radius valley could be explained as the outcome of multiple planet formation pathways. Atmospheric escape may shape the evolution of highly irradiated sub-Neptunes, bifurcating the population based on envelope retention[7]. The primordial hydrogen-helium envelopes surrounding sub-Neptunes are susceptible to mass loss driven by ionizing radiation from their host star[8] and the thermal energy released from the planetary cores[9]. Planets that retain their envelopes may comprise the larger 2.4 R\({}_{\oplus}\) sub-Neptune mode of the radius distribution, while the evaporated cores of former sub-Neptunes may make up the smaller \(\sim\) 1.3 R\({}_{\oplus}\) mode of the distribution. Alternatively, if a significant sub-population of planets formed at or beyond the water snow line in the nascent protoplanetary disk before migrating inwards toward the star, the bimodality of the small planet radius distribution would reflect the differing bulk compositions of these two populations[10]. In this scenario, while the small super-Earth mode of the planet radius distribution would still be comprised of rocky planets formed _in situ_, the larger sub-Neptune mode would be comprised of water-rich planets with several tens of percent water by mass[11, 12]. Tests of the origin of the radius valley have so far relied on characterizing the radius distribution of planets as a function of characteristics such as orbital period, host star spectral type, stellar age, and stellar metallicity[13, 14, 15]. Here we propose a new indicator of how the radius valley is sculpted by the photoevaporation of primordial envelopes: measuring the imprint of fractionated mass-loss on the atmospheric compositions of planets along the upper edge of the radius valley. To date, most models of the evolution of sub-Neptune-size planets experiencing atmospheric escape have assumed that the chemical composition of the planetary envelope remains constant over time as the envelope is gradually lost [2; 16; 17]. This approximation may be appropriate in the first \(\sim 100\) Myr of a planet's life when X-ray and extreme-ultraviolet (EUV) driven escape rates are large and helium and metals are dragged along with the escaping hydrogen. However, as the planet and host star age, diffusive separation of the atmospheric constituents may lead to fractionation and preferential loss of hydrogen [18]. Hu et al. (2015) [19] first proposed hydrogen-depleted helium-dominated atmospheres on Neptune- and sub-Neptune-size planets to explain the lack of CH\({}_{4}\) in GJ 436b's emission spectrum. Self-consistent calculations of the coupled thermal, mass-loss, and compositional evolution of primordial planetary envelopes [20] have since shown that -- though GJ 436b itself is too large (4.33 \(\pm\) 0.18 \(R_{\oplus}\)[21]) for photoevaporation to affect its atmospheric composition -- the cumulative effect of preferential loss of hydrogen over billions of years can lead smaller planets (\(\lesssim 3\)\(R_{\oplus}\)) to become helium-enhanced. Here we expand upon the method developed in Malsky & Rogers (2020) [20], and show how fractionated mass loss shapes the compositional evolution of the broader planet population of sub-Neptunes. Using the Modules for Experiments in Stellar Astrophysics (MESA v12778) [22], we simulate the evolution of an extensive grid of sub-Neptune primordial envelopes for 10 Gyr. To isolate the effect of mass-loss evolution on planet metallicity, all planets start with a solar composition atmosphere (\(X=0.74\), \(Y=0.24\), \(Z=0.02\)). We predict that planets on the large-radius edge of the radius valley will be enhanced in helium and depleted in hydrogen if the radius valley is primarily produced through photoevaporative mass loss. Figure 1 shows a radius valley forms in our simulations as some planets retain part of their initial hydrogen-helium envelope (at radii \(\gtrsim 1.6\)\(R_{\oplus}\)) and some are completely stripped of their atmospheres and become remnant cores (at radii \(\lesssim 1.9\)\(R_{\oplus}\)). Billions of years of fractionated atmospheric escape leads to planets that are enhanced in helium and metals relative to their initial conditions, with many planets commonly achieving \(Y\geq 0.4\). To become helium-enhanced (\(Y\geq 0.4\)), planets must undergo extensive mass loss and lose at least 50% of their initial volatile inventory (by mass), yet necessarily still retain a portion of their initial envelope. Therefore, planets that are helium-enhanced -- and hence have lost most but not all of their primordial envelopes -- fall on the upper edge of the radius valley. Helium-enhanced planets on the upper edge of the radius valley are a robust outcome of our simulations, and persist for every combination of host star spectral type (G-dwarf, K-dwarf, and M-dwarf) and homopause temperature (ranging from 3,000 K to 10,000 K) that we explored (SS B.2, SS B.3). While increasing the homopause temperature diminishes the level of hydrogen-helium fractionation in the escaping wind, helium-enhanced planets are obtained even at the highest plausible homopause temperature of 10,000 K [23] after \(\gtrsim 5\) Gyr. The location of helium enhancement shifts in \(M_{p}-R_{p}-F_{p}\) space with host star spectral type, decreasing in radius for planets evolved around lower mass stars. The uncertain parameters in models of photoevaporation (such as the mass-loss efficiency factor) largely shift the location of the radius gap and the helium-enhanced planets in tandem. Thus, benchmarking the predicted helium-enhanced feature in the exoplanet population against the radius gap minimizes the sensitivity to uncertain model parameters. The metallicity of a planet's atmosphere carries a signature of its initial formation process (e.g., the timing and size scale of the accretion of solids [24]). For planets at the edge of the radius gap, subsequent atmospheric evolution imprints an additional metallicity enhancement. The atmospheres of helium-enhanced planets are dramatically enriched in metals by fractionated atmospheric escape. Atmospheric metallicities, log\({}_{10}\)([Fe/H]), can be enhanced by a factor of 200 over their initial values, though factors of 5 to 30 are more typical (Figure 2). function of the extinction cross-section [25] and is measurable from the shapes of individual spectral features, the relative depths of features from the same molecule, and/or the slope of the Rayleigh signature [26]. Comparing atmospheres with identical mean-molecular weights and scale heights (Figure 3), super-solar Y/X cases have lower metallicities (and overall opacities) than solar Y/X cases. Consequently, absorption line cores form deeper in the atmosphere for the helium-enhanced cases, resulting in increased pressure broadening apparent in the width of the sodium and potassium lines. The excess pressure broadening will be more apparent for the single elements lines (especially Na and K) than for the molecular bands, which are blends of many individual vibration-rotation lines. The relative depth of the molecular Rayleigh scattering signature at short wavelengths, compared to the transit depths from molecular absorption in the near infra-red, measures the mixing ratio of spectrally inactive gasses (e.g., H\({}_{2}\), He, and N\({}_{2}\)) in the atmosphere [26]. The low Rayleigh cross section of He causes the lower Rayleigh scattering continuum in the maximum Y/X models in Figure 3. Super-solar Y/X may also shift the equilibrium molecular abundances of spectrally active molecules, decreasing the proportion of CH\({}_{4}\) relative to CO and CO\({}_{2}\) when the number fraction of hydrogen becomes comparable to the number fraction of heavy elements ratio [27] (a level of hydrogen depletion only reached by the most extreme outcomes of our simulations). One complication to using helium enhancement as an observational diagnostic arises if the atmosphere has clouds or haze. For example, some of the biggest spectral differences are expected at shorter wavelengths, where aerosol particles are efficient scatterers. Additional degeneracies between the shape of transmission spectral features and the presence of aerosol layers could significantly complicate the inference of the helium to hydrogen ratio, as shown by the lower panels of Figure 3. However, a detailed study of the degeneracy of clouds and helium enhancement is outside the scope of this work. Helium has been directly detected in the escaping atmospheres of hot Jupiters (HD189733b, WASP-107b, WASP-69b), warm Neptunes (Hat-P-11b, GJ 3470b), and a young sub-Neptune (TOI 560.01) via absorption in the meta-stable helium 1083 nm line of transmitted starlight [28]. Importantly, the fractionation process, whose cumulative effect engenders helium-enhancement in the atmospheres retained by sub-Neptunes, itself causes the escaping winds to be depleted in helium relative to hydrogen. Detecting the time-integrated effects of fractionated escape in the envelopes retained by planets on the upper edge of the radius valley would be complementary to the direct detection of spectral features in the winds currently escaping from sub-Neptunes. A super-solar abundance ratio of helium relative to hydrogen is an observable signature of a planetary envelope of primordial origin that has been sculpted by hydrogen loss (via atmospheric escape and/or H\({}_{2}\)-magma interactions). The ratio of helium to hydrogen in planet-forming disks was set by primordial Big Bang nucleosynthesis and has not been significantly modified since [29]. As a non-reactive noble gas, helium is not incorporated into minerals or ices and thus cannot be accreted by a planet in the form of rocky or icy solids [30]. Due to the low relative cosmic abundances of unstable radioactive nuclides, the amount of helium produced by alpha decays is negligible compared to the helium-enhanced envelopes in our simulations (wherein helium accounts for \(\sim 0.02\%\) of the planet mass). Thus, outgassing and delivery of volatiles by icy pebbles or planetesimals will only dilute the helium-to-hydrogen ratio in planetary atmospheres. Not all planets on the upper edge of the radius gap will necessarily be helium-enhanced. Planets that are less than a few billion years old will not have time to accumulate the effects of preferential hydrogen loss and water worlds may also be possibilities in this parameter space [10]. However, atmospheric helium enhancement presents an important avenue for testing the origins of the radius valley. An observational detection of helium-enhanced planets on the upper edge of the radius valley would break the degeneracy between sub-Neptune planet compositional scenarios, and provide insights into the formation and evolution of this enigmatic and abundant planet population. ## Code availability MESA is publicly available ([http://mesa.sourceforge.net/](http://mesa.sourceforge.net/)). Exo_Transmit is also publicly available ([https://github.com/elizakempton/Exo_Transmit](https://github.com/elizakempton/Exo_Transmit)). ## Appendix A Appendix Methods To model the coupled thermal, mass-loss, and compositional evolution of primordial envelopes surrounding sub-Neptune mass planets, we use the Modules for Experiments in Stellar Astrophysics (MESA v12778) [31; 32; 33; 34; 22]. We follow the modeling approach from Malsky & Rogers (2020) [20] with several additions. First, we now self-consistently model the hydrogen ionization fraction used to calculate the rate of momentum exchange between hydrogen and helium. Second, we have updated our atmospheric boundary conditions within MESA. For each simulated planet, we create an initial MESA planet model with the desired combination of initial total planet mass, core mass, atmospheric composition, and entropy. All models begin with a solar composition primordial envelope surrounding a rocky core with solar proportions of silicates and iron. The \(M_{p}-R_{p}\) relation of Earth-composition rocky cores [35] sets the inner boundary condition of the MESA model of the hydrogen-helium envelope. To set the core luminosity, we assume [36] that the rocky core has a heat capacity of \(c_{v}\)=1.0 J K\({}^{-1}\) g\({}^{-1}\), and include the contribution from the decay of radionuclides, following Chen & Rogers (2016) [17]. ### Atmospheric Boundary Conditions To set the boundary conditions and atmospheric profile, we model the atmosphere up to a optical depth (from the planet's local thermal irradiation) of \(\tau\)=2/3, and implement a grey Eddington T(\(\tau\)) relation with the atm_T_tau_relation option in MESA. We define planetary transit radius (R\({}_{p}\)) to be the location where the pressure is equal to 1.0 mbar. This roughly corresponds to the radii observed by transit surveys [37; 38]. To extrapolate to pressures below the outermost zone in MESA (at approximately 80 millibar) we assume an isothermal temperature profile and a constant value for the mean molecular mass of the atmosphere. As in Malsky & Rogers (2020) [20], we use gaseous mean opacities from Freedman et al. (2014) [39] and model irradiation from the host star by specifying both the incident stellar flux and the column depth that the flux penetrates down to in the planet's atmosphere. We standardize the initial thermal profile of the planet at the beginning of evolution to a "hot start". At the start of the evolution stage the planet envelope cools and gravitationally contracts on a Kelvin-Helmholtz timescale. Over 6.0 Myr the irradiation from the planet's host star is increased from 0 to the full specified irradiation. At 6.0 Myr, the planet has been brought to the correct starting state and begins fractionated mass loss. We define the homopause as the location where the hydrogen-helium binary diffusion coefficient is equal to the eddy diffusion coefficient. Below the homopause radius, turbulence and convection homogenize the planet atmosphere. Above the homopause radius, fractionation of hydrogen and helium can lead to differences in atmospheric abundances. Generally, the homopause radius of the planet is approximately 10% larger than the transit radius of a planet. Throughout this work we adopt a value of K\({}_{zz}=10^{9}\) cm\({}^{2}\) s\({}^{-1}\) for the eddy diffusion coefficient. Increasing (decreasing) the eddy diffusion coefficient by a factor of 10 results in approximately a 5% larger (smaller) homopause radius [20]. ### Photoevaporation During evolution, planets lose mass due to photoevaporation driven by EUV radiation [23; 40]. Ionizing stellar EUV radiation heats the outer layers of the planet envelope (via thermalization of electrons ionized from hydrogen atoms) and drives a hydrodynamic wind from the planet. The EUV flux from the star, which drives the mass loss, decreases exponentially in time as the star evolves. We parameterize the star's EUV luminosity following the equations in Sanz-Forcada et al. (2011) [41] and model fractionated mass loss from photoevaporation following an approach adapted from Hu et al. (2015) [19]. We use \(\Phi\) to denote the total mass loss rate from the planet (mass per time), and \(\phi\) to denote the number fluxes of particles escaping from the planet (particles per area per time). At low EUV fluxes, the overall mass loss is approximated as energy-limited, wherein a fixed fraction of the EUV luminosity impinging on the planet contributes to unbinding mass from the gravitational potential well of the planet. The energy-limited mass-loss rate is \[\Phi_{\rm EL}=\frac{L_{\rm EUV}\eta a^{2}R_{h}^{3}}{4Kd^{2}GM_{p}},\] (A1) where \(L_{\rm EUV}\) is the EUV luminosity, \(M_{p}\) is the mass of the planet, R\({}_{h}\) is the homopause radius, \(K\) is the Roche potential reduction factor [42], \(\eta\) is the heating efficiency, \(d\) is the orbital separation, and \(a\) is the ratio between the EUV absorbing radius and the homopause radius. EUV photons are deposited at a radius corresponding to approximately \(\tau_{\rm EUV}\)=1, which places the EUV absorbing radius within 10% of the planet's homopause radius [43; 44; 45; 46; 19]. When calculating the energy-limited mass loss rate, we adopt \(a=1\) following Hu et al. (2015) to subsume the uncertainty in the ratio between the EUV absorbing radius and the homopause radius into other parameters (namely \(\eta\)) in the energy-limited escape formulation. While the energy-limited escape rate is a good approximation when the escaping wind is subsonic, simulations show that it breaks down when the flow is transonic [47; 48; 49; 50]. At large EUV heating rates (Q\({}_{net}\gtrsim\) 5\(\times\)10\({}^{13}\) - 5\(\times\)10\({}^{14}\) ergs s\({}^{-1}\) in our simulations) the majority of the incident radiation is converted into translational and thermal energy in the atmosphere and the mass loss becomes less efficient. For planets receiving EUV fluxes above the critical minimum heating rate to drive a transonic flow, the mass loss rate saturates and no longer increases with energy input. In this transonic escape regime, we modify the energy-limited escape rate with the efficiency reduction factor, \(f_{r}\), from Johnson et al. (2013)[49], \(\Phi=f_{r}\Phi_{\rm EL}\). ### Fractionation At radii above the homopause, atmospheric constituents separate out by their molecular weight, with the lighter species extending out to higher altitudes due to their larger atmospheric scale heights. The diffusive separation of atmospheric constituents leads heavier species (helium and metals) to be preferentially retained as the hydrogen is lost. It is thus convenient to separate the total mass loss rate, \(\Phi\), into the separate contributions from hydrogen and helium escape \[\Phi=\Phi_{\rm H}+\Phi_{\rm He}=4\pi R_{h}^{2}\left(\phi_{\rm H}m_{\rm H}+ \phi_{\rm He}m_{\rm He}\right),\] (A2) where \(\Phi_{\rm H}\) and \(\Phi_{\rm He}\) are the mass loss rates of hydrogen and helium, and \(\phi_{\rm H}\) and \(\phi_{\rm He}\) are the fluxes of hydrogen and helium particle escaping per unit time and per unit area. The diffusion of helium relative to the escaping hydrogen is characterized by an effective binary diffusion coefficient (b\({}^{\prime}\)) as \[\frac{k\rm T_{H}}{b^{\prime}}=(1-x)\frac{k\rm T_{H}}{b}+x\frac{m_{H}\nu}{n_{ \rm He}}\] (A3) where \(b=1.04\times 10^{18}\rm T^{0.732}cm^{-1}s^{-1}\) is the binary diffusion coefficient between neutral hydrogen and helium [51], \(\nu\) is the ion-neutral momentum transfer collision frequency [52], and \(x\) is the ionization fraction of hydrogen. The first term on the right hand side of Eq A3 reflects the coupling between H and He, while the second term on the right hand side represents the coupling between H\({}^{+}\) and He. We improve upon Malsky & Rogers (2020)[20], which assumed a constant hydrogen ionization fraction of 0.1 [19; 20], by calculating the ionization fraction at the homopause radius at each timestep. This allows us to model the fractionation between hydrogen and helium for varying homopause temperatures and pressures. The fractionated escape fluxes of hydrogen and helium particles are approximated as \[\frac{\phi_{\rm He}}{X_{\rm He}}=\frac{\phi_{\rm H}}{X_{\rm H}}-\frac{GM_{p}( m_{\rm He}-m_{\rm H})b^{\prime}}{R_{h}^{2}kT_{H}},\] (A4) where \(X_{\rm H}\) and \(X_{\rm He}\) are the mixing ratios of hydrogen and helium at the homopause, \(k\) is the Boltzmann constant, m\({}_{\rm H}\) is the mass of a hydrogen atom, and m\({}_{\rm He}\) is the mass of a helium atom. The second term on the right hand side of equation A4 is denoted by Hu et al. (2015) as the diffusion-limited escape rate \(\phi_{\rm DL}\), \[\phi_{\rm DL}=\frac{GM_{p}(m_{\rm He}-m_{\rm H})b^{\prime}}{R_{h}^{2}kT_{H}}.\] (A5) We note that this definition differs slightly from the diffusion-limiting flux of hydrogen escaping through a stationary background atmosphere defined by Hunten (1973)[53], which is related to the expression in Equation A5 by \(X_{\rm H}\phi_{\rm DL}\). Equations A2 and A4 together reveal that the extent of the fractionation is divided into two regimes determined by the mass loss rate. When the mass loss rate is large compared to the diffusion-limited escape rate (\(\phi_{\rm H}/X_{\rm H}\gg\phi_{\rm DL}\)), hydrogen and helium are lost in approximately equal proportion to their mixing ratios at the homopause radius. During this rapid evaporation stage, hydrogen and helium are strongly coupled and relatively little helium enhancement occurs. As the escape rate decreases and approaches the diffusion limited mass escape rate of hydrogen, the escaping wind from the planet becomes more and more enriched in hydrogen relative to helium. Once the mass loss rate decreases below the critical diffusion limited mass loss rate, only hydrogen escapes. After each time step in the evolution of the MESA planet model, we update composition of the remaining envelope that was retained by the planet to reflect the differing amounts of hydrogen and helium that were lost. This is accomplished as part of the extras_finish_step routine, as described in Malsky & Rogers (2020)[20]. ### Grid Sub-Neptune Evolution Models The grid of planets modeled has 17 masses from 4.0 to 20.0 M\({}_{\odot}\), 25 initial envelope mass fraction from 0.001 to 0.01, and 30 orbital separations from 0.01 to 0.3 au. Additionally, for each of these parameterizations we model planets orbiting G stars with T\({}_{eff}\) = 6,000 K, M\({}_{*}\) = 1.0 M\({}_{\odot}\), and R\({}_{*}\) = 1.0 R\({}_{\odot}\), K stars with T\({}_{eff}\) = 4,780, M\({}_{*}\) = 0.75 M\({}_{\odot}\), and R\({}_{*}\) = 0.73 R\({}_{\odot}\), and M stars with T\({}_{eff}\) = 3,600 K, M\({}_{*}\) = 0.2 M\({}_{\odot}\), and R\({}_{*}\) = 0.30 R\({}_{\odot}\). For each set of models we simulated homopause temperatures of 3,000 K and 10,000 K for a total of 76,500 planet models. ### Chemistry of Helium-Enhanced Atmospheres In order to understand how helium enhancement manifests in observations, we simulate transmission spectra for a number of atmospheric compositions. Figure 4 shows the spread in compositions after 10 Gyr of mass loss. We selected compositions with highest helium/hydrogen enhancement at metallicities of 10x solar and 100x solar. Next, we took the relative abundances of atmospheric constituents from Lodders (2003)[54] and scaled them to the helium and metal enhancements of our two selected models. Then, we found atmospheric compositions with solar helium to hydrogen ratios that matched the mean molecular weight of the 10x solar and 100x solar metallicity helium enhanced models, as shown in Figure 5. We calculate abundances in thermochemical equilibrium for the most important atmospheric absorbers over a grid of temperature and pressure (i.e. equation of state (EOS) tables in Exo_Transmit format for representative X / Y / Z compositions in our model grid), using the methods outlined in Mbarek & Kempton (2016)[55]. The abundances of key species are shown in Figure 6. Importantly, these new EOS tables highlight that the helium enhanced atmospheres have much lower metallicities for constant mean molecular weights. We benchmarked our solar helium to hydrogen ratio tables against the ones included in Exo_Transmit and found perfect agreement. Finally, we choose a representative temperature-pressure profile for the distribution of surface gravities, radii, and equilibrium temperatures of helium enhanced planets found in our simulations (Figure 7) and extrapolate an isothermal upper atmosphere extending from the transit radius to 0.1 mbar to calculate transmission spectra. ## Appendix B Supplemental Results ### Candidates for Helium Enhancement Figure 8 shows the 3-dimensional volume of planetary mass-radius-incident flux (\(M_{p}-R_{p}-F_{p}\)) parameter space in which helium enhancement is found. Helium-enhanced planets reside in a narrow arc of \(M_{p}-R_{p}-F_{p}\) parameter space with radii between 1.6 and 2.5 R\({}_{\oplus}\), incident flux rates between 10 F\({}_{\oplus}\) and 800 F\({}_{\oplus}\), and masses from 4.0 to 20.0 M\({}_{\oplus}\). As planets age and lose hydrogen preferentially the parameter space for helium enhancement expands. To determine the mass-radius-flux parameter space for helium enhanced planets (as shown in Figure 8) we found the minimum flux necessary for helium enhancement at each mass and radius. Because of differences in initial envelope mass fractions, there may be multiple helium enhanced model evolution tracks that lead to the same planet mass and radius at a given age. First, for each host star type and homopause temperature, we filter the population of sub-Neptunes to include only planets that have Y \(\geq\) 0.4. Next, we interpolated over the filtered models using a radial basis function[56] to find the flux at each point in the mass-radius parameter space and fit the upper and lower \(M_{p}-R_{p}\) relations with logarithmic functions as they best matched the bounds of our modeled planets. Planets were included as helium enhancement candidates if they had fluxes between 0.1 and 10x the flux value of our \(M_{p}-R_{p}\) interpolation. The flux value for each planet was calculated using parameters from the NASA Exoplanet Archive[57]. To prioritize helium enhanced exoplanet candidates based on observability, we calculate a transmission spectroscopy metric[58] (TSM) score for each planet, shown in Table 1. This score is a measure of the quality of a candidate for atmospheric characterization, with higher scores meaning that a planet is more readily accessible: \[\text{TSM}=(\text{Scale factor})\times\frac{\text{R}_{p}^{3}\text{T}_{\text{ eq}}}{\text{M}_{p}\text{R}_{*}^{2}}\times 10^{-\text{m}_{\text{J}}/5}\] (B6) where R\({}_{*}\) is stellar radius, \(T_{eq}\) is the planet's equilibrium temperature assuming zero albedo, and m\({}_{J}\) is J band apparent magnitude of the host star. We adopt a scale factor of 1.26 for planets with radii between 1.5 R\({}_{\oplus}\) and 2.75 R\({}_{\oplus}\)[58]. Among the exoplanets discovered orbiting G, K, or M stars, there are a number of candidates for helium-enhanced atmospheres. Table 1 shows the relevant properties for each planet in the \(M_{p}-R_{p}-F_{p}\) parameter space for which we predict helium enhancement after 10 Gyr of fractionated mass loss. The measured properties of these planets overlap (within their 1-sigma measurements uncertainties) with the \(M_{p}-R_{p}-F_{p}\) parameter space in which we find helium-enhanced planets. There are a number of candidates around G and K star planets. However, as yet we found no candidates for helium enhancement around M stars. ### Homopause Temperature As the temperature at the homopause increases, the effects of fractionation decrease. First, the coupling of neutral hydrogen and helium increases with increasing temperature [51; 52]. Second, hydrogen ionization increases with increasing temperature, and ionized hydrogen is more strongly coupled with helium than neutral hydrogen. In order to quantify how helium enhancement changes with homopause temperature (T\({}_{\rm H}\)), we simulate planet evolution with a lower estimate (3,000 K), and an upper estimate (10,000 K). Homopause temperatures above 10,000 K are unphysical as Lyman-\(\alpha\) cooling thermostats the upper atmosphere temperatures [23]. Previous work simulating the thermospheres of sub-Neptunes have used 3,000 K as a lower bound [59], and cooler values would only further increase the level of helium enhancement in sub-Neptunes. Figure 9 shows a population of simulated planets evolved with homopause temperatures of 10,000 K. Compared to Figure 2, which shows planets simulated with homopause temperatures of 3,000 K, these planets have less extreme helium and metal enhancement, and became helium enhanced at older ages (generally after 5 Gyr). Nonetheless, many of these planets still attain atmospheric helium mass fractions greater than 0.40 and even as extreme as 0.80. The robustness of helium enhancement for higher homopause temperatures is also paralleled in Figures 10 and 11, which show the flux-radius and mass-radius relationship for helium enhanced planets with homopause temperatures of 10,000 K. ### Stellar Type Helium enhancement is a prominent feature of populations of sub-Neptune mass planets that evolve around G, K, and M type stars. Host star spectral type affects the mass-loss history of planets (at specified initial planet mass, envelope mass fraction, and irradiation flux) in two ways. Lower mass stars have a higher ratio of EUV luminosity to \begin{table} \begin{tabular}{l l l l l l} \hline \hline \multicolumn{1}{c}{ Name} & \multicolumn{1}{c}{Mass (M\({}_{\oplus}\))} & \multicolumn{1}{c}{Radius (R\({}_{\oplus}\))} & \multicolumn{1}{c}{Flux (F\({}_{\oplus}\))} & \multicolumn{1}{c}{J Band Magnitude} & \multicolumn{1}{c}{TSM} \\ \hline \hline \multicolumn{6}{c}{_G Star Planets_} \\ \hline \hline EPIC 249893012 b & 8.75 & 1.95 & 1032.49 & 10.22 & 26.74 \\ HD 136352 b & 4.72 & 1.66 & 111.65 & 4.31 & 601.73 \\ HD 86226 c & 7.25 & 2.16 & 486.81 & 6.84 & 618.47 \\ K2-111 b & 5.29 & 1.82 & 479.92 & 9.77 & 56.06 \\ K2-38 c & 9.90 & 2.42 & 128.32 & 9.91 & 131.29 \\ Kepler-18 b & 6.99 & 2.00 & 451.56 & 12.19 & 34.30 \\ TOI-1062 b & 10.15 & 2.26 & 188.67 & 8.78 & 447.62 \\ TOI-763 b & 9.79 & 2.28 & 178.10 & 8.86 & 350.75 \\ \hline \hline \multicolumn{6}{c}{_K Star Planets_} \\ \hline \hline TOI-1235 b & 5.90 & 1.69 & 60.13 & 8.71 & 360.17 \\ TOI-1749 c & 14.00 & 2.12 & 34.88 & 11.07 & 298.06 \\ TOI-178 c & 4.77 & 1.67 & 96.08 & 9.37 & 249.03 \\ \hline \hline \multicolumn{6}{c}{_M Star Planets_} \\ \multicolumn{6}{c}{None} \\ \hline \hline \end{tabular} \end{table} Table 1: All planets in the \(M_{p}-R_{p}-F_{p}\) parameter space for which we predict helium enhancement after 10 Gyr of fractionated mass loss. Masses, radii, and orbital separations are all taken from the NASA Exoplanet Archive [57]. The data were retrieved on February 14th, 2022. Planets with radii larger than 1.5 R\({}_{\oplus}\) and TSM scores above 90 are high quality candidates for atmospheric characterization [58]. We use the SAG 13 definitions for the effective temperature cutoffs for G, K, and M stars. total bolometric luminosity (with L\({}_{\rm EUV}\) / L\({}_{\rm BOL}\) equal to \(\sim\) 4, 9, and 70 at 5 Gyr for our simulated G, K, and M stars respectively). Stellar tidal forces are a second factor contributing to differences in planet evolution tracks with host star spectral type. For the same instellation, \(F_{p}\), planets orbiting lower mass, less luminous stars have closer orbital separations \(d\) and smaller Roche lobe radii. The closer proximity of the Roche lobe boundary to the planet further enhances the mass loss rates for planets orbiting K or M stars compared to those orbiting G stars. Planets with lower mass host stars have smaller Roche potential reduction factors \(K^{42}\), which in turn increases the mass loss rate \(\Phi_{\rm EL}\) in Equation 11. Observations have shown that the location of the radius valley shifts to smaller radii for planets evolved around lower mass stars [60; 61]. Figures 12, 13, 14, 15, 16, and 17 show the \(M_{p}-R_{p}-F_{p}\) parameter space of sub-Neptunes evolved with fractionated mass loss around K and M type stars. Compared to planets evolved around G type stars, planets orbiting cooler stars become helium enhanced at lower instellations, have smaller radii, and slightly less metal enhancement. For G stars we find the radius valley extended from approximately 1.6 R\({}_{\oplus}\) to 2.2 R\({}_{\oplus}\) and is approximately 0.2 R\({}_{\oplus}\) wide. In comparison, the radius valley for planets evolved around K and M type stars is narrower, as shown in Figures 16, and 17. ### Mass Loss Rates During the transonic escape regime, our models have escape rates of between 4\(\times\)10\({}^{8}\) to 2\(\times\)10\({}^{9}\)g s\({}^{-1}\). Due to the translational and thermal energy losses, the escape rate remains nearly constant up to an age of approximately 1 Gyr. As the incident EUV decreases, the mass loss becomes energy limited and subsequently decreases approximately following a power law in time. By 10 Gyr the mass loss rates range from \(\sim\) 5\(\times\)10\({}^{6}\) to 2\(\times\)10\({}^{9}\)g s\({}^{-1}\). Therefore, planets with small (i.e., initial f\({}_{\rm env}\)\(\leq\) 0.01) envelopes can lose the majority their primordial envelopes. For example, a 10.0 M\({}_{\oplus}\) planet with an initial envelope mass fraction of 0.5% has an initial envelope mass of \(\sim\)3.0\(\times\)10\({}^{26}\) g. A sustained mass loss rate of 1\(\times\)10\({}^{9}\) g s\({}^{-1}\) over 5.0 Gyr causes just over 50% of the envelope mass to be lost. Changing the mass loss efficiency factor has a large effect on the mass loss rate for sub-Neptunes. However, the mass loss efficiency factor is degenerate with orbital separation. Increasing the mass loss efficiency merely moves the parameter space in which planets become helium enhanced to larger orbital separations. Furthermore, the mass loss efficiency is not well constrained within the field [23; 2; 62]. Throughout this work we assume a constant value of 10% following Malsky & Rogers (2020)[20]. ### Remnant Cores In our simulations, a number of planets were stripped of nearly their entire envelope and failed to evolve for the full 10 Gyr that we simulated. We call these planets remnant cores and define them as any planet which failed to evolve past 2.5 Gyr. We assign these remnant cores radii equal to that of their rocky cores [35]. We find remnant cores for fluxes between 14 and 1100 F\({}_{\oplus}\) for G stars, between 6 and 600 F\({}_{\oplus}\) for K stars, and between 0.4 and 110 F\({}_{\oplus}\) for M stars. When we compare the population of remnant cores to the helium enhanced planets, we see a clear bifurcation. Remnant cores have radii less than 2.2 R\({}_{\oplus}\), and occupy a \(F_{\oplus}-R_{\oplus}\) parameter space below that of helium enhancement. Figures 1, 13, and 16 show the population of helium enhanced planets, juxtaposed against the population of remnant cores. The largest rocky cores have radii that are equal to the smallest helium enhanced planets. These simulations were formed for a broad grid of initial conditions and we do not make any attempt to fine tune or match the empirical radius valley. The initial mass distribution of planets will affect the actual radius distribution of remnant cores/helium enhanced planets achieved.
2302.14531
Finite sample inference for empirical Bayesian methods
In recent years, empirical Bayesian (EB) inference has become an attractive approach for estimation in parametric models arising in a variety of real-life problems, especially in complex and high-dimensional scientific applications. However, compared to the relative abundance of available general methods for computing point estimators in the EB framework, the construction of confidence sets and hypothesis tests with good theoretical properties remains difficult and problem specific. Motivated by the universal inference framework of Wasserman et al. (2020), we propose a general and universal method, based on holdout likelihood ratios, and utilizing the hierarchical structure of the specified Bayesian model for constructing confidence sets and hypothesis tests that are finite sample valid. We illustrate our method through a range of numerical studies and real data applications, which demonstrate that the approach is able to generate useful and meaningful inferential statements in the relevant contexts.
Hien D Nguyen, Mayetri Gupta
2023-02-28T12:42:22Z
http://arxiv.org/abs/2302.14531v1
# Finite sample inference for empirical Bayesian methods ###### Abstract In recent years, empirical Bayesian (EB) inference has become an attractive approach for estimation in parametric models arising in a variety of real-life problems, especially in complex and high-dimensional scientific applications. However, compared to the relative abundance of available general methods for computing point estimators in the EB framework, the construction of confidence sets and hypothesis tests with good theoretical properties remains difficult and problem specific. Motivated by the universal inference framework of Wasserman et al. (2020), we propose a general and universal method, based on holdout likelihood ratios, and utilizing the hierarchical structure of the specified Bayesian model for constructing confidence sets and hypothesis tests that are finite sample valid. We illustrate our method through a range of numerical studies and real data applications, which demonstrate that the approach is able to generate useful and meaningful inferential statements in the relevant contexts. ## 1 Introduction Let \(\mathbf{D}_{n}=\left(\boldsymbol{X}_{i}\right)_{i\in[n]}\) be our data, presented as a sequence of \(n\in\mathbb{N}=\left\{1,2,\dots\right\}\) random variables \(\boldsymbol{X}_{i}\in\mathbb{X}\) (\(i\in[n]=\left\{1,\dots,n\right\}\)). For each \(i\in[n]\), let \(\boldsymbol{\Theta}_{i}\in\mathbb{T}\) be a random variable with probability density function (PDF) \(\pi\left(\boldsymbol{\theta}_{i};\boldsymbol{\psi}\right)\), where \(\boldsymbol{\psi}\in\mathbb{P}\) is a hyperparameter. Furthermore, suppose that \(\left[\boldsymbol{X}_{i}|\boldsymbol{\Theta}_{i}=\boldsymbol{\theta}_{i}\right]\) arises from a family of data generating processes (DGPs) with conditional PDFs \[f\left(\boldsymbol{x}_{i}|\boldsymbol{\Theta}_{i}=\boldsymbol{\theta}_{i} \right)=f\left(\boldsymbol{x}_{i}|\boldsymbol{\theta}_{i}\right),\] and that the sequence \(\left(\left(\boldsymbol{X}_{i},\boldsymbol{\Theta}_{i}\right)\right)_{i\in[n]}\) is independent. Suppose that \(\left(\boldsymbol{\Theta}_{i}\right)_{i\in[n]}\) is realized at \(\boldsymbol{\vartheta}_{n}^{*}=\left(\boldsymbol{\theta}_{i}^{*}\right)_{i\in[ n]}\), where each realization \(\boldsymbol{\theta}_{i}^{*}\) (\(i\in[n]\)) is unknown, and where \(\boldsymbol{\psi}\) is also unknown. Let \(\mathbb{I}\subset[n]\), and write \(\boldsymbol{\vartheta}_{1}^{*}=\left(\boldsymbol{\theta}_{i}^{*}\right)_{i \in\mathbb{I}}\). When \(\mathbb{I}=\left\{i\right\}\), we shall use the shorthand \(\mathbb{I}=i\), where it causes no confusion. Under this setup, for significance level \(\alpha\in\left(0,1\right)\), we wish to draw inference regarding the realized sequence \(\boldsymbol{\vartheta}_{n}^{*}\) by way of constructing \(100\left(1-\alpha\right)\%\) confidence sets \(\mathcal{C}_{i}^{\alpha}\left(\mathbf{D}_{n}\right)\) that satisfy: \[\Pr_{\boldsymbol{\theta}_{i}^{*}}\left[\boldsymbol{\theta}_{i}^{*}\in \mathcal{C}_{i}^{\alpha}\left(\mathbf{D}_{n}\right)\right]\geq 1-\alpha, \tag{1}\] and \(p\)-values \(P_{\mathbb{I}}\left(\mathbf{D}_{n}\right)\) for testing null hypotheses \(\mathrm{H}_{0}:\boldsymbol{\vartheta}_{\mathbb{I}}^{*}\in\mathbb{T}_{\mathbb{ I},0}\subset\mathbb{T}^{\left|\mathbb{I}\right|}\) that satisfy: \[\sup_{\boldsymbol{\vartheta}_{\mathbb{I}}^{*}\in\mathbb{T}_{\mathbb{I},0}}\Pr _{\boldsymbol{\vartheta}_{\mathbb{I}}^{*}}\left[P_{\mathbb{I}}\left(\mathbf{ D}_{n}\right)\leq\alpha\right]\leq\alpha, \tag{2}\] where \(\Pr_{\boldsymbol{\theta}_{i}^{*}}\) and \(\Pr_{\boldsymbol{\vartheta}_{\mathbb{I}}^{*}}\) denote probability measures consistent with the PDF \(f\left(\boldsymbol{x}_{i}|\boldsymbol{\theta}_{i}^{*}\right)\), for each \(i\in\left[n\right]\), and for all \(i\in\mathbb{I}\), respectively. That is, for a measurable set \(\mathcal{A}\subset\mathbb{X}^{n}\), and assuming absolute continuity of \(\Pr_{\boldsymbol{\vartheta}_{\mathbb{I}}^{*}}\) with respect to some measure \(\mathfrak{m}\) (typically the Lebesgue or counting measure), we can write \[\Pr_{\boldsymbol{\vartheta}_{\mathbb{I}}^{*}}\left(\mathcal{A}\right)=\int_{ \mathcal{A}}\prod_{i\in\mathbb{I}}f\left(\boldsymbol{x}_{i}|\boldsymbol{ \theta}_{i}^{*}\right)\prod_{j\notin\mathbb{I}}f\left(\boldsymbol{x}_{j}| \boldsymbol{\theta}_{j}\right)\mathrm{d}\mathfrak{m}\left(\mathbf{d}_{n} \right), \tag{3}\] where \(\boldsymbol{\theta}_{j}\) is an arbitrary element of \(\mathbb{T}\), for each \(j\notin\mathbb{I}\). The setup above falls within the framework of empirical Bayesian (EB) inference, as exposited in the volumes of Maritz and Lwin (1989); Ahmed and Reid (2001); Serdobolskii (2008); Efron (2010), and Bickel (2020). Over the years, there has been a sustained interest in the construction and computation of EB point estimators for \(\boldsymbol{\vartheta}_{n}^{*}\), in various contexts, with many convenient and general computational tools now made available, for instance, via the software of Johnstone and Silverman (2005); Leng et al. (2013); Koenker and Gu (2017), and Narasimhan and Efron (2020). Unfortunately, the probabilistic properties of \(\boldsymbol{\vartheta}_{n}^{*}\) tend to be difficult to characterize, making the construction of confidence sets and hypothesis tests with good theoretical properties relatively less routine than the construction of point estimators. When restricted to certain classes of models, such constructions are nevertheless possible, as exemplified by the works of Casella and Hwang (1983); Morris (1983a); Laird and Louis (1987); Datta et al. (2002); Tai and Speed (2006); Hwang et al. (2009); Hwang and Zhao (2013), and Yoshimori and Lahiri (2014), among others. In this work, we adapt the universal inference framework of Wasserman et al. (2020) to produce valid confidence sets and \(p\)-values with properties (1) and (2), respectively, for arbitrary estimators of \(\boldsymbol{\vartheta}_{n}^{*}\). As with the constructions of Wasserman et al. (2020), the produced inferential methods are all valid for finite sample size \(n\) and require no assumptions beyond correctness of model specification. The confidence sets and \(p\)-values arise by construction of holdout likelihood ratios that can be demonstrated to have the \(e\)-value property, as described in Vovk and Wang (2021) (see also the \(s\)-values of Grunwald et al., 2020 and the betting values of Shafer, 2021). Here, we are able to take into account the hierarchical structure of the Bayesian specified model by using the fact that parameterized \(e\)-values are closed when averaged with respect to an appropriate probability measure (cf. Vovk, 2007 and Kaufmann and Koolen, 2018). Due to the finite sample correctness of our constructions, we shall refer to our methods as finite sample EB (FSEB) techniques. Along with our methodological developments, we also demonstrate the application of our FSEB techniques in numerical studies and real data applications. These applications include the use of FSEB methods for constructing confidence intervals (CIs) for the classic mean estimator of Stein (1956), and testing and CI construction in Poisson-gamma models and Beta-binomial models, as per Koenker and Gu (2017) and Hardcastle and Kelly (2013), respectively. Real data applications are demonstrated via the analysis of insurance data from Haastrup (2000) and differential methylation data from Cruickshanks et al. (2013). In these real and synthetic applications, we show that FSEB methods, satisfying conditions (1) and (2), are able to generate useful and meaningful inferential statements. We proceed as follows. In Section 2, we introduce the confidence set and \(p\)-value constructions for drawing inference regarding EB models. In Section 3, numerical studies of simulated data are used to demonstrate the applicability and effectiveness of FSEB constructions. In Section 4, FSEB methods are applied to real data to further show the practicality of the techniques. Lastly, in Section 5, we provide discussions and conclusions regarding our results. ## 2 Confidence sets and hypothesis tests We retain the notation and setup from Section 1. For each subset \(\mathbb{I}\subset[n]\), let us write \(\mathbf{D}_{\mathbb{I}}=\left(\boldsymbol{X}_{i}\right)_{i\in\mathbb{I}}\) and \(\overline{\mathbf{D}}_{\mathbb{I}}=\left(\boldsymbol{X}_{i}\right)_{i\in[n] \backslash\mathbb{I}}\). Suppose that we have available some estimator of \(\boldsymbol{\psi}\) that only depends on \(\overline{\mathbf{D}}_{\mathbb{I}}\) (and not \(\mathbf{D}_{\mathbb{I}}\)), which we shall denote by \(\hat{\boldsymbol{\psi}}_{\mathbb{I},n}\). Furthermore, for fixed \(\boldsymbol{\psi}\), write the integrated and unintegrated likelihood of the data \(\mathbf{D}_{\mathbb{I}}\), as \[L_{\mathbb{I}}\left(\boldsymbol{\psi}\right)=\prod_{i\in\mathbb{I}}\int_{ \mathbb{I}}f\left(\boldsymbol{X}_{i}|\boldsymbol{\theta}_{i}\right)\pi\left( \boldsymbol{\theta}_{i};\boldsymbol{\psi}\right)\mathrm{d}\mathfrak{n}( \boldsymbol{\theta}_{i}) \tag{4}\] and \[l_{\mathbb{I}}\left(\boldsymbol{\vartheta}_{\mathbb{I}}\right)=\prod_{i\in \mathbb{I}}f\left(\boldsymbol{X}_{i}|\boldsymbol{\theta}_{i}\right), \tag{5}\] respectively, where \(\boldsymbol{\vartheta}_{\mathbb{I}}=\left(\boldsymbol{\theta}_{i}\right)_{i \in\mathbb{I}}\) (here, \(\boldsymbol{\vartheta}_{\{i\}}=\boldsymbol{\theta}_{i}\)). We note that in (4), we have assumed that \(\pi(\cdot;\boldsymbol{\psi})\) is a density function with respect to some measure on \(\mathbb{T}\), \(\mathfrak{n}\). Define the ratio statistic: \[R_{\mathbb{I},n}\left(\mathbf{\vartheta}_{\mathbb{I}}\right)=L_{\mathbb{I}}\left( \hat{\mathbf{\psi}}_{\mathbb{I},n}\right)/l_{\mathbb{I}}\left(\mathbf{\vartheta}_{ \mathbb{I}}\right), \tag{6}\] and consider sets of the form \[\mathcal{C}_{i}^{\alpha}\left(\mathbf{D}_{n}\right)=\left\{\mathbf{\theta}\in \mathbb{T}:R_{i,n}\left(\mathbf{\theta}\right)\leq 1/\alpha\right\}.\] The following Lemma is an adaptation of the main idea of Wasserman et al. (2020) for the context of empierical Bayes estimators, and allows us to show that \(\mathcal{C}_{i}^{\alpha}\left(\mathbf{D}_{n}\right)\) satisfies property (1). **Lemma 1**.: _For each \(\mathbb{I}\subset[n]\) and fixed sequence \(\mathbf{\vartheta}_{n}^{*}\in\mathbb{T}^{n}\), \(\mathrm{E}_{\mathbf{\vartheta}_{\mathbb{I}}^{*}}\left[R_{\mathbb{I},n}\left(\mathbf{ \vartheta}_{\mathbb{I}}^{*}\right)\right]=1\)._ Proof.: Let \(\mathbf{d}_{\mathbb{I}}\) and \(\bar{\mathbf{d}}_{\mathbb{I}}\) be realizations of \(\mathbf{D}_{\mathbb{I}}\) and \(\overline{\mathbf{D}}_{\mathbb{I}}\), respectively. Then, using (3), write \[\mathrm{E}_{\mathbf{\theta}_{\mathbb{I}}^{*}}\left[R_{\mathbb{I},n} \left(\mathbf{\vartheta}_{\mathbb{I}}^{*}\right)\right] =\int_{\mathbb{X}^{n}}R_{\mathbb{I},n}\left(\mathbf{\vartheta}_{ \mathbb{I}}^{*}\right)\prod_{i\in\mathbb{I}}f\left(\mathbf{x}_{i}|\mathbf{\theta}_{i}^ {*}\right)\prod_{j\notin\mathbb{I}}f\left(\mathbf{x}_{j}|\mathbf{\theta}_{j}\right) \mathrm{dm}\left(\mathbf{d}_{n}\right)\] \[\underset{\text{(ii)}}{=}\int_{\mathbb{X}^{n-|\mathbb{I}|}}\int_ {\mathbb{X}^{|\mathbb{I}|}}L_{\mathbb{I}}\left(\hat{\mathbf{\psi}}_{\mathbb{I},n} \right)\prod_{i\in\mathbb{I}}f\left(\mathbf{x}_{i}|\mathbf{\theta}_{i}^{*}\right) \mathrm{dm}\left(\mathbf{d}_{\mathbb{I}}\right)\prod_{j\notin\mathbb{I}}f \left(\mathbf{x}_{j}|\mathbf{\theta}_{j}\right)\mathrm{dm}\left(\bar{\mathbf{d}}_{ \mathbb{I}}\right)\] \[\underset{\text{(iii)}}{=}\int_{\mathbb{X}^{n-|\mathbb{I}|}}\prod_ {j\notin\mathbb{I}}f\left(\mathbf{x}_{j}|\mathbf{\theta}_{j}\right)\mathrm{dm}\left( \bar{\mathbf{d}}_{\mathbb{I}}\right)\] \[\underset{\text{(iv)}}{=}1.\] Here, (i) is true by definition of (6), (ii) is true by definition of (5), (iii) is true by the fact that (4) is a probability density function on \(\mathbb{X}^{|\mathbb{I}|}\), with respect to \(\mathfrak{m}\), and (iv) is true by the fact that \(\prod_{j\notin\mathbb{I}}f\left(\mathbf{x}_{j}|\mathbf{\theta}_{j}\right)\) is a probability density function on \(\mathbb{X}^{n-|\mathbb{I}|}\), with respect to \(\mathfrak{m}\). **Proposition 1**.: _For each \(i\in[n]\), \(\mathcal{C}_{i}^{\alpha}\left(\mathbf{D}_{n}\right)\) is a \(100\left(1-\alpha\right)\%\) confidence set, in the sense that_ \[\mathrm{Pr}_{\mathbf{\theta}_{i}^{*}}\left[\mathbf{\theta}_{i}^{*}\in\mathcal{C}_{i}^ {\alpha}\left(\mathbf{D}_{n}\right)\right]\geq 1-\alpha.\] Proof.: For each \(i\), Markov's inequality states that \[\mathrm{Pr}_{\mathbf{\theta}_{i}^{*}}\left[R_{i,n}\left(\mathbf{\theta}_{i}^{*} \right)\geq 1/\alpha\right]\leq\alpha\mathrm{E}_{\mathbf{\theta}_{i}^{*}}\left[R_{i,n} \left(\mathbf{\theta}_{i}^{*}\right)\right]=\alpha,\] which implies that \[\mathrm{Pr}_{\mathbf{\theta}_{i}^{*}}\left[\mathbf{\theta}_{i}^{*}\in\mathcal{C}_{i}^ {\alpha}\left(\mathbf{D}_{n}\right)\right]=\mathrm{Pr}_{\mathbf{\theta}_{i}^{*}} \left[R_{i,n}\left(\mathbf{\theta}_{i}^{*}\right)\leq 1/\alpha\right]\geq 1-\alpha\] by Lemma 1. Next, we consider the testing of null hypotheses \(\mathrm{H}_{0}\): \(\boldsymbol{\vartheta}_{\mathbb{I}}^{*}\in\mathbb{T}_{\mathbb{I},0}\) against an arbitrary alternative \(\mathrm{H}_{1}\): \(\boldsymbol{\vartheta}_{\mathbb{I}}^{*}\in\mathbb{T}_{\mathbb{I},1}\subseteq \mathbb{T}^{[\mathbb{I}]}\). To this end, we define the maximum unintegrated likelihood estimator of \(\boldsymbol{\vartheta}_{\mathbb{I}}^{*}\), under \(\mathrm{H}_{0}\) as \[\tilde{\boldsymbol{\vartheta}}_{\mathbb{I}}\in\left\{\tilde{\boldsymbol{ \vartheta}}_{\mathbb{I}}\in\mathbb{T}_{\mathbb{I},0}:l_{\mathbb{I}}\left( \tilde{\boldsymbol{\vartheta}}_{\mathbb{I}}\right)=\sup_{\boldsymbol{ \vartheta}_{\mathbb{I}}\in\mathbb{T}_{\mathbb{I},0}}l_{\mathbb{I}}\left( \boldsymbol{\vartheta}_{\mathbb{I}}\right)\right\}. \tag{7}\] Using (7), and again letting \(\hat{\boldsymbol{\psi}}_{\mathbb{I},n}\) be an arbitrary estimator of \(\boldsymbol{\psi}\), depending only on \(\overline{\mathbf{D}}_{\mathbb{I}}\), we define the ratio test statistic \[T_{\mathbb{I}}\left(\mathbf{D}_{n}\right)=L_{\mathbb{I}}\left(\hat{\boldsymbol {\psi}}_{\mathbb{I},n}\right)/l_{\mathbb{I}}\left(\tilde{\boldsymbol{ \vartheta}}_{\mathbb{I}}\right).\] The following result establishes the fact that the \(p\)-value \(P_{\mathbb{I}}\left(\mathbf{D}_{n}\right)=1/T_{\mathbb{I}}\left(\mathbf{D}_{n}\right)\) has the correct size, under \(\mathrm{H}_{0}\). **Proposition 2**.: _For any \(\alpha\in(0,1)\) and \(\boldsymbol{\vartheta}_{\mathbb{I}}^{*}\in\mathbb{T}_{\mathbb{I},0}\), \(\Pr_{\boldsymbol{\vartheta}_{\mathbb{I}}^{*}_{\mathbb{I}}}\left[P_{\mathbb{I} }\left(\mathbf{D}_{n}\right)\leq\alpha\right]\leq\alpha\)._ Proof.: Assume that \(\boldsymbol{\vartheta}_{\mathbb{I}}^{*}\in\mathbb{T}_{\mathbb{I},0}\). By Markov's inequality, we have \[\Pr_{\boldsymbol{\vartheta}_{\mathbb{I}}^{*}}\left[T_{\mathbb{I}} \left(\mathbf{D}_{n}\right)\geq 1/\alpha\right] \leq\alpha\mathrm{E}_{\boldsymbol{\vartheta}_{\mathbb{I}}^{*}} \left[T_{\mathbb{I}}\left(\mathbf{D}_{n}\right)\right]\] where the (i) is true due to the fact that \(l_{\mathbb{I}}\left(\tilde{\boldsymbol{\vartheta}}_{\mathbb{I}}\right)\geq l _{\mathbb{I}}\left(\boldsymbol{\vartheta}_{\mathbb{I}}^{*}\right)\), by the definition of (7), and the (ii) is true due to Lemma 1. We note that Propositions 1 and 2 are empirical Bayes analogues of Theorems 1 and 2 from Wasserman et al. (2020), which provide guarantees for universal inference confidence set and hypothesis test constructions, respectively. Furthermore, the use of Lemma 1 in the proofs also imply that the CIs constructed via Proposition 1 are \(e\)-CIs, as defined by Xu et al. (2022), and the \(p\)-values obtained via Proposition 2 can be said to be \(e\)-value calibrated, as per the definitions of Wang and Ramdas (2022). ## 3 FSEB examples and some numerical results To demonstrate the usefulness of the FSEB results from Section 2, we shall present a number of synthetic and real world applications of the confidence and testing constructions. All of the computation is conducted in the R programming environment (R Core Team, 2020) and replicable scripts are made available at [https://github.com/hiendn/Universal_EB](https://github.com/hiendn/Universal_EB). Where unspecified, numerical optimization is conducted using the optim() or optimize() functions in the case of multivariate and univariate optimization, respectively. ### Stein's problem We begin by studying the estimation of normal means, as originally considered in Stein (1956). Here, we largely follow the exposition of Efron (2010, Ch. 1) and note that the estimator falls within the shrinkage paradigm exposited in Serdobolskii (2008). We consider this setting due to its simplicity and the availability of a simple EB-based method to compare our methodology against. Let \(\left(\left(X_{i},\Theta_{i}\right)\right)_{i\in[n]}\) be IID and for each \(i\in[n]\), \(\Theta_{i}\sim\mathrm{N}\left(0,\psi^{2}\right)\) (\(\psi^{2}>0\)) and \(\left[X_{i}|\Theta_{i}=\theta_{i}\right]\sim\mathrm{N}\left(\theta_{i},1\right)\), where \(\mathrm{N}\left(\mu,\sigma^{2}\right)\) is the normal law with mean \(\mu\in\mathbb{R}\) and variance \(\sigma^{2}>0\). We assume that \(\psi^{2}\) is unknown and that we observe data \(\mathbf{D}_{n}\) and wish to construct CIs for the realizations \(\theta_{n}^{*}\), which characterize the DGP of the observations \(X_{n}\). Following Efron (2010, Sec. 1.5), when \(\psi^{2}\) is known, the posterior distribution of \(\left[\Theta_{n}|X_{n}=x_{n}\right]\) is \(\mathrm{N}\left(g\left(\psi^{2}\right)x_{n},g\left(\psi^{2}\right)\right)\), where \(g\left(\psi^{2}\right)=\psi^{2}/\left(1+\psi^{2}\right)\). Using the data \(\mathbf{D}_{n}\), we have the fact that \(\sum_{i=1}^{n-1}X_{i}^{2}\sim\left(\psi^{2}+1\right)\chi_{n-1}^{2}\), where \(\chi_{\nu}^{2}\) is the chi-squared distribution with \(\nu\) degrees of freedom. This implies a method-of-moment estimator for \(g\) of the form: \(\bar{g}_{n}=1-\left(n-2\right)/\sum_{i=1}^{n}X_{i}^{2}\), in the case of unknown \(\psi^{2}\). We can simply approximate the distribution of \(\left[\Theta_{n}|\mathbf{D}_{n}\right]\) as \(\mathrm{N}\left(\bar{g}_{n}X_{n},\bar{g}_{n}\right)\), although this approximation ignores the variability of \(\bar{g}_{n}\). As noted by Efron (2010, Sec. 1.5), via a hierarchical Bayesian interpretation using an objective Bayesian prior, we may instead deduce the more accurate approximate distribution: \[\mathrm{N}\left(\bar{g}_{n}X_{n},\bar{g}_{n}+2\left[X_{n}\left(1-\bar{g}_{n} \right)^{2}\right]/\left[n-2\right]\right). \tag{8}\] Specifically, Efron (2010) considers the hyperparameter \(\psi^{2}\) as being a random variable, say \(\Psi^{2}\), and places a so-called objective (or non-informative) prior on \(\Psi^{2}\). In particular, the improper prior assumption that \(\Psi^{2}+1\sim\mathrm{Uniform}\left(0,\infty\right)\) is made. Then, it follows from careful derivation that \[\mathrm{E}\left[\Theta_{n}|\mathbf{D}_{n}\right]=\bar{g}_{n}X_{n}\text{ and }\mathrm{var}\left[\Theta_{n}|\mathbf{D}_{n}\right]=\bar{g}_{n}+\frac{2X_{n} \left(1-\bar{g}_{n}\right)^{2}}{n-2},\] and thus we obtain (8) via a normal approximation for the distribution of \(\left[\Theta_{n}|\mathbf{D}_{n}\right]\) (cf. Morris 1983, Sec. 4). The approximation then provides \(100\left(1-\alpha\right)\%\) posterior credible intervals for \(\Theta_{n}\) of the form \[\bar{g}_{n}X_{n}\pm\zeta_{1-\alpha/2}\sqrt{\bar{g}_{n}+\frac{2\left[X_{n}\left(1 -\bar{g}_{n}\right)^{2}\right]}{n-2}}, \tag{9}\] where \(\zeta_{1-\alpha/2}\) is the \(\left(1-\alpha/2\right)\) quantile of the standard normal distribution. This posterior result can then be taken as an approximate \(100\left(1-\alpha\right)\%\) confidence interval for \(\theta_{n}^{*}\). Now, we wish to apply the FSEB results from Section 2. Here, \(\mathbb{I}=\left\{n\right\}\), and from the setup of the problem, we have \[f\left(x_{n}|\theta_{n}\right)=\phi\left(x_{n};\theta_{n},1\right)\text{ and }\pi\left(\theta_{n};\psi\right)=\phi\left(\theta_{n};0,\psi^{2}\right),\] where \(\phi\left(x;\mu,\sigma^{2}\right)\) is the normal PDF with mean \(\mu\) and variance \(\sigma^{2}\). Thus, \[L_{\mathbb{I}}\left(\psi\right)=\int_{\mathbb{R}}\phi\left(X_{n};\theta,1 \right)\phi\left(\theta;0,\psi^{2}\right)\mathrm{d}\theta=\phi\left(X_{n};0,1+ \psi^{2}\right)\] and \(l_{\mathbb{I}}\left(\theta_{n}\right)=\phi\left(x_{n};\theta_{n},1\right)\), which yields a ratio statistic of the form \[R_{\mathbb{I},n}\left(\theta_{n}\right) =L_{\mathbb{I}}\left(\psi_{-n}\right)/l_{\mathbb{I}}\left(\theta _{n}\right)\] \[=\phi\left(X_{n};0,1+\hat{\psi}_{-n}^{2}\right)/\phi\left(X_{n}; \theta_{n},1\right),\] when combined with an appropriate estimator \(\hat{\psi}_{-n}^{2}\) for \(\psi^{2}\), using only \(\bar{\mathbf{D}}_{\mathbb{I},n}=\mathbf{D}_{n-1}\). We can obtain the region \(\mathcal{C}_{\mathbb{I}}^{\alpha}\left(\mathbf{D}_{n}\right)\) by solving \(R_{\mathbb{I},n}\left(\theta_{n}\right)\leq 1/\alpha\) to obtain: \[\left(X_{n}-\theta\right)^{2}\leq 2\log\left(1/\alpha\right)+2\log\left(1+ \hat{\psi}_{-n}^{2}\right)+\frac{X_{n}^{2}}{\left(1+\hat{\psi}_{-n}^{2}\right)},\] which, by Proposition 1, yields the \(100\left(1-\alpha\right)\%\) CI for \(\theta_{n}^{*}\): \[X_{n}\pm\sqrt{2\log\left(1/\alpha\right)+2\log\left(1+\hat{\psi}_{-n}^{2} \right)+\frac{X_{n}^{2}}{\left(1+\hat{\psi}_{-n}^{2}\right)}}. \tag{10}\] We shall consider implementations of the CI of form (10) using the estimator \[\hat{\psi}_{-n}^{2}=\max\left\{0,s_{-n}^{2}-1\right\},\] where \(s_{-n}^{2}\) is the sample variance of the \(\bar{\mathbf{D}}_{\mathbb{I},n}\), and \(s_{-n}^{2}-1\) is the method of moment estimator of \(\psi^{2}\). The maximum operator stops the estimator from becoming negative and causes no problems in the computation of (10). We now compare the performances of the CIs of forms (9) and (10). To do so, we shall consider data sets of sizes \(n\in\left\{10,100,1000\right\}\), \(\psi^{2}\in\left\{1^{2},5^{2},10^{2}\right\}\), and \(\alpha\in\left\{0.05,0.005,0.0005\right\}\). For each triplet \(\left(n,\psi^{2},\alpha\right)\), we repeat the computation of (9) and (10) 1000 times and record the coverage probability and average relative widths of the intervals (computed as the width of (10) divided by that of (9)). The results of our experiment are presented in Table 1. From Table 1, we observe that the CIs of form (9) tended to produce intervals with the desired levels of coverage, whereas the FSEB CIs of form (10) tended to be conservative and contained the parameter of interest in almost all replications. The price that is paid for this conservativeness is obvious when viewing the relative widths, which implies that for 95% CIs, the EB CIs of form (10) are twice as wide, on average, when compared to the CIs of form (9). However, the relative widths decrease as \(\alpha\) gets smaller, implying that the intervals perform relatively similarly when a high level of confidence is required. We further observe that \(n\) and \(\psi^{2}\) had little effect on the performances of the intervals except in the case when \(n=10\) and \(\psi^{2}=1\), whereupon it was possible for the intervals of form (9) to not be computable in some cases. \begin{table} \begin{tabular}{l l l l l l} \hline \(n\) & \(\psi^{2}\) & \(\alpha\) & Coverage of (9) & Coverage of (10) & Relative Width \\ \hline \hline 10 & \(1^{2}\) & 0.05 & 0.948\({}^{*}\) & 1.000\({}^{*}\) & 1.979\({}^{*}\) \\ & & 0.005 & 0.988\({}^{*}\) & 1.000\({}^{*}\) & 1.738\({}^{*}\) \\ & & 0.0005 & 0.993\({}^{*}\) & 1.000\({}^{*}\) & 1.641\({}^{*}\) \\ & \(5^{2}\) & 0.05 & 0.943 & 1.000 & 1.902 \\ & & 0.005 & 0.994 & 1.000 & 1.543 \\ & & 0.0005 & 0.999 & 1.000 & 1.388 \\ & \(10^{2}\) & 0.05 & 0.947 & 1.000 & 2.058 \\ & & 0.005 & 0.994 & 1.000 & 1.633 \\ & & 0.0005 & 0.999 & 1.000 & 1.455 \\ \hline 100 & \(1^{2}\) & 0.05 & 0.937 & 0.999 & 2.068 \\ & & 0.005 & 0.997 & 1.000 & 1.806 \\ & & 0.0005 & 1.000 & 1.000 & 1.697 \\ & \(5^{2}\) & 0.05 & 0.949 & 1.000 & 1.912 \\ & & 0.0005 & 0.995 & 1.000 & 1.540 \\ & & 0.0005 & 1.000 & 1.000 & 1.395 \\ & \(10^{2}\) & 0.05 & 0.947 & 1.000 & 2.068 \\ & & 0.005 & 0.995 & 1.000 & 1.635 \\ & & 0.0005 & 0.999 & 1.000 & 1.455 \\ \hline 1000 & \(1^{2}\) & 0.05 & 0.949 & 0.999 & 2.087 \\ & & 0.005 & 0.991 & 1.000 & 1.815 \\ & & 0.0005 & 1.000 & 1.000 & 1.705 \\ & \(5^{2}\) & 0.05 & 0.963 & 1.000 & 1.910 \\ & & 0.005 & 0.997 & 1.000 & 1.544 \\ & & 0.0005 & 1.000 & 1.000 & 1.399 \\ & \(10^{2}\) & 0.05 & 0.942 & 1.000 & 2.066 \\ & & 0.005 & 0.995 & 1.000 & 1.632 \\ & & 0.0005 & 0.999 & 1.000 & 1.455 \\ \hline \end{tabular} \({}^{*}\)The results on these lines are computed from 968, 967, and 969 replicates, respectively, from top to bottom. This was due to the negative estimates of the standard error in the computation of (9). \end{table} Table 1: Stein’s problem simulation results reported as average performances over 1000 replications. From these results we can make a number of conclusions. Firstly, if one is willing to make the necessary hierarchical and objective Bayesian assumptions, as stated in Efron (2010, Sec. 1.5), then the intervals of form (9) provide very good performance. However, without those assumptions, we can still obtain reasonable CIs that have correct coverage via the FSEB methods from Section 2. Furthermore, these intervals become more efficient compared to (9) when higher levels of confidence are desired. Lastly, when \(n\) is small and \(\psi^{2}\) is also small, the intervals of form (9) can become uncomputable and thus one may consider the use of (10) as an alternative. ### Poisson-gamma count model The following example is taken from Koenker and Gu (2017) and was originally studied in Norberg (1989) and then subsequently in Haastrup (2000). In this example, we firstly consider IID parameters \(\left(\Theta_{i}\right)_{i\in[n]}\) generated with gamma DGP: \(\Theta_{i}\sim\operatorname{Gamma}\left(a,b\right)\), for each \(i\in[n]\), where \(a>0\) and \(b>0\) are the shape and rate hyperparameters, respectively, which we put into \(\mathbf{\psi}\). Then, for each \(i\), we suppose that the data \(\mathbf{D}_{n}=\left(X_{i}\right)_{i\in[n]}\), depending on the covariate sequence \(\mathbf{w}_{n}=\left(w_{i}\right)_{i\in[n]}\), has the Poisson DGP: \(\left[X_{i}|\Theta_{i}=\theta_{i}\right]\sim\operatorname{Poisson}\left( \theta_{i}w_{i}\right)\), where \(w_{i}>0\). We again wish to use the data \(\mathbf{D}_{n}\) to estimate the realization of \(\Theta_{n}\): \(\theta_{n}^{*}\), which characterizes the DGP of \(X_{n}\). Under the specification above, for each \(i\), we have the fact that \(\left(X_{i},\Theta_{i}\right)\) has the joint PDF: \[f\left(x_{i},\theta_{i};\mathbf{\psi}\right)=\frac{b^{a}}{\Gamma\left(a\right)} \theta_{i}^{a-1}\exp\left(b\theta_{i}\right)\frac{\left(\theta_{i}w_{i}\right) ^{x_{i}}\exp\left(-\theta_{i}w_{i}\right)}{x_{i}}, \tag{11}\] which we can marginalize to obtain \[f\left(x_{i};\mathbf{\psi}\right)=\binom{x_{i}+a+1}{x_{i}}\left(\frac{b}{w_{i}+b} \right)^{a}\left(\frac{w_{i}}{w_{i}+b}\right)^{x_{i}}, \tag{12}\] and which can be seen as a Poisson-gamma mixture model. We can then construct the likelihood of \(\mathbf{D}_{n}\) using expression (12), from which we may compute maximum likelihood estimates \(\hat{\mathbf{\psi}}_{n}=\left(\hat{a}_{n},\hat{b}_{n}\right)\) of \(\mathbf{\psi}\). Upon noting that (11) implies the conditional expectation \(\operatorname{E}\left[\Theta_{i}|X_{i}=x_{i}\right]=\left(x_{i}+a\right)/ \left(w_{i}+b\right)\), we obtain the estimator for \(\theta_{n}^{*}\): \[\hat{\theta}_{n}=\frac{X_{i}+\hat{a}_{n}}{w_{i}+\hat{b}_{n}}. \tag{13}\] #### 3.2.1 Confidence intervals We again wish to apply the general result from Section 2 to construct CIs. Firstly, we have \(\mathbb{I}=\left\{n\right\}\) and \[f\left(x_{n}|\theta_{n}\right)=\frac{\left(\theta_{n}w_{n}\right)^{x_{n}}\exp \left(-\theta_{n}w_{n}\right)}{x_{n}}\text{ and }\pi\left(\theta_{n};\mathbf{\psi} \right)=\frac{b^{a}}{\Gamma\left(a\right)}\theta_{n}^{a-1}\exp\left(b\theta_{ n}\right).\] As per (12), we can write \[L_{\mathbb{I}}\left(\mathbf{\psi}\right)=\begin{pmatrix}X_{n}+a+1\\ X_{n}\end{pmatrix}\begin{pmatrix}b\\ w_{n}+b\end{pmatrix}^{a}\begin{pmatrix}w_{n}\\ w_{n}+b\end{pmatrix}^{X_{n}}.\] Then, since \(l_{\mathbb{I}}\left(\theta_{n}\right)=f\left(X_{n}|\theta_{n}\right)\), we have \[R_{\mathbb{I},n}\left(\theta_{n}\right) =L_{\mathbb{I}}\left(\mathbf{\psi}\right)/l_{\mathbb{I}}\left(\theta_ {n}\right)\] \[=\begin{pmatrix}X_{n}+\hat{a}_{-n}+1\\ X_{n}\end{pmatrix}\begin{pmatrix}\hat{b}_{-n}\\ \hline w_{n}+\hat{b}_{-n}\end{pmatrix}^{\hat{a}_{-n}}\begin{pmatrix}w_{n}\\ \hline w_{n}+\hat{b}_{-n}\end{pmatrix}^{X_{n}}\frac{X_{n}}{\left(\theta_{n}w_{ n}\right)^{X_{n}}\exp\left(-\theta_{n}w_{n}\right)},\] when combined with an estimator \(\hat{\mathbf{\psi}}_{-n}=\left(\hat{a}_{-n},\hat{b}_{-n}\right)\) of \(\mathbf{\psi}\), using only \(\bar{\mathbf{D}}_{\mathbb{I},n}=\mathbf{D}_{n-1}\). For any \(\alpha\in\left(0,1\right)\), we then obtain a \(100\left(1-\alpha\right)\%\) CI for \(\theta_{n}\) by solving \(R_{\mathbb{I},n}\left(\theta_{n}\right)\leq 1/\alpha\), which can be done numerically. We shall use the MLE of \(\mathbf{\psi}\), computed with the data \(\bar{\mathbf{D}}_{\mathbb{I},n}\) and marginal PDF (12), as the estimator \(\hat{\mathbf{\psi}}_{-n}\). To demonstrate the performance of the CI construction, above, we conduct the following numerical experiment. We generate data sets consisting of \(n\in\left\{10,100,1000\right\}\) observations characterized by hyperparameters \(\mathbf{\psi}=\left(a,b\right)=\left\{\left(2,2\right),\left(2,5\right),\left(5,2 \right)\right\}\), and we compute intervals using significance levels \(\alpha\in\left\{0.05,0.005,0.0005\right\}\). Here, we shall generate \(\mathbf{w}_{n}\) IID uniformly between \(0\) and \(10\). For each triplet \(\left(n,\mathbf{\psi},\alpha\right)\), we repeat the construction of our CIs \(1000\) times and record the coverage probability and average width for each case. The results of the experiment are reported in Table 2. From Table 2, we observe that the empirical coverage of the CIs are higher than the nominal value and are thus behaving as per the conclusions of Proposition 1. As expected, we also find that increasing the nominal confidence level also increases the coverage proportion, but at a cost of increasing the lengths of the CIs. From the usual asymptotic theory of maximum likelihood estimators, we anticipate that increasing \(n\) will decrease the variance of the estimator \(\hat{\mathbf{\psi}}_{-n}\). However, as in Section 3.1, this does not appear to have any observable effect on either the coverage proportion nor lengths of the CIs. #### 3.2.2 Hypothesis tests Next, we consider testing the null hypothesis \(\mathrm{H}_{0}\): \(\theta_{n-1}^{*}=\theta_{n}^{*}\). To this end, we use the hypothesis testing framework from Section 2. That is, we let \(\mathbb{I}=\left\{n-1,n\right\}\) and estimate \(\mathbf{\psi}\) via the maximum likelihood estimator \(\hat{\mathbf{\psi}}_{\mathbb{I},n}=\left(a_{\mathbb{I},n},b_{\mathbb{I},n}\right)\), computed from the data \(\bar{\mathbf{D}}_{\mathbb{I},n}=\mathbf{D}_{n-2}\). We can write \[L_{\mathbb{I}}\left(\hat{\mathbf{\psi}}_{\mathbb{I},n}\right)=\prod_{i=n-1}^{n} \begin{pmatrix}X_{i}+a_{\mathbb{I},n}+1\\ X_{i}\end{pmatrix}\begin{pmatrix}b_{\mathbb{I},n}\\ w_{i}+b_{\mathbb{I},n}\end{pmatrix}^{a_{\mathbb{I},n}}\begin{pmatrix}w_{i}\\ \hline w_{i}+b_{\mathbb{I},n}\end{pmatrix}^{X_{i}},\] \begin{table} \begin{tabular}{l l l l l} \hline \(n\) & \(\boldsymbol{\psi}\) & \(\alpha\) & Coverage & Length \\ \hline \hline 10 & \((2,2)\) & 0.05 & 0.998 & 3.632 \\ & & 0.005 & 1.000 & 5.484 \\ & & 0.0005 & 1.000 & 6.919 \\ & \((2,5)\) & 0.05 & 0.999 & 2.976 \\ & & 0.005 & 0.999 & 3.910 \\ & & 0.0005 & 1.000 & 5.481 \\ & \((5,2)\) & 0.05 & 0.997\({}^{*}\) & 5.468\({}^{*}\) \\ & & 0.005 & 0.999\({}^{*}\) & 7.118\({}^{*}\) \\ & & 0.0005 & 1.000\({}^{*}\) & 8.349\({}^{*}\) \\ \hline 100 & \((2,2)\) & 0.05 & 0.998 & 3.898 \\ & & 0.005 & 0.999 & 5.277 \\ & & 0.0005 & 1.000 & 6.883 \\ & \((2,5)\) & 0.05 & 0.999 & 2.958 \\ & & 0.005 & 1.000 & 3.914 \\ & & 0.0005 & 1.000 & 5.374 \\ & \((5,2)\) & 0.05 & 1.000 & 5.628 \\ & & 0.005 & 1.000 & 7.124 \\ & & 0.0005 & 1.000 & 8.529 \\ \hline 1000 & \((2,2)\) & 0.05 & 1.000 & 4.070 \\ & & 0.005 & 1.000 & 5.424 \\ & & 0.0005 & 1.000 & 6.344 \\ & \((2,5)\) & 0.05 & 0.999 & 3.049 \\ & & 0.005 & 1.000 & 3.960 \\ & & 0.0005 & 1.000 & 5.479 \\ & \((5,2)\) & 0.05 & 0.998 & 5.297 \\ & & 0.005 & 1.000 & 7.205 \\ & & 0.0005 & 1.000 & 8.714 \\ \hline \end{tabular} \({}^{*}\)The results on these lines are computed from 999, 999, and 998 replicates, respectively. This was due to there being no solutions to the inequality \(R_{\mathbb{I},n}\left(\theta_{n}\right)\leq 1/\alpha\), with respect to \(\theta_{n}>0\) in some cases. \end{table} Table 2: Experimental results for CIs constructed for Poisson–gamma count models. The Coverage and Length columns report the coverage proportion and average lengths in each scenario, as computed from 1000 replications. \[l_{\mathbb{I}}\left(\mathbf{\phi}_{1}^{*}\right)=\prod_{i=n-1}^{n}\frac{\left(\theta_{ i}^{*}w_{i}\right)^{X_{n}}\exp\left(-\theta_{i}^{*}w_{i}\right)}{X_{i}},\] and \(\mathbf{\vartheta}_{\mathbb{I}}^{*}=\left(\theta_{n-1}^{*},\theta_{n}^{*}\right)\). We are also required to compute the maximum likelihood estimator of \(\mathbf{\vartheta}_{\mathbb{I}}^{*}\), under \(\mathrm{H}_{0}\), as per (7), which can be written as \[\tilde{\mathbf{\vartheta}}_{\mathbb{I}}\in\left\{\tilde{\mathbf{\theta}}=\left(\theta,\theta\right):l_{\mathbb{I}}\left(\tilde{\mathbf{\theta}}\right)=\sup_{\theta>0 }\ \prod_{i=n-1}^{n}\frac{\left(\theta w_{i}\right)^{X_{n}}\exp\left(-\theta w_{ i}\right)}{X_{i}}\right\}.\] Using the components above, we define the test statistic \(T_{\mathbb{I}}\left(\mathbf{D}_{n}\right)=L_{\mathbb{I}}\left(\hat{\mathbf{ \psi}}_{\mathbb{I},n}\right)/l_{\mathbb{I}}\left(\tilde{\mathbf{\vartheta}}_{ \mathbb{I}}\right)\), from which we can derive the \(p\)-value \(P_{\mathbb{I}}\left(\mathbf{D}_{n}\right)=1/T_{\mathbb{I}}\left(\mathbf{D}_{ n}\right)\) for testing \(\mathrm{H}_{0}\). To demonstrate the application of this test, we conduct another numerical experiment. As in Section 3.2.1, we generate data sets of sizes \(n\in\left\{10,100,1000\right\}\), where the data \(\mathbf{D}_{n-1}\) are generated with parameters \(\left(\Theta_{i}\right)_{i\in\left[n-1\right]}\) arising from gamma distributions with hyperparameters \(\mathbf{\psi}=\left(a,b\right)=\left\{\left(2,2\right),\left(2,5\right),\left(5,2 \right)\right\}\). The final observation \(X_{n}\), making up \(\mathbf{D}_{n}\), is then generated with parameter \(\Theta_{n}=\Theta_{n-1}+\Delta\), where \(\Delta\in\left\{0,1,5,10\right\}\). As before, we generate the covariate sequence \(\mathbf{w}_{n}\) IID uniformly between 0 and 10. For each triplet \(\left(n,\mathbf{\psi},\Delta\right)\), we test \(\mathrm{H}_{0}\): \(\theta_{n-1}^{*}=\theta_{n}^{*}\) 1000 times and record the average number of rejections under at the levels of significance \(\alpha\in\left\{0.05,0.005,0.0005\right\}\). The results are then reported in Table 3. The results for the \(\Delta=0\) cases in Table 3 show that the tests reject true null hypotheses at below the nominal sizes \(\alpha\), in accordance with Proposition 2. For each combination of \(n\) and \(\mathbf{\psi}\), as \(\Delta\) increases, the proportion of rejections increase, demonstrating that the tests become more powerful when detecting larger differences between \(\theta_{n-1}^{*}\) and \(\theta_{n}^{*}\), as expected. There also appears to be an increase in power due to larger sample sizes. This is an interesting outcome, since we can only be sure that sample size affects the variability of the estimator \(\mathbf{\psi}_{\mathbb{I},n}\). Overall, we can be confident that the tests are behaving as required, albeit they may be somewhat underpowered as they are not achieving the nominal sizes. ### Beta-binomial data series Data from genome-level biological studies, using modern high-throughput sequencing technologies (Krueger et al., 2012), often take the form of a series of counts, which may be modelled through sets of non-identical (possibly correlated) binomial distributions, with beta priors, in a Bayesian framework. The question of interest may vary, for example, from assessing the range of likely values for the binomial parameter in a particular region of the data, to comparing whether two sections of one or more data series are generated from identical distributions. For purposes of demonstrating the performance of the FSEB method in these scenario, we will make the simplifying assumption that all data points are \begin{table} \begin{tabular}{l c c c c c} \hline & & & \multicolumn{3}{c}{Rejection Proportion at level \(\alpha\)} \\ \(n\) & \(\boldsymbol{\psi}\) & \(\Delta\) & 0.05 & 0.005 & 0.0005 \\ \hline \hline 10 & \((2,2)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 1 & 0.004 & 0.000 & 0.000 \\ & & 5 & 0.280 & 0.193 & 0.128 \\ & & 10 & 0.413 & 0.363 & 0.317 \\ & \((2,5)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 1 & 0.007 & 0.002 & 0.000 \\ & & 5 & 0.143 & 0.096 & 0.064 \\ & & 10 & 0.222 & 0.192 & 0.170 \\ & \((5,2)\) & 0 & 0.001 & 0.000 & 0.000 \\ & & 1 & 0.001 & 0.000 & 0.000 \\ & & 5 & 0.177 & 0.107 & 0.052 \\ & & 10 & 0.389 & 0.320 & 0.254 \\ \hline 100 & \((2,2)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 1 & 0.014 & 0.003 & 0.000 \\ & & 5 & 0.401 & 0.289 & 0.194 \\ & & 10 & 0.562 & 0.489 & 0.427 \\ & \((2,5)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 1 & 0.015 & 0.000 & 0.000 \\ & & 5 & 0.208 & 0.127 & 0.074 \\ & & 10 & 0.296 & 0.235 & 0.179 \\ & \((5,2)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 1 & 0.004 & 0.000 & 0.000 \\ & & 5 & 0.264 & 0.150 & 0.090 \\ & & 10 & 0.500 & 0.425 & 0.344 \\ \hline 1000 & \((2,2)\) & 0 & 0.001 & 0.000 & 0.000 \\ & & 1 & 0.021 & 0.001 & 0.000 \\ & & 5 & 0.423 & 0.300 & 0.216 \\ & & 10 & 0.576 & 0.513 & 0.450 \\ & \((2,5)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 1 & 0.012 & 0.000 & 0.000 \\ & & 5 & 0.185 & 0.108 & 0.061 \\ & & 10 & 0.321 & 0.254 & 0.197 \\ \((5,2)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 1 & 0.003 & 0.001 & 0.000 \\ & & 5 & 0.276 & 0.168 & 0.088 \\ & & 10 & 0.507 & 0.428 & 0.354 \\ \hline \end{tabular} \end{table} Table 3: Experimental results for testing the hypothesis H\({}_{0}\): \(\theta_{n-1}^{*}=\theta_{n}^{*}\) for Poisson–gamma count models. The Rejection Proportion columns report the average number of rejections, from 1000 tests, at levels of significance \(\alpha\in\{0.05,0.005,0.0005\}\). independently distributed, within, as well as across, any of \(G\) data series that may be observed. #### 3.3.1 Confidence Sets First, let us assume that we only have a single series, i.e. \(G=1\). Then, we can assume \(X_{i}\sim\text{Bin}(m_{i},\theta_{i})\), and propose a common prior distribution for \(\Theta_{i}\) (\(i=1,\ldots,n\)): \(\text{Beta}(\gamma,\beta)\). Using the techniques described in Section 2, we can find confidence sets for \(\theta_{i}^{*}\), (\(i=1,\ldots,n\)). For each \(i\), we define, as previously, a subset \(\mathbb{I}=\{i\}\), so that \(\mathbf{D}_{\mathbb{I}}=X_{i}\) and \(\overline{\mathbf{D}}_{\mathbb{I}}=\left(X_{i}\right)_{i\in[n]\backslash\{i\}}\). We then have, \[R_{\mathbb{I},n}\left(\boldsymbol{\vartheta}_{\mathbb{I}}\right)=\frac{L_{ \mathbb{I}}\left(\hat{\boldsymbol{\psi}}_{\mathbb{I},n}\right)}{l_{\mathbb{I} }\left(\boldsymbol{\vartheta}_{\mathbb{I}}\right)},\] where \[l_{\mathbb{I}}\left(\boldsymbol{\vartheta}_{\mathbb{I}}\right)=\binom{m_{i}}{ x_{i}}\theta_{i}^{x_{i}}(1-\theta_{i})^{m_{i}-x_{i}}\] and \[L_{\mathbb{I}}\left(\hat{\boldsymbol{\psi}}_{\mathbb{I},n}\right)=\int_{ \theta_{i}}f(x_{i}|\theta_{i})\pi(\theta_{i};\;\hat{\gamma}_{-n},\hat{\beta}_{ -n})\text{d}\theta_{i},\] which gives the ratio \[R_{\mathbb{I},n}\left(\boldsymbol{\vartheta}_{\mathbb{I}}\right)=\frac{B(x_{i }+\hat{\gamma}_{-n},m_{i}-x_{i}+\hat{\beta}_{-n})}{B(\hat{\gamma},\hat{\beta}_ {-n})\theta_{i}^{x_{i}}(1-\theta_{i})^{m_{i}-x_{i}}}. \tag{14}\] Here, \(\hat{\gamma}_{-n}\) and \(\hat{\beta}_{-n}\) are the empirical Bayes estimates of \(\gamma\) and \(\beta\), given by \[\hat{\gamma}_{-n}=(\hat{\phi}_{\text{EB}}^{-1}-1)\hat{\mu}_{\text{EB}}\] and \[\hat{\beta}_{-n}=(\hat{\phi}_{\text{EB}}^{-1}-1)(1-\hat{\mu}_{\text{EB}}),\] where \[\hat{\mu}_{\text{EB}} =\frac{1}{n-1}\sum_{j\in[n]\backslash i}\frac{x_{j}}{m_{j}},\] \[\hat{\phi}_{\text{EB}} =\left[\frac{\bar{m}\hat{V}_{x}}{\mu(1-\mu)}-1\right]\bigg{/}(\bar {m}-1),\] \(\bar{m}=\frac{1}{n-1}\sum_{j\in[n]\setminus i}m_{j}\), and \(\hat{V}_{x}=\frac{1}{n-1}\sum_{j\in[n]\setminus i}(\frac{x_{j}}{m_{j}}-\hat{\mu} _{\text{EB}})^{2}\). Further, \(B\left(a,b\right)=\int_{0}^{1}t^{a-1}\left(1-t\right)^{b-1}\mathrm{d}t\) is the Beta function, taking inputs \(a>0\) and \(b>0\). We simulated data from the binomial model under two cases: (a) setting beta hyperparameters \((\alpha,\beta)=(10,10)\), and hierarchically simulating \(\theta_{i}^{*}\), \(i\in[n]\), and then \(x_{i}\) from a binomial distribution; and (b) setting a range of \(\theta_{i}^{*}\) (\(i\in[n]\)) values equidistantly spanning the interval \((0.1,0.9)\) for \(n=10,100\). Here, \(m_{i}\) (\(i\in[n]\)) were given integer values uniformly generated in the range \([15,40]\). In all cases, it was seen that the CIs had perfect coverage, always containing the true value of \(\theta_{i}^{*}\). An example of the \(n=10\) case is shown in Figure 1. #### 3.3.2 Hypothesis testing Aiming to detect genomic regions that may have differing characteristics between two series, a pertinent question of interest may be considered by testing the hypotheses: \(H_{0}\): \(\theta_{i1}^{*}=\theta_{i2}^{*}\) vs. \(H_{1}\): \(\theta_{i1}^{*}\neq\theta_{i2}^{*}\), for every \(i\in[n]\) (with \(G=2\) series). Then, \(\mathbf{D}_{n}=\left(\boldsymbol{X}_{i}\right)_{i\in[n]}\), where \(\boldsymbol{X}_{i}=(X_{i1},X_{i2})\). From Section 2, the ratio test statistic takes the form \[T_{\mathbb{I}}\left(\mathbf{D}_{n}\right)=L_{\mathbb{I}}\left(\hat{\gamma}_{ \mathbb{I},n},\hat{\beta}_{\mathbb{I},n}\right)/l_{\mathbb{I}}\left(\tilde{ \boldsymbol{\vartheta}}_{\mathbb{I}}\right),\] Figure 1: Plots of 95% confidence regions for \(\theta_{i}^{*}\) when true values of \(\theta_{i}^{*}\) span the interval \(0.1\) to \(0.9\) (\(n=10\)). Here, the 95% CIs are given by the points where the curves for \(\log R_{\mathbb{I},n}\left(\boldsymbol{\vartheta}_{\mathbb{I}}\right)\) intersect with the horizontal line (black), representing a confidence level of \(1-\alpha=0.95\). Each CI can be seen to contain the corresponding true value of \(\theta_{i}^{*}\), represented by a vertical line of the same colour as the interval. where \(\hat{\gamma}_{\mathbb{I},n}\) and \(\hat{\beta}_{\mathbb{I},n}\) are EB estimators of \(\gamma\) and \(\beta\), depending only on \(\bar{\mathbf{D}}_{\mathbb{I},n}=\mathbf{D}_{n}\backslash\{X_{i1},X_{i2}\}\). With \(\hat{\vartheta}_{\mathbb{I}}=\frac{x_{i1}+x_{i2}}{m_{i1}+m_{i2}}=\tilde{ \theta}_{i}\), write \(l_{\mathbb{I}}\left(\tilde{\vartheta}_{\mathbb{I}}\right)=f(x_{i1},x_{i2}| \tilde{\theta}_{i})\), and \[L_{\mathbb{I}}\left(\hat{\gamma}_{\mathbb{I},n},\hat{\beta}_{ \mathbb{I},n}\right) =\int_{\mathbb{T}}f(x_{i1}|\mathbf{\theta}_{i})f(x_{i2}|\mathbf{\theta}_ {i})\pi(\mathbf{\theta}_{i};\ \hat{\gamma}_{\mathbb{I},n},\hat{\beta}_{\mathbb{I},n}) \mathrm{d}\mathbf{\theta}_{i}\] \[=\binom{m_{i1}}{x_{i1}}\binom{m_{i2}}{x_{i2}}\frac{B(x_{i1}+\hat {\gamma}_{\mathbb{I},n},m_{i1}-x_{i1}+\hat{\beta}_{\mathbb{I},n})B(x_{i2}+ \hat{\gamma}_{\mathbb{I},n},m_{i2}-x_{i2}+\hat{\beta}_{\mathbb{I},n})}{\left[ B(\hat{\gamma}_{\mathbb{I},n},\hat{\beta}_{\mathbb{I},n})\right]^{2}},\] which gives \[T_{\mathbb{I}}\left(\mathbf{D}_{n}\right)=\frac{B(x_{i1}+\hat{ \gamma}_{\mathbb{I},n},m_{i1}-x_{i1}+\hat{\beta}_{\mathbb{I},n})B(x_{i2}+\hat {\gamma}_{\mathbb{I},n},m_{i2}-x_{i2}+\hat{\beta}_{\mathbb{I},n})}{[B(\hat{ \gamma}_{\mathbb{I},n},\hat{\beta}_{\mathbb{I},n})]^{2}\hat{\theta}_{i1}^{x_{ i1}+x_{i2}}(1-\tilde{\theta}_{i})^{m_{i1}+m_{i2}-x_{i1}-x_{i2}}},\] where \(\hat{\gamma}_{\mathbb{I},n}\) and \(\hat{\beta}_{\mathbb{I},n}\) are calculated in a similar fashion to Section 3.3.1 except that data from both sequences should be used to estimate \(\hat{\mu}_{\mathrm{EB}}\) and \(\hat{\phi}_{\mathrm{EB}}\), in the sense that \[\hat{\mu}_{\mathrm{EB}} =\frac{1}{2n-2}\sum_{k\neq i}\sum_{g=1}^{2}\frac{x_{kg}}{m_{kg}}, \ \text{and}\] \[\hat{\phi}_{\mathrm{EB}} =\left[\frac{\bar{m}V_{xy}}{\hat{\mu}_{\mathrm{EB}}(1-\hat{\mu}_{ \mathrm{EB}})}-1\right]\bigg{/}(\bar{m}-1),\] where \[\bar{m} =\frac{1}{2n-2}\sum_{k\neq i}\sum_{g=1}^{2}m_{kg},\ \text{and}\] \[V_{xy} =\frac{1}{2n-2}\sum_{k\neq i}\sum_{g=1}^{2}\left(\frac{x_{kg}}{m_ {kg}}-\hat{\mu}_{\mathrm{EB}}\right)^{2}.\] In our first simulation, we assessed the performance of the test statistic in terms of the Type I error. Assuming a window size of \(n=20\), realized data \((x_{i1},x_{i2})\) (\(i\in[n]\)), were simulated from independent binomial distributions with \(\theta_{i1}^{*}=\theta_{i2}^{*}=\theta_{i}^{*}\) (\(i=1,\ldots,n\)), with \(\theta_{i}^{*}\) ranging between \(0.1\) and \(0.9\), and \(m_{i1},m_{i2}\in\mathbb{N}\) uniformly and independently sampled from the range [15, 40]. The first panel of Figure 2 shows the calculated test statistic values \(T_{\mathbb{I}}\left(\mathbf{D}_{n}\right)\) for the \(20\) genomic indices on the logarithmic scale, over \(100\) independently replicated datasets, with horizontal lines displaying values \(\log(1/\alpha)\), for significance levels \(\alpha\in\{0.01,0.02,0.05\}\). No points were observed above the line corresponding to \(\alpha=0.01\), indicating that the Type I error of the test statistic does not exceed the nominal level. Next, we assessed the power of the test statistic at three levels of significance (\(\alpha\in\{0.01,0.02,0.05\}\)) and differing effect sizes. For each \(i\) (\(i\in[n]\)), \(\theta_{i1}^{*}\) was set to be a value between \(0.05\) and \(0.95\), and \(\theta_{i2}^{*}=\theta_{i1}^{*}+\Delta\), where \(0.1<\Delta<0.9\) (with \(\theta_{i2}^{*}<1\)). A sample of \(20\) replicates were simulated under each possible set of values of \((\theta_{1}^{*},\theta_{2}^{*})\). The second panel of Figure 2 shows that the power functions increased rapidly to 1 as the difference \(\Delta\) was increased. In our next numerical experiment, we generated data sets of sizes \(n\in\{10,100,1000\}\), where realized observations \(x_{i1}\), and \(x_{i2}\) are simulated from independent binomial distributions with parameters \(\theta_{i1}^{*}\) and \(\theta_{i2}^{*}\), respectively (\(i\in[n]\)). For each \(i\), \(\theta_{i1^{*}}\) was generated from a beta distribution, in turn, with hyperparameters \(\boldsymbol{\psi}=(\gamma,\beta)\in\{(2,2),(2,5),(5,2)\}\); and \(\theta_{i2}^{*}=\theta_{i1}^{*}+\Delta\), where \(\Delta\in\{0,0.2,0.5,0.9\}\). We generated 100 instances of data under each setting and assessed the power of the FSEB test statistic through the number of rejections at levels \(\alpha\in\{0.0005,0.005,0.05\}\). The results are shown in Table 4. Similarly to the Poisson-gamma example, it can be seen that the tests reject true null hypotheses at below the nominal sizes \(\alpha\), in each case. For each combination of \(n\) and \(\boldsymbol{\psi}\), as \(\Delta\) increases, the rejection rate increases, making the tests more powerful as expected, when detecting larger differences between \(\theta_{i1}^{*}\) and \(\theta_{i2}^{*}\), frequently reaching a power of 1 even when the difference was not maximal. There did not appear to be a clear increase in power with the sample size, within the settings considered. Overall, we may conclude, as previously, that the tests are behaving as expected, although both this example and the Poisson-gamma case show that the tests may be underpowered as they do not achieve the nominal size for any value of \(\alpha\). As an additional assessment of how FSEB performs in comparison to other tests in a similar setting, we carried out a number of additional simulation studies, in which FSEB was compared with Fisher's exact test and a score test, over various settings of \(n\), \(\boldsymbol{\psi}\) and \(\Delta\), as well as for different ranges of \(m_{i}\) (\(i=1\in[n]\)). Comparisons were made using the \(p\)-values as well as false discovery rate (FDR) corrected \(p\)-values arising from FDR control methods (Wang and Ramdas, 2022), and are presented Figure 2: Panel (a): Test statistic for 100 replications of the beta–binomial example under the null hypothesis of equality of proportions. The three horizontal lines correspond to cutoffs according to significance levels of \(\alpha=0.05\) (green), \(\alpha=0.02\) (blue), and \(\alpha=0.01\) (turquoise). Panel (b): Power function over different values of \(\Delta=\theta_{2}^{*}-\theta_{1}^{*}\) at three levels of significance: \(\alpha\in\{0.01,0.02,0.05\}\). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & & \multicolumn{5}{c}{Rejection proportion at level \(\alpha\)} \\ \(n\) & \(\boldsymbol{\psi}\) & \(\Delta\) & 0.0005 & 0.005 & 0.05 \\ \hline \hline 10 & \((2,2)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 0.2 & 0.000 & 0.004 & 0.039 \\ & & 0.5 & 0.305 & 0.471 & 0.709 \\ & & 0.9 & 0.980 & 1.000 & 1.000 \\ & \((2,5)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 0.2 & 0.000 & 0.001 & 0.025 \\ & & 0.5 & 0.249 & 0.464 & 0.692 \\ & & 0.9 & 0.995 & 1.000 & 1.000 \\ & \((5,2)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 0.2 & 0.000 & 0.006 & 0.052 \\ & & 0.5 & 0.281 & 0.459 & 0.690 \\ & & 0.9 & 0.993 & 0.993 & 1.000 \\ \hline 100 & \((2,2)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 0.2 & 0.000 & 0.004 & 0.037 \\ & & 0.5 & 0.272 & 0.459 & 0.700 \\ & & 0.9 & 0.996 & 0.998 & 1.000 \\ & \((2,5)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 0.2 & 0.000 & 0.003 & 0.032 \\ & & 0.5 & 0.267 & 0.459 & 0.693 \\ & & 0.9 & 0.994 & 0.999 & 1.000 \\ & \((5,2)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 0.2 & 0.000 & 0.004 & 0.047 \\ & & 0.5 & 0.269 & 0.459 & 0.697 \\ & & 0.9 & 0.987 & 0.998 & 0.999 \\ \hline 1000 & \((2,2)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 0.2 & 0.000 & 0.003 & 0.031 \\ & & 0.5 & 0.280 & 0.476 & 0.707 \\ & & 0.9 & 0.982 & 0.992 & 0.998 \\ & \((2,5)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 0.2 & 0.000 & 0.003 & 0.030 \\ & & 0.5 & 0.264 & 0.459 & 0.693 \\ & & 0.9 & 0.989 & 0.996 & 1.000 \\ \((5,2)\) & 0 & 0.000 & 0.000 & 0.000 \\ & & 0.2 & 0.000 & 0.005 & 0.047 \\ & & 0.5 & 0.279 & 0.474 & 0.706 \\ & & 0.9 & 0.986 & 0.995 & 0.999 \\ \hline \hline \end{tabular} \end{table} Table 4: Experimental results for testing the hypothesis H\({}_{0}\): \(\theta_{i1}^{s}=\theta_{i2}^{s}\) for Beta–binomial count series models. The Rejection proportion columns report the average number of rejections, from 100 test replicates, at levels of significance \(\alpha\in\{0.05,0.005,0.0005\}\). in the online Supplementary Materials (Tables S1-S8 and Figures S1-S8). It is evident in almost all cases (and especially in case C, which most closely resembles the real life application scenario) that (i) the power levels are very similar across methods, especially as values of \(n\), \(m_{i}\) (\(i\in[n]\)) and effect sizes increase, and (ii) in every case, there are some settings in which Fisher's test and the score test are anti-conservative (even after FDR correction), with their Type I error greatly exceeding the nominal levels of significance, while this never occurs for FSEB, even without FDR correction. ## 4 Real-data applications ### The Norberg data We now wish to apply the FSEB CI construction from Section 3.2.1 to produce CIs in a real data application. We shall investigate the Norberg data set from the REBayes package of Koenker and Gu (2017), obtained from Haastrup (2000). These data pertain to group life insurance claims from Norwegian workmen. Here, we have \(n=72\) observations \(\mathbf{D}_{n}\), containing total number of death claims \(X_{i}\), along with covariates \(\mathbf{w}_{n}\), where \(w_{i}\) is the number of years of exposure, normalized by a factor of \(344\), for \(i\in[n]\). Here each \(i\) is an individual occupation group. To analyze the data, we use the Poisson-gamma model and estimate the generative parameters \(\boldsymbol{\vartheta}_{n}^{*}\) using estimates of form (13). Here, each \(\theta_{i}^{*}\) can be interpreted as an unobserved multiplicative occupation specific risk factor that influences the number of claims made within occupation group \(i\). To obtain individually-valid \(95\%\) CIs for each of the \(n\) estimates, we then apply the method from Section 3.2.1. We present both the estimated risk factors and their CIs in Figure 3. From Figure 3, we notice that most of the estimates of \(\boldsymbol{\vartheta}_{n}^{*}\) are between zero and two, with the exception of occupation group \(i=22\), which has an estimated risk factor of \(\theta_{22}^{*}=2.59\). Although the risk factors are all quite small, the associated CIs can become very large, as can be seen in the top plot. This is due to the conservative nature of the CI constructions that we have already observed from Section 3.1. We observe that wider CIs were associated with observations where \(X_{i}=0\), with \(w_{i}\) being small. In particular, the largest CI, occurring for \(i=55\), has response \(X_{55}=0\) and the smallest covariate value in the data set: \(w_{55}=4.45\). The next largest CI occurs for \(i=5\) and also corresponds to a response \(X_{5}=0\) and the second smallest covariate value \(w_{5}=11.30\). However, upon observation of the bottom plot, we see that although some of the CIs are too wide to be meaningful, there are still numerous meaningful CIs that provide confidence regarding the lower limits as well as upper limits of the underlying risk factors. In particular, we observe that the CIs for occupation groups \(i=26\) and \(i=54\) are remarkably narrow and precise. Of course, the preceding Figure 3: Estimates of risk factors \(\boldsymbol{\vartheta}_{n}^{*}\) for the Norberg data set along with associated 95% CIs. The estimated risk factor for each occupation group is depicted as a cross and the associate (individually-valid) CI is depicted as a line. The top plot displays the CIs at their entire lengths, whereas the bottom plot displays only the risk factor range between 0 and 10. inferential observations are only valid when considering each of the \(n\) CIs, individually, and under the assumption that we had chosen to draw inference regarding the corresponding parameter of the CI, before any data are observed. If we wish to draw inference regarding all \(n\) elements of \(\boldsymbol{\vartheta}_{n}^{*}\), simultaneously, then we should instead construct a \(100\left(1-\alpha\right)\%\) simultaneous confidence set \(\bar{\mathcal{C}}^{\alpha}\left(\mathbf{D}_{n}\right)\), with the property that \[\Pr_{\boldsymbol{\vartheta}_{n}^{*}}\left[\boldsymbol{\vartheta}_{n}^{*}\in \bar{\mathcal{C}}^{\alpha}\left(\mathbf{D}_{n}\right)\right]\geq 1-\alpha.\] Using Bonferroni's inequality, we can take \(\bar{\mathcal{C}}^{\alpha}\left(\mathbf{D}_{n}\right)\) to be the Cartesian product of the individual \(100\left(1-\alpha/n\right)\%\) (adjusted) CI for each parameter \(\theta_{i}^{*}\): \[\bar{\mathcal{C}}^{\alpha}\left(\mathbf{D}_{n}\right)=\prod_{i=1}^{n}\mathcal{ C}_{i}^{\alpha/n}\left(\mathbf{D}_{n}\right).\] Using the \(\alpha=0.05\), we obtain the 95% simultaneous confidence set that appears in Figure 4. We observe that the simultaneous confidence set now permits us to draw useful inference regarding multiple parameters, at the same time. For example, inspecting the \(n\) adjusted CIs, we observe that the occupations corresponding to indices \(i\in\{8,22,50\}\) all have lower bounds above \(0.5\). Thus, interpreting these indices specifically, we can say that each of the three adjusted confidence intervals, which yield the inference that the risk factors \(\theta_{i}^{*}>0.5\) for \(i\in\{8,22,50\}\), contains the parameter \(\theta_{i}^{*}\) with probability \(0.95\), under repeated sampling. Since our individual CI and adjusted CI constructions are \(e\)-CIs, one can alternatively approach the problem of drawing simultaneously valid inference via the false coverage rate (FCR) controlling techniques of Xu et al. (2022). Using again the parameters \(\theta_{i}^{*}\) corresponding to \(i\in\{8,22,50\}\), as an Figure 4: Estimates of risk factors \(\boldsymbol{\vartheta}_{n}^{*}\) for the Norberg data set along with the associated simultaneous 95% confidence set. The estimated risk factors for each occupation group is depicted as a cross and the simultaneous confidence set can be constructed via the cartesian product of the adjusted CIs, depicted as lines. The plot is focused on the risk factor range between 0 and 10. example, we can use Theorem 2 of Xu et al. (2022) to make the statement that the three adjusted CIs \(\mathcal{C}_{i}^{3\alpha/n}\left(\mathbf{D}_{n}\right)\), for \(i\in\{8,22,50\}\), can be interpreted at the FCR controlled level \(\alpha\in(0,1)\), in the sense that \[\mathrm{E}_{\theta_{i}^{*}\left(\mathbf{D}_{n}\right)}\left[\frac{\sum_{i\in \mathbb{I}}\left[\theta_{i}^{*}\notin\mathcal{C}_{i}^{\left|\left(\mathbf{D}_ {n}\right)\right|\alpha/n}\left(\mathbf{D}_{n}\right)\right]}{\max\left\{1, \left|\mathbb{I}\left(\mathbf{D}_{n}\right)\right|\right\}}\right]\leq\alpha,\] where \(\mathbb{I}\left(\mathbf{D}_{n}\right)\) is a data-dependent subset of parameter indices. In particular, we observe the realization \(\{8,22,50\}\) of \(\mathbb{I}\left(\mathbf{D}_{n}\right)\), corresponding to the data-dependent rule of selecting indices with adjusted CIs \(\mathcal{C}_{i}^{\alpha/n}\left(\mathbf{D}_{n}\right)\) with lower bounds greater than \(0.5\). Here, \(\left|\mathbb{A}\right|=1\) if statement \(\mathsf{A}\) is true and \(0\), otherwise. Clearly, controlling the FCR at level \(\alpha\) yields narrower CIs for each of our the three assessed parameters than does the more blunt simultaneous confidence set approach. In particular, the \(95\%\) simultaneous adjusted CIs obtained via Bonferroni's inequality are \((0.775,4.485)\), \((1.375,5.520)\), and \((0.505,3.565)\), and the \(0.05\) level FCR controlled adjusted CIs are \((0.810,4.300)\), \((1.430,5.390)\), and \((0.555,3.390)\), for the parameters \(\theta_{i}^{*}\) corresponding to the respective parameters \(i\in\{8,22,50\}\). Overall, these are positive results as we do not know of another general method for generating CIs in this EB setting, whether individually or jointly. ### Differential methylation detection in bisulphite sequencing data DNA methylation is a chemical modification of DNA caused by the addition of a methyl (\(CH_{3}\)-) group to a DNA nucleotide - usually a C that is followed by a G - called a CpG site, which is an important factor in controlling gene expression over the human genome. Detecting differences in the methylation patterns between normal and ageing cells can shed light on the complex biological processes underlying human ageing, and hence has been an important scientific problem over the last decade (Smith and Meissner, 2013). Methylation patterns can be detected using high-throughput bisulphite sequencing experiments (Krueger et al., 2012), in which data are generated in the form of sequences of numbers of methylated cytosines, \(x_{ig}\), among the total counts of cytosines, \(m_{ig}\), for \(n\) CpG sites on a genome (\(i\in[n]\)), for \(G\) groups of cell types \(g\in[G]\). Often, there are \(G=2\) groups, as in our example that follows, for which the question of interest is to detect regions of differential methylation in the DNA of normal and ageing cells. Based on the setup above, a set of bisulphite sequencing data from an experiment with \(G\) groups might be considered as \(G\) series of (possibly correlated) observations from non-identical binomial distributions. The degree of dependence between adjacent CpG sites typically depends on the genomic distance between these loci, but since these are often separated by hundreds of bases, for the moment it is assumed that this correlation is negligible and is not incorporated into our model. #### 4.2.1 Application to Methylation data from Human chromosome 21 We evaluated the test statistic \(T_{\mathrm{I}}\left(\mathbf{D}_{n}\right)\) over a paired segment of methylation data from normal and ageing cells, from \(100,000\) CpG sites on human chromosome 21 (Cruickshanks et al., 2013). After data cleaning and filtering (to remove sites with too low or too high degrees of experimental coverage, that can introduce errors), \(58,361\) sites remained for analysis. Figure 5 shows the predicted demarcation of the data into differentially and non-differentially methylated sites over the entire region, at three cutoff levels of significance, overlaid with a moving average using a window size of 10 sites. It was observed that large values of the test statistic were often found in grouped clusters, which would be biologically meaningful, as loss of methylation in ageing cells is more likely to be highly region-specific, rather than randomly scattered over the genome. The overall rejection rates for the FSEB procedure corresponding to significance levels of \(\alpha=0.0005,0.05,0.02\) and \(0.01\) were found to be \(0.0012\), \(0.0154\), \(0.0092\), and \(0.0064\), respectively. As a comparison to other methods for detecting differential methylation, we also applied site-by-site Figure 5: FSEB test statistics over a segment of methylation data. The panels show the demarcation of loci into differentially methylated (coded as “1”) and non-differentially methylated sites (coded as “0”) with an overlay of a moving average with a window size of 10 CpG sites, at significance level cutoffs of \(0.0005\), \(0.005\), and \(0.05\). Fisher tests and score tests as implemented for bisulphite sequencing data in the R Bioconductor package DMRcaller[22]. For purposes of comparison, we used two significance level cutoffs of 0.05 and 0.0005 for our FSEB test statistic, along with the same cutoffs subject to a Benjamini-Hochberg FDR correction for the other two testing methods. Figure 6 shows the comparison between the calculated site-specific \(p\)-values of the Fisher and score tests with the calculated FSEB test statistic (all on the logarithmic scale) over the entire genomic segment, which indicates a remarkable degree of overlap in the regions of differential methylation. There are, however, significant differences as well, in both the numbers of differential methylation calls and their location. In particular, the FSEB test statistic appeared to have stronger evidence for differential methylation in two regions, one on the left side of the figure, and one towards the centre. The Fisher test, being the most conservative, almost missed this central region (gave a very weak signal), while the score test gave a very high proportion of differential methylation calls compared to both other methods - however, the results from the score test may not be as reliable as many cells contained small numbers of counts which may render the test assumptions invalid. Table 5 gives a summary of the overlap and differences of the results from the different methods at two levels of significance, indicating that with FDR corrections, the Fisher test appears to be the most conservative, the score test the least conservative, and the FSEB procedure in-between the two. We also calculated, for each pair of methods, the proportion of matching calls, defined as the ratio of the number of sites predicted by both methods as either differentially methylated, or non-differentially methylated, to the total number of sites. These proportions indicated a high degree of concordance, especially between FSEB and Fisher tests, with the score test showing the least degree of concordance at both levels of significance. As expected, the degree of concordance decreased with an increase in \(\alpha\), but only slightly so, between the FDR-corrected Fisher test and FSEB. ## 5 Conclusion EB is a powerful and popular paradigm for conducting parametric inference in situations where the DGP can be assumed to possess a hierarchical structure. Over the years, general frameworks for point estimation have been developed for EB, such as via the shrinkage estimators of Serdobolskii (2008) or the various method of moments and likelihood-based methods described in Maritz and Lwin (1989, Sec. 3). Contrastingly, the construction of interval estimators and hypothesis tests for EB parameters rely primarily on bespoke derivations and analysis of the specific models under investigation. In this paper, we have adapted the general universal inference framework for finite sample valid interval estimation and hypothesis testing of Wasserman et al. (2020) to construct a general framework within the EB setting, which we refer to as the FSEB technique. In Section 2, we proved that these \begin{table} \begin{tabular}{l l l l} \hline & \multicolumn{3}{c}{Proportion of rejections at level} \\ & \multicolumn{1}{c}{\(\alpha=0.0005\)} & \multicolumn{1}{c}{\(\alpha=0.05\)} \\ \hline \hline FSEB & 0.0012 & 0.0154 \\ FF & 0.0003 & 0.0097 \\ F & 0.0098 & 0.1102 \\ SF & 0.1333 & 0.1528 \\ S & 0.1457 & 0.2926 \\ \hline \end{tabular} \begin{tabular}{l l l l l l l l l l l} \hline Method & FF & F & SF & S & Method & FF & F & SF & S \\ \hline FSEB & 0.999 & 0.991 & 0.866 & 0.856 & FSEB & 0.992 & 0.905 & 0.860 & 0.723 \\ FF & 0.991 & 0.867 & 0.855 & FF & & 0.900 & 0.857 & 0.717 \\ F & & 0.858 & 0.864 & SF & & & 0.777 & 0.818 \\ SF & & & 0.988 & S & & & & 0.860 \\ \hline \end{tabular} \end{table} Table 5: Comparison of differential methylation calling results between different methods: (i) FSEB (ii) Fisher tests with FDR-adjusted \(p\)-values (FF) (iii) Fisher tests, unadjusted (F) (iv) score tests with FDR-adjusted \(p\)-values (SF) and (v) score tests, unadjusted (S). The upper table gives the proportions of sites called to be differentially expressed under the tests of sizes \(\alpha\in\{0.0005,0.05\}\). The lower table gives the proportion of overlaps between differential methylation calls from each pair of methods at a fixed level \(\alpha\in\{0.0005,0.05\}\). Figure 6: Results of three testing procedures to detect sites of differential methylation over a segment of methylation data. The first two panels show the negative logarithms of the FDR-corrected \(p\)-values for the (i) Fisher test (\(-\log p_{F}\)) and (ii) score test (\(-\log p_{S}\)), while the third panel shows the logarithm of the FSEB test statistic (\(\log T(D_{n})\)). The black curve in each plot corresponds to a moving average with a window size of 10. The points are coloured by differential methylation state call: green if differentially methylated, and red if not, at test size 0.05. FSEB techniques generate valid confidence sets and hypothesis tests of the correct size. In Section 3, we demonstrated via numerical simulations, that the FSEB methods can be used in well-studied synthetic scenarios. There, we highlight that the methods can generate meaningful inference for realistic DGPs. This point is further elaborated in Section 4, where we also showed that our FSEB approach can be usefully applied to draw inference from real world data, in the contexts of insurance risk and the bioinformatics study of DNA methylation. We note that although our framework is general, due to it being Markov inequality-based, it shares the same general criticism that may be laid upon other universal inference methods, which is that the confidence sets and hypothesis tests can often be conservative, in the sense that the nominal confidence level or size is not achieved. The lack of power due to the looseness of Markov's inequality was first mentioned and discussed in Wasserman et al. (2020), where it is also pointed out that, in the universal inference setting, the logarithm of the analogous ratio statistics to (6) have tail probabilities that scale, in \(\alpha\), like those of \(\chi^{2}\) statistics. The conservativeness of universal inference constructions is further discussed in the works of Dunn et al. (2021); Tse and Davison (2022), and Strieder and Drton (2022), where the topic is thoroughly explored via simulations and theoretical results regarding some classes of sufficiently regular problems. We observe this phenomenon in the comparisons in Sections 3.1 (and further expanded in the Supplementary Materials). We also explored subsampling-based tests within the FSEB framework, along the lines proposed by Dunn et al. (2021), which led to very minor increases in power in some cases with small sample sizes without affecting the Type I error. With such an outcome not entirely discernible from sampling error, and with the substantial increase to computational cost, it does not seem worthwhile to employ the subsampling-based approach here. A possible reason for the lack improvement in power observed, despite subsampling, can be attributed to the fact that the sets \(\mathbb{I}\), and their complements, are not exchangeable; since the indices fundamentally define the hypotheses and parameters of interest. However, we note that since the methodology falls within the \(e\)-value framework, it also inherits desirable properties, such as the ability to combine test statistics by averaging (Vovk and Wang, 2021), and the ability to more-powerfully conduct false discovery rate control when tests are arbitrarily dependent (Wang and Ramdas, 2022). Overall, we believe that FSEB techniques can be usefully incorporated into any EB-based inference setting, especially when no other interval estimators or tests are already available, and are a useful addition to the statistical tool set. Although a method that is based on the careful analysis of the particular setting is always preferable in terms of exploiting the problem specific properties in order to generate powerful tests and tight intervals, FSEB methods can always be used in cases where such careful analyses may be mathematically difficult or overly time consuming.
2309.12555
PlanFitting: Tailoring Personalized Exercise Plans with Large Language Models
A personally tailored exercise regimen is crucial to ensuring sufficient physical activities, yet challenging to create as people have complex schedules and considerations and the creation of plans often requires iterations with experts. We present PlanFitting, a conversational AI that assists in personalized exercise planning. Leveraging generative capabilities of large language models, PlanFitting enables users to describe various constraints and queries in natural language, thereby facilitating the creation and refinement of their weekly exercise plan to suit their specific circumstances while staying grounded in foundational principles. Through a user study where participants (N=18) generated a personalized exercise plan using PlanFitting and expert planners (N=3) evaluated these plans, we identified the potential of PlanFitting in generating personalized, actionable, and evidence-based exercise plans. We discuss future design opportunities for AI assistants in creating plans that better comply with exercise principles and accommodate personal constraints.
Donghoon Shin, Gary Hsieh, Young-Ho Kim
2023-09-22T00:55:52Z
http://arxiv.org/abs/2309.12555v1
# PlanFitting: Tailoring Personalized Exercise Plans with Large Language Models ###### Abstract A personally tailored exercise regimen is crucial to ensuring sufficient physical activities, yet challenging to create as people have complex schedules and considerations and the creation of plans often requires iterations with experts. We present PlanFitting, a conversational AI that assists in personalized exercise planning. Leveraging generative capabilities of large language models, PlanFitting enables users to describe various constraints and queries in natural language, thereby facilitating the creation and refinement of their weekly exercise plan to suit their specific circumstances while staying grounded in foundational principles. Through a user study where participants (\(N=18\)) generated a personalized exercise plan using PlanFitting and expert planners (\(N=3\)) evaluated these plans, we identified the potential of PlanFitting in generating personalized, actionable, and evidence-based exercise plans. We discuss future design opportunities for AI assistants in creating plans that better comply with exercise principles and accommodate personal constraints. **Human-centered computing \(\rightarrow\) Natural language interfaces; Empirical studies in HCI.** A personally tailored exercise regimen is crucial to ensuring sufficient physical activities, yet challenging to create as people have complex schedules and considerations and the creation of plans often requires iterations with experts. We present PlanFitting, a conversational AI that assists in personalized exercise planning. Leveraging generative capabilities of large language models, PlanFitting enables users to describe various constraints and queries in natural language, thereby facilitating the creation and refinement of their weekly exercise plan to suit their specific circumstances while staying grounded in foundational principles. Through a user study where participants (\(N=18\)) generated a personalized exercise plan using PlanFitting and expert planners (\(N=3\)) evaluated these plans, we identified the potential of PlanFitting in generating personalized, actionable, and evidence-based exercise plans. We discuss future design opportunities for AI assistants in creating plans that better comply with exercise principles and accommodate personal constraints. ## 1. Introduction Engaging in regular physical activity is a core constituent of maintaining a healthy lifestyle, significantly impacting the overall well-being of individuals. Not only do such activities help maintain optimal physical health, but physical activities are also known to improve mental health (Krishnan et al., 2017) and reduce the risk of various chronic diseases (Krishnan et al., 2017). Despite the widespread consensus on the importance of consistent exercise, a large portion of the global population is known to fall short of meeting the recommended physical activity guidelines. For example, Tucker _et al._(Tucker et al., 2017) revealed that less than 10% of individuals meet the amount of physical activities as recommended by the U.S. national guideline, leading to concerns at both individual and public health levels. To better undertake daily physical exercises, it is beneficial for individuals to adhere to personalized exercise regimes (Krishnan et al., 2017; Krishnan et al., 2017). However, creating individualized exercise plans is often challenging (Krishnan et al., 2017), and individuals often struggle to formulate plans that align with their unique lifestyle constraints (Krishnan et al., 2017). A potential solution is to involve professional exercise planners (_e.g._, personal trainers, medical practitioners) and seek their assistance in tailoring the individual exercise plans. Yet, this approach comes with its own setbacks, such as high costs, accessibility issues, and lack of customization due to broad client bases (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). Previous studies have attempted to bridge this gap by employing technology-supported self-reflection (Krishnan et al., 2017; Krishnan et al., 2017) and leveraging peer groups (Beng et al., 2017) or crowdworkers (Beng et al., 2017) to formulate customized exercise plans. However, while these human-computational approaches can bypass the needs of experts and result in plans comparable to those generated by experts (Beng et al., 2017; Beng et al., 2017), they still require time and effort by people. They also depend on having the preferences, goals, and schedule of the client well specified upfront. However, without guidance, the information provided by the clients may be incomplete. Further, even the best-designed plans need adaptations to fit with the changing contexts of the clients. This can be challenging to achieve if peers, planners, or crowds are expected to be available 24-7 to perform this type of adaptation as needed, which becomes even more critical when iterating on the exercise routines for the long-term exercise. To address these issues, in this study, we focus on the potential of large language models (LLMs) for addressing these challenges within the realm of exercise planning. More specifically, we highlight the expressivity and comprehensibility of LLMs in steering the conversation to gather highly personalized constraints required to plan for exercise regimens, which we expected to streamline the process of personalizing exercise plans. To understand the everyday practice and challenges of exercise planning between experts and clients, we conducted preliminary interviews with professional exercise planners (\(N=5\)) and lay individuals (client; \(N=8\)) who have experience in setting up personalized exercise plans with planners. From the interview, we characterized the procedure of formulating tailored exercise plans between the experts and clients, which consists of goal setting, collecting availabilities and expected obstacles, prescribing plans, and iteration. In addition, we found that experts often struggle to integrate exercise prescriptions within the irregular schedule of clients with limited consideration of clients' input during the iterative process of revising their exercise plans. Informed by these insights, we designed and developed PlanFitting (Figure 1), a web interface that helps users create and iterate on their personalized exercise plans through a conversation driven by an LLM. Specifically, PlanFitting system engages users in an interactive dialogue, collecting necessary information on their constraints (_i.e._, exercise goals, availability, and potential obstacles that may inhibit adherence to the plan). PlanFitting then recommends exercises and provides the exercise plan in the form of _implementation intention_(Krishnan et al., 2017) (_i.e._. IF-THEN rules), which are succinct and generalized schedule format that associates the intentions with the user's specific events without stringently adhering to time-based schedules (Kumar et al., 2017). To understand how people interact with PlanFitting and the quality of AI-crafted plans, we conducted an exploratory study with 18 people motivated to plan their exercise. During the study, participants formulated a weekly plan and refined it further with the assistance of PlanFitting. In this process, PlanFitting successfully assisted users in articulating highly personalized constraints, while accommodating their own unique chatting style. Also, participants perceived PlanFitting system to be usable, and found the generated plans to be personalized and actionable. In addition, expert planners (\(N=3\)) who evaluated the generated outputs based on the exercise principle (_i.e._, FITT (Kumar et al., 2017)) evaluated the _frequency_, _intensity_, and _time_ composition of the generated plans to be above average, yet revealed the opportunities for enhancing the combination of exercise _types_. From the qualitative feedback from the participants and planners, we also discuss future design implications for further enhancing the qualities of the plans generated by an AI-infused exercise planner. Our study contributes: 1. A formative study (\(N=13\)) revealing the process and challenges of exercise planning between clients and expert planners 2. Design and development of PlanFitting, a novel exercise planning system driven by LLMs that helps generate a personalized exercise plan and iterate on it 3. Empirical results from an exploratory user study (\(N=18\)) demonstrating how people leverage PlanFitting to craft their exercise plans and how professional expert planners view the quality of the output plans ## 2. Related Work ### Personalized and Actionable Exercise Plans Maintaining physical activities is an integral part of a healthy lifestyle, but the majority of individuals find it challenging to incorporate adequate exercise into their daily routines (Kumar et al., 2017; Kumar et al., 2017). To resolve this, setting up and maintaining exercise plans can significantly aid in motivating individuals to maintain regular physical activity (Kumar et al., 2017; Kumar et al., 2017). Specifically, several empirically grounded guidelines have been proposed to facilitate this process. For example, the American College of Sports Medicine (ACSM) (Bednar et al., 2016) has developed universally recognized guidelines that health professionals commonly use to design effective exercise regimens (Kumar et al., 2017). Such guidelines offer broad advice and guidance on the planning process of exercise regimen (_e.g._, recommending at least 150 minutes of moderate-intensity exercise per week), as well as the definition of exercise-related terminologies. On top of adhering to such guidelines, prior studies have explored methods to generate exercise plans that foster successful adoption and maintenance of exercise regimens. Personalization of exercise plans is generally emphasized (Kumar et al., 2017), which takes into account various personal factors such as individual preferences, constraints, and everyday conditions that may be crucial to ensuring the individual's chance of adapting to the plans. However, tailoring such unique constraints of individuals demands exercise professionals or health experts. Another line of research in the fields of behavioral psychology and sports medicine has explored the effective concept and format of exercise prescriptions. One well-known approach is the _implementation intention_, which comprises a specific plan linking a particular circumstance to a corresponding action (Kumar et al., 2017; Kumar et al., 2017). Implementation intentions are formatted as IF-THEN rules and often practically combined or interchanged with action planning by including the environmental cues [(29)]. For example, one can set up an exercise plan in the form of implementation intention like "If _I come back home in the evening_, THEN _I will jog for 30 minutes_." By effectively turning intentions into actionable steps, implementation intentions have demonstrated success in various behavior change contexts, such as managing a healthy diet [(1; 3; 28; 45)], reducing bedtime procrastination [(47)], and aiding in smoking cessation [(16; 35)]. Similarly, in the context of exercise, adopting implementation intention to the exercise plans has been shown effective in promoting physical activities [(33)]. Building on this body of research, our paper introduces a technological approach for incorporating individual constraints to generate customized exercise plans in an implementation intention format. Specifically, aligning with the earlier research emphasizing the significance of specificity and context in generating implementation intentions [(19)], we believe that prescribing implementation intentions with the personalized constraints sourced from the individuals would help create personalized and actionable exercise plans. Furthermore, in this process, we posit that technology-mediated interactions can assist individuals in articulating their personal constraints and incorporating such constraints into implementation intentions, thus enabling the personalized planning of implementation intention without expert assistance. ### Technology-mediated Exercise Planning Advancements in computing technology have offered great potential in leveraging digital tools to assist individuals during the process of maintaining their daily physical activity and status [(30)], such as encouraging physical activities [(17; 32)] and tracking physical activities [(6; 18)]. Another application area is exercise planning, where several studies within the HCI community have explored technology-mediated interactions in the context of planning for the individual's exercise regimen to ensure consistent physical activities. For example, previous studies explored technologies that help maintain individuals' exercise plans on their own [(31; 54)]. Another line of research goes beyond individuals by leveraging other people, such as peers [(5)] or crowd workers to generate custom exercise plans to facilitate health behavior change [(4; 5)]. This line of research showcases the potential of leveraging technology in constructing personalized exercise regimens. Nevertheless, prior technology-mediated approaches often require substantial human involvement, such as self, friends, or crowds from the goal-setting to the actual formulation of the plan. Thus, considering the cost and availability of such human labor associated with these approaches, the sustainability and scalability of these approaches might be limited. Instead, in this study, we seek to mitigate such issues and aim to build more sustainable systems that support iterable planning with the help of large language models (LLMs) in assisting users in planning and implementing a feasible exercise regimen. ### Integration of Large Language Models in User Interfaces Recent advances in large language models (LLMs) offer new opportunities for equipping traditional interfaces with intelligence. Trained on a vast amount of textual data with human iteration, LLMs (_e.g._, GPT [(9; 41)], LLaMA [(37)], HyperCLOVA X [(14)], PaLM [(12)]) demonstrate remarkable proficiency and pose great potential in various natural language processing (NLP) tasks, ranging from text summarization (_e.g._, [(9; 53)]) to dialogue generation (_e.g._, [(52; 55)]). Incorporated into user interfaces, LLMs generally have two broad roles. First, LLMs augment core system interactions with NLP tasks, such as text analysis (_e.g._, VISAR [(56)]), story generation (_e.g._, TaleBrush [(13)]), inspiration generation (_e.g._, Sparks [(24)]), and data labeling (_e.g._, PaTAT [(23)]). Most of these tasks are challenging to tackle with traditional NLP models or yield comparable quality with LLMs with much smaller training samples [(9)]. Second, LLMs provide a conversational interaction component, as they can generate responses considering both the task-specific contextual information and the dialogue. Many LLM vendors provide generalized chatbot services (_e.g._, ChatGPT (Yang et al., 2017), Bard (Bard, 2017)) to demonstrate their LLM capability, and HCI research prototypes leverage LLMs to perform more specific tasks through conversational interaction (_e.g._, health data collection chatbot (Wang et al., 2018), recommendation (Wang et al., 2018; Wang et al., 2018)). For example, Wang _et al._ proposed a conversational interface that supports users to interact with mobile UI components through conversation (Wang et al., 2018). In this work, we leverage LLMs for both purposes: (1) understanding exercise context and constraints users described in natural language and generating exercise plans from them, and (2) carrying on a free-form conversation to maximize flexibility and expressivity of various exercise constraints. ## 3. Formative Study To understand the current practice of personalized exercise planning and the challenges that arise during those processes, we conducted a formative interview study with exercise planners (\(N=8\)) and clients (\(N=5\)). The study protocol was reviewed and approved by the institutional review board. _Exercise planners_. From a corporate clinic and our internal network, we recruited five experts (FP-1 - FP-5; three females and two males) who are experienced in setting up personalized exercise plans for clients. Of the five experts, three were physical therapists, another was a physiatrist, and the other was a kinesiologist. The exercise planners aged between 26 and 42 (\(M=34.6\)) and had an average of 9.8 years of relevant experience (\(SD=4.5\)). _Clients_. We recruited eight individuals (FC-1 - FC-8; 6 females and 2 males) by advertising our study on a local community platform and internal bulletin boards in a giant enterprise in South Korea. Our inclusion criteria were people to have experience setting up their personalized exercise plans under the advice of exercise experts (_e.g._, clinicians, physical therapists, personal trainers, etc.). Clients were aged between 26 and 45 (\(M=35\)). Three participants responded that they have/had engaged in exercise under the personalized exercise plans for less than three months, three participants for 3 to 6 months, and the other two participants for more than six months. We invited each participant to 1-hour semi-structured interview session. Then, we asked each exercise planner to primarily share insights into (i) their planning procedures and (ii) the obstacles they encountered while setting up plans for/with the clients. Likewise, clients were prompted to elaborate on (i) their experiences and process of planning exercises with exercise planners and (ii) any challenges they faced during the planning. The interviews were audio-recorded and transcribed. We compensated 50,000 KRW (approximately 35 USD) and 30,000 KRW (approximately 22 USD) for each planner and client, respectively. We analyzed the interview transcripts using thematic analysis (Brand, 2017). Specifically, the process was done in a bottom-up approach, where the authors first got familiarized with the raw responses, identified emerging themes from the responses, and compared the themes until the authors reached an agreement. As a result, we could derive the final themes as described in the following section. ### Practice of Personalized Exercise Planning From the interview, we identified that planners primarily follow well-known exercise guidelines, such as the ACSM guidebook, which emphasizes engaging in a minimum of 150 minutes of moderate-intensity exercise per week. However, these guidelines do not provide specific guidance on the personalization for varying lifestyles: "_Actually, even if you take a look at those exercise planning guidebooks, there won't be anything more detailed than [showing a page that defined some case studies of individuals] (...) that's the end of 'evidence-based' personalization._" (FP-4) As a result, planners use them as a flexible framework rather than strict rules, making tailored modifications while adhering to such high-level principles: "_I'm just following a broad guide and customizing a lot in that scope. Shouldn't the details within it be personalized?_" (FP-2) In addition, we were able to surface common information that the planners collect to personalize the plans for each client, such as the personal goal of the exercise, personal obstacles, and feedback (during the follow-up sessions), delivered through either verbal communication or a combination of a survey form and an oral interview with the client: _Understanding client's main goals for exercise._ Every planner responded that they start by setting up the goal of the exercise, emphasizing the influence of identifying the purpose and setting clear goals for exercise in the motivation of clients. They are reported to engage in conversations with clients to find out their own necessity and benefits of exercise to enhance motivation, particularly for newcomers: _"For managing exercise plans, it's crucial to first motivate by discussing goals first rather than just telling them to do it."_ (FP-1) _Surfacing available amount of times for exercise and potential obstacles._ Based on the goals of the exercises that the planners identified, planners ask clients questions to surface how much time they would be available for exercise: _"For those who don't have set regular office hours or for nurses working 3/4 shifts, I ask and look at how much personal time the client can exercise on a regular basis."_ (FP-3) Also, planners ask what factors may potentially make it challenging for them to exercise during those times (_e.g._, physical constraints, parenting), in order to make the exercise planning more viable and realistic: _"I told my planner when my menstrual cycle comes (...) And (as a developer in a company) I told them whenever there is a schedule for releasing a new version that my condition won't be good for about three following days."_ (FC-5) _Prescribing plans._ Based on collected exercise goals, availabilities, and obstacles, planners create a personalized exercise plan for clients. While planners are willing to provide detailed plans down to specific times, the limited availability of planners makes this approach impractical: _"I can't do detailed time planning (...) It seems inconsistent (with my current availability) to generate highly detailed plans, like scheduling at a certain time."_ (FP-4) As a result, planners and clients typically receive a weekly exercise plan with recommended days and hours, exercise types, allowing clients to exercise at their own convenience to meet their requirements: _"They (planners) didn't ask me to exercise at a specific time; they just told me to do a certain amount of some exercises during the week."_ (FC-4) _Revisiting regularly (e.g., weekly, bi-weekly) to share feedback and iterate on the plan._ Emphasizing the importance of viewing the exercise planning as a feedback-driven iteration, rather than a one-time interaction, planners and clients regularly meet (_e.g._, weekly, bi-weekly) to check if the exercises need to be modified: _"There are types of exercise that go in and out (...) After solving the urgent problem, if I wanna get a nicer body shape, other exercises may go in or out."_ (FC-1) Gathering newly emerged feedback and constraints, planners make adjustments to exercise types and/or duration: _"Clients first give it a try, and I gather feedback when they come back in the following week based on their experience trying the exercise plan. If they think it won't work for any reason, I ask them to let me know, and we can start the revisions from there, just like forming and iterating on a hypothesis."_ (FP-2) ### Challenges of Personalized Exercise Planning In addition to understanding the practice of establishing personalized exercise plans between planners and clients, we could also identify challenges that they frequently encounter: #### 3.2.1. Difficulty of contextualizing the exercise within their own schedule. After being prescribed the weekly exercise counts, clients are required to incorporate these exercises into their own schedules by themselves. However, during interviews, clients reported that such an 'autonomous' process, without more specific guidance on when to exercise, makes it difficult for clients to cope with unexpected variables (_e.g._, appointments, work schedules). Consequently, adhering to the plans becomes highly reliant on their own motivation, making clients prone to becoming '_I think it's mostly about getting the number of exercises and then performing them on my own, so my own willingness is the most important factor (...) If I suddenly have to work at night, I just end up not doing exercise that day because there's no one pushing me to do and I feel like I can just do it later._" (FC-2) Particularly, these issues are reported to worsen over time. Specifically, as time passes, various triggers that may lower motivation emerge, such as moments of stagnation in their exercise progress. In such situations, this complacency is exacerbated, leading to a tendency to continuously postpone exercise and eventually skip it: _"If you aim for a weight loss, there are times when you reach a point where you're not losing any more weight (...) then my motivation decreased a bit, so sometimes I took a day or two off, rested a bit more, or skipped it in various other ways. So, I'm skipping more than I did in the beginning."_ (FC-6) 2.2. Limited availability of planners affecting the iteration process and adaptation to fluctuating schedules Clients expressed struggles around accommodating sudden, unexpected time changes induced by their irregular lifestyles and work schedules. Often, reaching out to planners for real-time schedule adjustments isn't a practical possibility, as planners too have other clients and personal commitments. As a result, clients often expressed a desire for more flexibility in schedule planning: _"I have been meeting with my planner every week (...) it was sad to see whenever I have a schedule change and need an alternative, I couldn't ask about the plan iterations right away for the other days."_ (FC-6) Such an issue is reported to ironically make the whole exercise schedules of clients even more dependent on the planners' decision-making process. Consequently, if the weekly meeting is canceled as either the client or the planner is unable to attend the weekly meeting, it often results in a disruption of the exercise for the entire week: _"There were instances when the trainers were not available due to their other commitments (...) the whole exercise for the following week messed up."_ (FC-5) #### 3.2.3. Limited adaptability of planners in engaging with and incorporating client feedback Even when meeting to discuss the exercise regimen, clients often struggle to have their concerns and input incorporated into the plans. Indeed, clients shared several cases when they felt their opinions were dismissed, or they had to spend a considerable amount of time advocating for their points to be considered: _"You know, I can't see the planner every day and have to meet them face to face, and my daily conditions are different every day (...) but I always had to follow the same fixed program. I once went on a trip to [an attraction], but even when I explained this situation in advance my planner just asked me to keep exercising while traveling. It's too inflexible and feels too coercive."_ (FC-2) The prescribed regimen's inability to cater to unique constraints such as travel schedules could discourage clients from following through. In the worst case, disagreements stemming from this lack of flexibility have sometimes even led clients to discontinue their programs entirely: _"I and planners had disagreements on the types of exercise, and I discontinued planning for the exercise with my personal trainer from that moment."_ (FC-5) ## 4. Planfitting Our formative study surfaced the overall planning process of personalized exercise plans, as well as the difficulties that can emerge during such processes. Informed by these insights, we designed and implemented PlanFitting, a conversational interface aimed to help individuals set up their personalized exercise plan and iterate on it. Focusing on the expressivity and comprehensibility that LLMs offer, we designed our system using LLMs to foster engaging interaction, while adapting to the unique constraints of users and allowing them to iterate their plans. Informed by the procedure that our expert interviewees follow, we organized the interaction process of PlanFitting into the following three stages: (1) collecting exercise-related user constraints (_i.e._, goals, availabilities, obstacles), (2) personalized exercise recommendation, and (3) generating personalized weekly plan. In the following, we describe the design of PlanFitting's dialogue system with the underlying LLM pipeline, user interface components, and implementation details. ### Interaction and Interface Design PlanFitting was designed as a web application (Figure 1) consisting of two primary elements: a chat panel on the right panel (Figure 1-Chat panel) and a dashboard (Figure 1-Dashboard). The user mainly interacts with PlanFitting on the chat panel via natural language, and the dashboard provides an overview of the current status of the conversation, summarizing **exercise-related constraints** (Figure 1-A), **recommended exercise list from the system** (Figure 1-B), and **the finalized exercise plans** (Figure 1-C). Every information on the dashboard is automatically updated on every conversational turn so that the user can stay on track. #### 4.1.1. Collecting exercise-related user constraints The user starts planning by entering their name into the system (See Figure 1-B). The chatbot takes the lead by actively collecting essential information required for crafting an exercise plan. Specifically, the chatbot proactively asks questions aimed at gathering the personal constraints of the user, listed in the following: 1. **Exercise goals**: The user's goal of exercise, either in a format of intended purpose or the specific muscle group they aim to target 2. **Availability**: The user's available times for the exercise, either in the exact time format (_e.g._, '_7 pm_') or in a descriptive form (_e.g._, '_after work_') 3. **Potential obstacles**: Any expected obstacles they anticipate that could potentially impede their exercise routine (_e.g._, '_chance of working until late night_') #### 4.1.2. Exercise type recommendation After the user has shared all the necessary constraints, the chatbot proceeds to offer personalized exercise recommendations. The system provides up to five exercise options based on the curated list of exercises that contains 75 common exercises that the exercise experts summarized, drawing from the prior work (Bartos et al., 2018). The list contained the name of the exercise, as well as its well-known alternative name, intensity, laypeople description (_e.g._, definition, how to perform), and the muscles involved (See Appendix B for an example). The list is stored and loaded in CSV format, which can be easily expanded by altering with external exercise databases in the future. The recommended exercises are displayed on the dashboard with a brief description, which summarizes the definition of the exercise and, if available, the reasoning behind the recommendation (Figure 1-B). For users seeking more comprehensive information about a particular exercise, a'more' button is provided where the users may click to access additional details of the exercise. Then, users are asked to select their desired exercises by either clicking on them on the dashboard or typing the name of the exercise(s) into the chat screen in a free form; if they wish to explore additional exercise options, they are also allowed to simply ask a request to the chatbot, which will result in refreshing the recommendations. #### 4.1.3. Generating a personalized exercise plan After the user selects the exercise types they want to include, PlanFitting generates an exercise plan, which is displayed on the dashboard (Figure 1-C). _Format of the plan._ From the interview study, we found that prescribing exercise broadly (_e.g._, specifying a weekly amount) could burden users with scheduling and possibly lower motivation. Thus, to contextualize the exercise plan within the user's availabilities, PlanFitting offers each exercise plan in an _implementation intention_(Krishna et al., 2017) format, a grounded strategy rooted in behavioral psychology that aligns the user's intentions with specific events, hence offering a structured format in well-established IF-THEN statements. (_i.e._, "IF _[availability (time or situation)]_, THEN_ do [exercise type] for [amount] at [intensity]_") In addition, the system offers a _coping plan_ for each plan, which equips users with an alternative plan to follow when the original plan cannot be executed due to the obstacles that may happen. (_i.e._, "IF _[obstacle]_, THEN _[alternative]_") _Grounding a plan to global exercise guidelines._ To earn rigor for the generated plans, the system applies a set of guidelines based on the recommendations offered by ACSM (Bordes et al., 2017): First, the system aims to allocate exercises totaling more than 150 minutes per week. To comply with the ACSM guidelines, the system also accounts for vigorous-intensity exercises by doubling their allocated time when calculating the total exercise duration. In addition, to balance between cardio and strength training, if the user had initially chosen exercises of either type only, the system asks users to consider incorporating both types of exercise. Lastly, the system tries to put a minimum of a one-day rest period between exercise sessions, if possible, to prevent any potential negative effects of consecutive days of exercising the same or adjacent muscle group. Following the initial planning phase, when the user returns to the system, PlanFitting inquires about their satisfaction with the existing plan. If the user is satisfied with their plan, the system asks if they are willing to extend the allotted time to adhere to the progression principle (_i.e._, gradually increase the engagement in exercise) of the exercise (Bordes et al., 2017). However, if the user indicates dissatisfaction, the system solicits feedback on the specific aspects that require revision, facilitating an iterative approach to refining the plan. In this manner, the system approaches exercise planning as an ongoing, open-ended process, conducive to continuous improvement based on user input. In summary, the conversational flow within our chatbot interface is structured to facilitate user engagement, provide exercise recommendations, and enable the creation of personalized exercise plans that adhere to recognized exercise guidelines while allowing for the iteration of generated plans. ### Conversational Pipeline Design Figure 2 illustrates the pipeline of PlanFitting's dialogue system. The pipeline consists of two LLM-driven components: a **response generator** (Figure 2-C) and the **dialogue analyzer** (Figure 2-C). The response generator generates the AI message based on a global instruction (Figure 2-C) and the current dialogue (Figure 2-C). The user's constraints and generated plans are maintained in a data structure called "plan summary" (Figure 2-C). The plan summary maintains the current status as well as provides information to be displayed on the UI dashboard. _Plan summary update._ Inspired by memory management techniques from the NLP discipline (_e.g._, (Bordes et al., 2017)), we designed the dialogue analyzer to generate edit commands that modify the previous state of the plan summary. The dialogue analyzer receives the latest turn pair (_i.e._, the AI message and the user's response; Figure 2-C) and the plan summary of the previous cycle (Figure 2-C) as inputs and generates a list of edit commands (_e.g._, add, update, and remove; Figure 2-C) that reflect the changes caused by the new messages. Then, the system applies the edit commands to the plan summary and generates a new plan summary (Figure 2-C). The system updates the plan summary every time before the system generates a response. The dialogue analyzer runs with the following input prompt: - Analyze the input dialogue and return an array of JSON objects each of which denotes an update for this summary object. - The user may mention multiple entities, such as goals and obstacles, or corrections to previous entities. - You are allowed to use the following set of methods for update: For goal, availability, obstacle, recommended_exercise, and implementation_intention: { target: "goal" | "availability" | "obstacle" | "recommended_exercise" | "implementation_- intention", method: "add" | "update" | "remove" params: { // for update id: string, update: {} // will be overwritten to the corresponding element. } | { // for addition entity: {} // a new entity without ID; ID will be assigned by the system. Only for implementation_intention, // assign a random ID in case you use the "parent_ids" property. } | (// for removal Figure 2: Illustration of how the PlanFitting computes and returns the next dialogue of the chatbot and updates the dashboard, based on the current dialogues id: string } - If there is nothing to be updated, return []. _Conversation._ Once the plan summary is updated, the system formulates an instruction prompt (Figure 2-(E)) to be fed to the response generator. The instruction includes the task descriptions (See Appendix A) on how to carry on the conversation (Figure 2-(F), Task descriptions), and the current plan summary to inform the model with which constraints are missing, thus what needs to be asked in the following dialogues (Figure 2-(F), Planning status). When defining tasks for exercise type recommendation and generating plans, we defined rules to append XML data to the message so the system can parse the information and use it for the user interface. For example, we defined the message rules for creating the plan as follows (See Appendix A.4 for the detailed instructions): Using the exercise types that the user selected, plan for and return the user's exercise plan in the implementation intention format... Each implementation intention rule should be accompanied by corresponding coping plans that can be plan B when the user fails to adhere to meet the main rules. It should assume the failure of each of the user's availabilities due to the obstacles the user mentioned... Each exercise/coping plan should be described in an IF-THEN format along with AMOUNT inside... (Example: <If>Monday after work</If> <Then><Exercise>Running</Exercise> <Amount>@@ minutes - moderate intensity</Amount></Then> <If>After running</If> <Then><Exercise>Pilates</Exercise> <Amount>30 minutes - vigorous intensity</Amount></Then> <If>Too sleepy after work on Monday</If> <Then> <CopingPlan>Do the same exercises on Tuesday</Coping Plan></Then>) To compute the exercise recommendations, we employed a combination of function calling [40] and cosine similarity techniques. First, we embedded the title and description of each exercise from our prepared list and saved them as an embedding. Once the user finishes providing their constraints and the function calling detects if the exercise recommendation is needed, a function that takes the goal and obstacles as input and returns the recommended exercises is triggered. Specifically, the function is programmed to embed the parameter as an embedding, which is then compared to each embedding of each exercise from the list to calculate the cosine similarity and return the types of exercise that have the top 5 cosine similarities to the user in a JSON format. Then, similar to how the system does for generating the exercise plan, PlanFitting formats the output JSON to the XML format through Regex postprocessing, which is then populated in the dashboard (Figure 2-(E)). ### Implementation PlanFitting system consists of two components: (i) a web interface and (ii) a backend server, where the user interacts with the web interface whose chat is computed to return the response from the backend server. The web interface was built as a web-based application, which was built and deployed using a Javascript-based framework (SvelteKit). For the backend, we employed a Python server that takes the user's name and chat message as inputs and generates the subsequent message along with detected metadata, such as exercise goals, availability, obstacles, and recommended/selected exercise types. This server is connected to the server via the API interface. We used OpenAI chat completion API (Zhu et al., 2019) to implement the conversational pipeline components. For each API call, we ran GPT with the following parameters: model = gpt-4-0613, temperature = 0.5, top_p = 1, frequency_penalty = 0, and presence_penalty = 0. ## 5. Exploratory user study To gain a comprehensive understanding of the use of the PlanFitting system, we conducted an exploratory user study with 18 individuals. Specifically, participants interacted with PlanFitting, setting up their exercise plans with their own goals and constraints. To assess the rigor of the crafted plans, we evaluated the plans that participants produced with three physical therapists. The study protocol was approved by the institutional review board. ### Participants We advertised our study to a local online community platform and the company's bulletin board, where we required participants to be individuals who are (i) aged over 19, (ii) motivated to do regular exercise, (iii) and not currently doing exercise under the specific plan advised by planners, (iv) who can participate in an in-person lab study. As a result, we recruited 18 participants (P1 - P18; 11 females and 7 males) who were aged between 19 and 54 (\(M=33.2\)). Of all, six were full-time/part-time employees by the time they were participating in our study, six were college students, one was a retiree, and five responded that they were either stay-at-home parents or unemployed. We compensated 50,000 KRW (approximately 35 USD) as a gift card for their participation. ### Study Procedure & Tasks To comprehensively explore how the clients create and refine their exercise plans using the PlanFitting system, as well as understand their perceptions toward these processes, we organized the user study with clients into the following phases: (i) initial exercise planning, (ii) plan iteration, and (iii) debriefing. Throughout the planning, we employed a think-aloud approach to better surface their lively experience interacting with the system. _Initial exercise planning._ The initial phase of the study involved clients being guided through the process of configuring their exercise plans with the assistance of the PlanFitting system. Within this phase, clients were asked to interact with the system to articulate and input their specific exercise goals, availabilities, and any potential obstacles that may arise when they exercise. Here, clients were asked to interact with the system by mainly following the guidance provided by the chatbot. At the same time, they were also asked to freely ask questions to the chatbot and iterate on their plans until they were satisfied. As such, we aimed to mirror the process of tailoring exercise plans to individual constraints based on the overall guidance of the PlanFitting system. _Plan iteration._ After setting up their weekly exercise plan initially, participants were instructed to move on to the second phase. In this phase, they were asked to imagine themselves in the upcoming week, having completed their exercises successfully, and to also consider scenarios that may have hindered their progress in the previous weeks. To assist them in this process, we presented example scenarios for their reference (_e.g., "I intended to swim last week, but I'd rather avoid such location-dependent activities due to the hassle of making reservations"_). In cases where they had nothing to change, we suggested they engage with the system as if they were completely satisfied with their plan. Once they had formulated their scenarios, participants were encouraged to use the system to review and fine-tune their exercise plans over a designated time frame. They were asked to freely describe adjustments to the system that they would want to make, such as exercise availabilities, types, and amounts. _Debriefing._ During the final debriefing phase, we conducted a survey and a semi-structured interview with each client to gather their feedback, insights, and reflections on both the planning process and their interactions with PlanFitting system. The survey was designed to assess their subjective evaluation of how personalized and actionable the generated plan is, as well as their degree of acceptance and adoption of the PlanFitting system. For evaluating the level of personalization and actionability, we asked _follow_ and _fit_ for measuring **personalization**, and _specificity_, _encouragement_, _vocabulary_, and _accuracy_ for measuring **actionability** on a 7-point Likert scale, following the rubric that Agapie _et al._[(4)] formulated to evaluate the quality of the plan based on ACSM principles. For evaluating the acceptance and adoption of our system, we used the Technology Acceptance Model (TAM) scale (Sandel, 2018). Then, they were guided to the interview session, where we inquired about the overall usability of our system, the quality of the generated plans, their feedback on the iteration process, the obstacles they faced while interacting with the PlanFitting system, and the potential future enhancements they were envisioning. The overall procedure took approximately 1 hour for each client. _Expert evaluation._ To assess the appropriateness and quality of the generated plans from the perspective of experts, we recruited three expert planners (E1 - E3; one male and two females) from a corporate clinic. The experts were nationally licensed physical therapists aged between 28 and 39 (\(M=31.3\)), and had an average of 7 years in professional exercise planning (\(SD=4.6\)). We asked the experts to evaluate plans from the initial exercise planning phase both quantitatively and qualitatively, where the expert was randomly assigned six plans each and asked to evaluate them. Specifically, the experts holistically reviewed the plans as well as the constraints and conversation history, with private information masked, presented on the PlanFitting interface. For each plan, they filled out our evaluation form that consists of 7-point Likert scales of four items from the FITT principles (Fill et al., 2018)--_frequency_ (_i.e._, how often the exercises in the plan are), _intensity_ (_i.e._, how intense the exercises consisting of the plan are), _time_ (_i.e._, duration of the exercises consisting of the plan), and _type_ (_i.e._, composition of the types of exercise consisting of the plan)--a recognized and empirically validated framework consisting of salient factors in exercise plan design and assessment (1: highly unsatisfactory, 7: highly satisfactory). For each item, we also included an open-ended field asking for the rationales for the assessment. ### Analysis Similar to what we did for our formative study, we coded (i) participants' responses and (ii) qualitative responses from expert evaluations using a thematic analysis, where the authors identified the initial themes and teamed to discuss and compare the themes until they reached a consensus. To construct the sequence of actions that the user performed using PlanFitting, we first associated each action within the interaction log with the user-specified constraints (_i.e._, _goal, availability_, _obstacles_, and _exercise type_). Then, we categorized each action as either add, edit, or remove, indicating whether it aimed to introduce a new entity, modify an existing one, or delete it. In addition to these basic actions, we defined and characterized _amount_ (asking to change the amount of exercise), _question_ (asking questions to the chatbot), and _querying exercise list_ (asking the chatbot for exercise recommendations based on the user-specified constraint). These actions were then structured into a sequence for each individual client. ## 6. Results In this section, we report the results of our study in three parts: (1) overall usage and interaction patterns, (2) exercise plans, and (3) subjective feedback. ### Collected Constraints and Interaction Patterns Table 1 summarizes the goals, availabilities, and potential obstacles that participants provide to PlanFitting during the initial planning phase. Participants provided a wide range of constraints related to their lifestyle (See Table 1). Throughout these interactions, participants shared 2.28 exercise goals (\(SD=1.04\)), 1.72 availabilities (\(SD=0.80\)), and anticipated 1.33 potential obstacles (\(SD=0.88\)) on average. Some common goals that the participants described include weight loss (\(N=11\)), recovering daily energy (\(N=8\)), and maintaining/improving muscular strength (\(N=5\)). For availability, only 5 participants described their availability in the exact time format (_e.g._, after 7 pm); the others described all of their availabilities freely in a descriptive form (_e.g._, after school). Lastly, participants described their potential obstacles in a highly personalized expression by drawing connections to various aspects of their own lifestyles and circumstances, such as heavy drinking (P2, P3), kid's schedule (P7, P9), and party (P15). Figure 3. Sequence of the edits that the participants made to tailor their exercise plan during our exploratory study Figure 3 illustrates how these constraints were provided and modified for each participant across the two study phases. In the early stage, participants generally followed the ordering of information that PlanFitting was programmed to collect. As the interaction progressed, they iterated their constraints in individual ways through flexible conversational interaction. Ten participants (56%) also asked questions (Figure 3; gray rectangles) to PlanFitting about exercise and other related topics. \begin{table} \begin{tabular}{|l|l|l|l|} \hline **ID** & **Goal** & **Availability** & **Potential obstacles** \\ \hline **P1** & * Weight loss & * Weekdays at night after 6 pm & * Do not wanna do exercises that heavily affect knees \\ & * Recover energy & * Weekends in the morning & * Company dinner or other appointments \\ \hline **P2** & * Maintain muscular strength & * After waking up & * Light exercise at night \\ & * Be more energetic in daily life & * If it fails, exercise afternoon or & * Hard to exercise on the day after drinking \\ & * Weight loss & at night instead & * Sudden schedules afternoon \\ & * Maintain daily health & * Light exercise after lunch & * Sudden schedules at night \\ & * Cardio & & \\ \hline **P3** & * Recover basic energy & * After school & * Difficult to exercise after heavy drinking \\ \hline **P4** & * Weight loss & * Thu–Sun after 7 pm & * Don’t want to exercise on rainy days \\ & * Overcome exercise shortage caused by & & \\ & COVID-19 & & \\ \hline **P5** & * Improve muscular strength & * Everyday in the morning & * Want to exercise without equipment \\ & * Fix posture & * Not familiar with exercise \\ \hline **P6** & * Weight loss & * Everyday in the morning except & * Diagnosed with right shoulder subluxation \\ & * Improve shoulder muscles & for late night & \\ & * Relieve wrist pain & & \\ \hline **P7** & * Recover energy & * Weekdays in the morning \& at \\ & Weight loss & night & * Kids’ day off from school or appointment \\ & * Improve muscles & & \\ \hline **P8** & * Weight loss & * Weekdays in the morning & * Difficult to exercise after drinking or sleeping late \\ & * Recover energy & * Weekdays afternoon & Postpone the exercise if there is a schedule with others \\ & * Relieve stress & * Weekends at any time & \\ & Get hobies & & \\ \hline **P9** & * Improve swimming skills & * Weekdays in the morning & * Difficult to exercise if a kid is sick \\ & * Improve muscular strength & * Unable to exercise on Mon–Fri & \\ & & as already doing swimming & \\ \hline **P10** & * Weight loss & * Weekdays after school at night & * Sleepy after school \\ & * Recover energy & * Weekends afternoon & \\ & * Tuesday afternoon–night & & \\ \hline **P11** & * Weight loss & * After dinner & * Location constraint \\ & * Cardio & & \\ \hline **P12** & * Weight loss & * Everyday in the morning \& at \\ & * Weight loss & * Three times per week in the & \\ & & morning (9–12 am) & * Defer indoor exercise \\ \hline **P13** & * Weight increase & * Everyday after 7 pm except for & \\ & * Recover energy & Sat & \\ \hline **P15** & * Improve arm muscles & * Weekdays at night & * Weekday night party \\ & * Want to make waist look thinner & * Weekends 10–12 am & * Wish to exercise three times per week \\ \hline **P16** & * Weight loss & * Tue–Thu after school & N/A (Provided no obstacle) \\ & * Relieve waist pain & * Fri \& Sat before work & \\ \hline **P17** & * Improve golf \& backswing skills & * Mon at anytime & \\ & & * Thu \& Fri at night & \\ \hline **P18** & * Recover energy & * After work & N/A (Provided no obstacle) \\ & * Improve muscles & * Weekends afternoon & \\ & * Relieve back pain & & \\ \hline \end{tabular} \end{table} Table 1. Exercise goals, availabilities, and potential challenges that our participants described to the chatbot ### User Evaluation of Exercise Plans and Planning Figure 3(a) and Figure 3(b) illustrate the distribution of user scores evaluating the quality of personalization and actionability, respectively. In the following, we cover the participants' quantitative evaluation and their related feedback in debriefing about the two aspects. #### 6.2.1. Personalization Overall, participants found the generated plans to be thoroughly personalized, where they reported that they were generally likely to **follow** the plans (\(M=5.83\), \(SD=0.99\)) and found the output plan to **fit** their personal lifestyle (\(M=6.00\), \(SD=0.77\)), scored on a 7-point Likert scale (1: strongly disagree, 7: strongly agree). During the study, participants reported that they felt PlanFitting could tailor exercise plans according to their preferences and constraints. Also, by incorporating those personal constraints into the guidelines, PlanFitting made participants perceive the plans as both well-grounded and highly personalized. P15 remarked, "_It was refreshing to have schedules tailored to my personal time and listen to my request. I was really surprised to see that AI could do that well (...) once I requested, it extended the duration of each exercise session by 15 minutes._" Similarly, P10 noted, "_I wanted to do simple, sweat-free, and noiseless exercises at home. Tailoring my plan using this system, it was great to see that my preferences and conditions were reflected exactly in the plan that looks easy to follow._" Participants also appreciated that PlanFitting allowed them to reiterate their existing plans and constraints afterward. P16 noted, "_It is really nice having the option to easily modify the existing exercise plan when a new goal arises (...) For example, if I suddenly injure my leg and need rehabilitation, I'm sure it would also be well-reflected in my plan._" Participants also emphasized that PlanFitting could quickly adapt to unexpected changes. Without having to abandon their exercise plans when their schedules abruptly changed or new obstacles emerged, participants expressed the hope that they could easily check and adjust their exercise plans using PlanFitting without relying on the presence of an expert: "_If there's a change in my availability, being able to make adjustments instantly like this, I believe I would use it frequently._" (P14) With such support that enables users to freely iterate on their plans, participants reported various future uses of PlanFitting, such as finding and engaging in lightweight exercises that can be done on the go or when they suddenly have some free time: "_Let's assume that I want to utilize some spare moments, for example, when I finish lunch early and have about 20-30 minutes left. Then I could easily use this system in my workplace to use those spare moments._" (P5) #### 6.2.2. Actionability As in Figure 3(b), participants also gave positive ratings on the actionability of the plans. The plans were generally received to be **specific** with enough details to act upon (\(M=5.06\), \(SD=1.51\)). In addition, participants found the presentation of the plan and its accompanying information **encouraging** (\(M=5.56\), \(SD=1.34\)), described with straightforward **vocabulary** (\(M=6.19\), \(SD=1.11\)), and **accurate** (\(M=5.72\), \(SD=1.23\)), responded on a 7-point Likert scale (1: strongly disagree, 7: strongly agree). Figure 3. Survey results from our exploratory study with participants. The results for each subset of (c) are calculated as the mean score of its sub-questionnaires, and the rightmost points denote the responses with 7-point on average From our qualitative analysis, we identified that participants found PlanFitting-driven plans presented in IF-THEN format to be more practical and adaptable, especially for individuals with fluctuating schedules. Avoiding vague timing instructions (_e.g._, 3 times per week) or rigid time constraints (_e.g._, 7 pm) while contextualizing the plan to the user-described situations, the format made them perceive the plan as realistic and easy to remember, making participants perceive the prescribed plans as more actionable. P6 noted, _"I think it's better when it tells you to do some exercise based on the situation like this. Honestly, sticking to a set time isn't always easy to follow through with, in reality."_ Furthermore, participants reported the plans to be well-adhering to the specific constraints they provided: _"For every information I added to the chat, the system successfully reflected those to my exercise plans."_ (P3) The plans were also reported to be presented in sufficient detail to follow by specifying the exercise type, and amount. This level of specificity made participants perceive the plan as clear and easy to follow: _"What surprised me was how it instructed me on what to do on each day, like there was a clear outline. I liked that it was so specific. I tend to prefer clear instructions (...) Nowadays, there are just too many choices, and I tend to dislike making decisions. So, having such clear instructions made me appreciate why I should use this and why I rated it highly."_ (P4) With such specificity of the plans, participants noted that the generated plans are systematic, making them perceive the plan as more actionable: _"I felt like I could systematically handle various types of exercises a bit better. It gave me a feeling of being well-grounded."_ (P6) On top of the specificity, participants noted that offering coping plans for each exercise regimen further improved the actionability of the exercise plan. Specifically, they expected that, even when they encountered obstacles that could make them skip exercise, the coping plans would motivate them to attempt the exercise: P18 remarked, _"If I find myself unable to do my exercise and I'm debating whether to skip it for the day, seeing this alternative [coping plan] might make me think, 'Well, if I can't follow the original plan, I might as well do the alternative one today,' and it would induce to start exercising anyway."_ ### Expert Evaluation of the Plans As illustrated in Figure 5, expert planners generally evaluated the exercise plans generated by PlanFitting positively, based on the core components of the FITT principle--how adequately the _frequency_, _intensity_, _time_, and _type_ of exercises were recommended. Here, we describe the assessment and feedback we gained from the experts and the potential room for further enhancing the plans. #### 6.3.1. Frequency Experts generally rated the frequency of exercise of the plans to be well-defined, with an average score of 5.67 on a 7-point Likert scale (\(SD=1.53\)). In the subjective feedback, the experts attributed the success of frequency mainly to the system's ability to accommodate both the well-known guideline of assigning 150 minutes per week and the participant's desired exercise frequency. They particularly valued the system's approach to listening to the participant's preference while avoiding any excessive burden by distributing the weekly exercise frequency evenly Figure 5. Survey results from our exploratory study with expert planners throughout the week: E1 reported, "_It's highly commendable to reflect the exercise guideline by scheduling exercise with the assigned time for at least 3 times a week and incorporating the concept of rest on the day after exercise._" At the same time, we could also observe some future considerations when setting up the frequency of plans. First, the evaluation of experts implied the need for prompting the system to set up the frequency of plans according to the number of exercises the participants wish to do. For example, after seeing a four-day regimen containing many (4) types of exercises that the user selected, E3 proposed bumping up the frequency higher, to five or six days per week: "_Given the four different exercise types (that the participant mentioned they wished to do), it may make sense to increase the exercise frequency from the current four times a week to five or six._" Conversely, experts cautioned against overly frequent exercise sessions with similar exercise types. After reviewing a plan where one participant requested and was prescribed a seven-day strength training planning, E2 stressed the need to consider muscle fatigue, highlighting the system's need to adjust the frequency when the potential risk of injury exists: _"The plan consists of 7 days of exercise sessions that target the abdomen and lower body, which could potentially lead to muscle fatigue. It's essential to reduce the frequency."_ #### 6.3.2. Intensity The experts evaluated the intensity of the exercise plan as slightly positive, averaging 4.28 (\(SD=1.32\)), indicating that while generally favorable, there was room for some improvement in this aspect. Particularly, they positively evaluated the role of the PlanFitting in preventing intensity-related issues through the form of coping plans, based on the obstacles that the participants provide. For instance, they highly appreciated the coping plan that recommends one participant to discontinue exercise and consult with a professional if the intensity of exercise posed a strain on participants with back pain: _"I found cautionary comments for the patients with back pain to be great, along with the appropriate intensity of exercise offered._" (E2) Despite these positive aspects, we also identified feedback from experts based on their experience to enhance PlanFitting's guidance for intensity. Currently, PlanFitting recommends increasing the amount of exercise as a progression measure if they are satisfied with the previous plan. In addition to the time, E1 suggested that the intensity of the plans can also be used as a measure for the progression: _"In terms of the intensity of this plan, I consider it appropriate. Given that the participant is healthy, I also recommend the user start with moderate intensity and gradually progress to higher intensity._" Additionally, PlanFitting is currently fed with the commonly advised intensity information in our predefined exercise list to inform the intensity of recommended exercises in the plans. However, after seeing the recommendation of moderate-intensity exercises to participants seeking weight loss or muscle strength improvement, E1 implied that these can be customized based on the goals that the participants may have (_e.g._, perform the exercise in _high-intensity_ for achieving weight loss or improving muscle strength goals): _"To achieve weight loss, I believe it is necessary to include high-intensity aerobic exercises that have a higher level of intensity._" #### 6.3.3. Time From the plans that the participants generated, all participants were prescribed plans that satisfied the guideline for exercise time (_i.e._, 150 minutes of moderate-intensity exercise or equivalent), except for two participants who mentioned they were already engaging in another exercise for their hobby prior to planning, and one participant who manually requested the system to exclude a session from their exercise plan. Aligning with such adherence of the PlanFitting-generated plans with the exercise guideline, the evaluation of the time component within the exercise plans received a positive rating, with an average of 5.06 (\(SD=1.80\)). Expressing their satisfaction, experts additionally provided further recommendations to enhance the flexibility of the plans, such as breaking down the session further to make the plan more actionable: _"I think the amount of time has been planned well. If the client is unable to commit to a 30-minute exercise, you can also advise them to break it down into three 10-minute sessions."_ (E1) Furthermore, we found the potential improvement of PlanFitting to operationalize the time not only in weekly total duration but also in terms of per-session duration. Currently, PlanFitting is designed to align with the general guidelines set by the ACSM regarding the total weekly duration of exercise. Although the experts assessed the plans as meeting these guidelines, they were also evaluating the duration per session, pointing out session-wise issues present in PlanFitting-generated plans. For instance, although our system may offer a long vigorous-intensity exercise session if the user provides limited time available for exercise and the system may have to satisfy the guideline within fewer available exercise days, planners argued that these long, intense sessions might result in overexertion and cautioned against such exceptional cases: _"For the case of high-intensity exercises, prescribing a 50-minute session of strength training is excessive for the participants."_ (E2) #### 6.3.4. Type Unlike other evaluation criteria, the type of exercises within the plans received a rating slightly below satisfactory, averaging 3.89 (\(SD=1.45\)), indicating that there was room for enhancement in tailoring the exercise recommendations. From the feedback that the experts offered, we surfaced rooms for the exercise types that PlanFitting provides to improve. First, even if we prompted PlanFitting to induce the balance between aerobic and resistance exercises, the final choice of exercise types still depended on the participant's intent, leading some participants to end up getting the plan that consists of either type of exercise only. On such an account, E2 suggested presenting the'required' exercises with a stronger tone during the process of setting up the plan: _"Only the exercises the user wanted to do were included. However, as this is an interaction where AI sets exercise goals together with the participant, 'necessary exercises' should also be guided."_ From the expert evaluation, we also noticed occasional inaccuracies in exercise recommendations when specific muscle groups to improve were not explicitly mentioned in participants' goals. For example, when trying to suggest exercises that might benefit one participant's golfing skills, PlanFitting recommended exercises by calculating the cosine similarity between the keyword 'golf' and the description of various exercises-which may not have the golf-related keyword, as pointed out by E3: _"Other exercises that could enhance golf performance were not adequately suggested (...) recommendations for improving golf backswings should include exercises that enhance flexibility, core strength, and lower body strength."_ Thus, identifying and addressing the specific muscle groups that need improvement based on the goal, even when they are not explicitly mentioned, may enhance the precision of exercise recommendations. ### User Experience & Future Enhancements Participants indicated a positive inclination towards adopting and using PlanFitting. The perceived usefulness received a positive rating of 5.43 on average (\(SD=0.99\)). Similarly, participants rated the system to be easy to use, with an average score of 6.00 (\(SD=1.12\)), indicating that participants found the system easy to navigate and utilize. As for the intention to continue using the system, participants responded with an average rating of 5.52 (\(SD=1.26\)). During our interviews with participants, we uncovered factors that made them intend to keep using this system. First, generative enabled free-flowing and flexible conversations allowed participants to respond in a non-linear manner: _"Even if I suddenly went back to a previous question or said something else, the system seamlessly continued the conversation which made the chatting more convenient."_ (P3) Secondly, preserving conversational history in the form of a dashboard and making it visible allowed users to keep track of the constraints they expressed and helped them to further adjust plans easily, without having to manually look up all the previous chat history: _"The dashboard neatly organizes and updates the information every time I entered constraints, which I find very convenient. Often times when I plan things like this, I have to make separate notes on my phone, right? Now I can just input it to AI, and it automatically organizes it for me (...) I consider this as a very useful component."_ (P12) At the same time, participants also highlighted future integration of other contextual information that can further personalize the plans generated by PlanFitting. For instance, some participants suggested that incorporating context-aware features, such as providing exercise recommendations based on their current location and weather conditions, could significantly enhance the system's utility. Additionally, soliciting more detailed constraints from participants, such as whether they have children (P13) or specific muscle areas requiring rehabilitation (P15), was identified as a future enhancement that would further make them perceive the system to be useful. ## 7. Discussion In this paper, we explored the potential of PlanFitting in its capacity to generate personalized and evidence-based exercise plans. By leveraging large language models (LLMs), PlanFitting streamlines the exercise planning process by assisting users in articulating various constraints that may affect their exercise, reflecting the constraints to their exercise plans, and allowing for continuous iteration without any external human computation. Our user study showcased that LLM-assisted exercise planning can effectively assist users in creating exercise plans that are personalized and aligned with the guidelines. During the iteration phase of the study, PlanFitting successfully adjusted the plans per users' request, demonstrating its potential for long-term use as an _exercise companion._ After seeing PlanFitting reflecting various edits they requested in their plans, participants expressed intention to use PlanFitting in the long-term and frequently throughout their exercise journey. Furthermore, they envisioned various possible use cases for PlanFitting, such as an exercise scheduler during spare time and an on-the-go exercise planner. These reports and potential uses of PlanFitting hint at its future role in playing the proactive aide during the long-term exercise planning process and its implementation through the continuous iteration of the plan. Despite such efficacy, the evaluation from expert planners also suggests room for future enhancements, such as incorporating further empirical guidelines that are validated by professionals during the process of creating exercise plans. From the evaluation, we identified several reports of planners that pointed to certain edits that they would have applied to the plans based on their own hands-on experiences, such as recommending a certain exercise intensity for achieving specific exercise goals. This points out that, although PlanFitting takes into account various individualized factors and exercise guidelines during the exercise planning process, human expertise remains paramount for assuring the quality and effectiveness of the plans. As PlanFitting's interactions are driven by the instructions fed into the LLM, we anticipate that this can be easily implemented by listing up such empirical guidance and adding them as a form of instruction. Moreover, future versions of AI-based exercise planning systems like PlanFitting might consider setting up a repository that contains expert knowledge and feedback and letting users choose among these to further inform the planning with experts' know-how. On top of following the constraints collected from the users, our findings further pose the potential of LLM's generative capabilities in playing the role of assisting users to think through several factors that the users may have overlooked. Currently, the system predominantly tailors plans to match user preferences. During the user study, however, instances arose where PlanFitting's strict adherence to user-specified availabilities resulted in creating a strenuous exercise plan. As such, the expert planners called for a more considerate approach by highlighting the importance of not overburdening the user, especially when incorporating high-intensity exercise sessions into their plans. By leveraging the generative capabilities of LLMs, we can envision presenting AI-generated post-hoc checklists that prompt users to think through various potential factors that might lead the plans to be excessively demanding or pose harm to safety based on the initial version of a plan. Not limited to preventing injury, we believe the future system may also harness LLMs to create a plan that the user is more likely to stick to, by predicting potential obstacles that the user forgot to add once the plan draft is ready and asking users to think through them. Another key aspect of our system involves the integration of implementation intentions, where the users are provided with IF-THEN statements linked to their availabilities collected through chatting with PlanFitting. From the study, we identified that the participants perceived such situation-based expressions as highly comprehensible and adaptable, compared to vague amount-based or rigid time-based instructions. Similarly, as such implementation intention strategies have been shown effective in a variety of behavior change tasks (_e.g._, diet control [(1; 3; 28; 45)], smoking cessation [(16; 35)]), we posit that our approach is also adaptable to various other behavior change contexts. Particularly, since our system is composed of a set of easy-to-alter instructions in a natural language that defines the constraints to be collected, we believe that the adaptation process for various other tasks is significantly simple, and only minimal changes would be required to tailor these instructions to reflect the domain-specific constraints of each new context. As the system scales up, we believe that incorporating context-aware functionalities would make the plans even more contextualized. For instance, integrating location-aware recommendations could enable PlanFitting to take into account factors driven by real-time information, such as weather conditions, nearby exercise facilities, or nearby routes that allow exercise on the go (_e.g._, a specific route for running while going back home). Such a level of contextualization would make the generated plans even more closely connected to the user's real-world context. Similarly, other features that may reflect up-to-date health stats of the user could be incorporated into future revisions of PlanFitting to create even richer and more personalized exercise planning. ### Limitation and Future Work Although our user study offers an in-depth understanding of how users employ PlanFitting to articulate their constraints related to exercise planning and formulate the plans, conducting a real-world deployment study will further enhance our understanding of how our system affects users' real-world exercise routines. We believe that the initial insights gained from our system will provide valuable information for future research. ## 8. Conclusion In this study, we designed and developed PlanFitting, an LLM-infused web interface that assists users in creating their personalized exercise plans through conversation. From the user study consisting of individuals and the expert evaluation of the plans generated from the user study, we highlighted the potential of PlanFitting to produce personalized and guideline-informed plans. We also discuss design implications for enhancing the design of AI assistants for personalized exercise planning. To this end, we anticipate that this work will serve as guidance to inform and inspire researchers in the HCI and broad AI communities that leverage LLMs to foster flexible, sustainable, and iterable exercise planning. ###### Acknowledgements. We thank our study participants for their time and efforts. We also thank Elena Agapie, who provided the exercise knowledge base dataset. Yuncheol Ha and Hyosang Kim at NAVER Clinic helped us recruit exercise experts for this study. This work was supported by NAVER AI Lab in terms of a research internship and a research fund.
2310.00032
Pretrain, Prompt, and Transfer: Evolving Digital Twins for Time-to-Event Analysis in Cyber-physical Systems
Cyber-Physical Systems (CPSs), e.g., elevator systems and autonomous driving systems, are progressively permeating our everyday lives. To ensure their safety, various analyses need to be conducted, such as anomaly detection and time-to-event analysis (the focus of this paper). Recently, it has been widely accepted that digital Twins (DTs) can serve as an efficient method to aid in the development, maintenance, and safe and secure operation of CPSs. However, CPSs frequently evolve, e.g., with new or updated functionalities, which demand their corresponding DTs be co-evolved, i.e., in synchronization with the CPSs. To that end, we propose a novel method, named PPT, utilizing an uncertainty-aware transfer learning for DT evolution. Specifically, we first pretrain PPT with a pretraining dataset to acquire generic knowledge about the CPSs, followed by adapting it to a specific CPS with the help of prompt tuning. Results highlight that PPT is effective in time-to-event analysis in both elevator and ADSs case studies, on average, outperforming a baseline method by 7.31 and 12.58 in terms of Huber loss, respectively. The experiment results also affirm the effectiveness of transfer learning, prompt tuning and uncertainty quantification in terms of reducing Huber loss by at least 21.32, 3.14 and 4.08, respectively, in both case studies.
Qinghua Xu, Tao Yue, Shaukat Ali, Maite Arratibel
2023-09-29T13:12:58Z
http://arxiv.org/abs/2310.00032v3
Pretrain, Prompt, and Transfer: Evolving Digital Twins for Time-to-Event Analysis in Cyber-physical Systems ###### Abstract Cyber-Physical Systems (CPSs), e.g., elevator systems and autonomous driving systems, are progressively permeating our everyday lives. To ensure their safety, various analyses need to be conducted, such as anomaly detection and time-to-event analysis (the focus of this paper). Recently, it has been widely accepted that digital Twins (DTs) can serve as an efficient method to aid in the development, maintenance, and safe and secure operation of CPSs. However, CPSs frequently evolve, e.g., with new or updated functionalities, which demand their corresponding DTs be co-evolved, i.e., in synchronization with the CPSs. To that end, we propose a novel method, named PPT, utilizing an uncertainty-aware transfer learning for DT evolution. Specifically, we first pretrain PPT with a pretraining dataset to acquire generic knowledge about the CPSs, followed by adapting it to a specific CPS with the help of prompt tuning. Results highlight that PPT is effective in time-to-event analysis in both elevator and ADSs case studies, on average, outperforming a baseline method by 7.31 and 12.58 in terms of Huber loss, respectively. The experiment results also affirm the effectiveness of transfer learning, prompt tuning and uncertainty quantification in terms of reducing Huber loss by at least 21.32, 3.14 and 4.08, respectively, in both case studies. ## 1 Introduction Cyber-Physical Systems (CPSs) serve as essential elements in actualizing the vision of Industry 4.0 [1]. Unlike conventional physical systems, a typical CPS incorporates a cyber component, linking physical systems through a network. This combination of cyber and physical systems enables more intelligent and adept industrial applications, especially in crucial infrastructures such as transportation systems. However, the increasing complexity, heterogeneity, and constantly evolving nature of CPSs, brought about by introducing a rich array of functionalities, opens them up to significant threats and challenges. This often renders existing security and safety techniques ineffective, emphasizing the need to devise novel techniques to ensure the dependability of various CPS tasks. Among these tasks, time-to-event (TTE) analysis [2, 3], also known as survival analysis, is of great importance, as CPSs are characterized by the interaction of computational and physical processes, often facing uncertainty, and the reliability of the systems is of paramount importance. TTE analysis allows for modeling and predicting the time until certain events occur, such as predicting the passenger waiting time in an elevator system and predicting time-to-collision in aADS. TTE analysis can also help to understand and quantify the reliability and operational resilience of the systems under different conditions or in response to different threats. Therefore, applying TTE analysis in CPSs can facilitate CPS operators, and other relevant stakeholders, to take timely preventive measures, optimize resource allocation, etc., so to make the systems safer and more efficient. Digital Twins (DTs) have gained substantial attention in recent years by performing safety and security tasks such as anomaly detection. Early works [4, 5] rely heavily on rule-based models and domain expertise to construct DTs, whereas data-driven DT construction is receiving increasing interest [6], due to the success of applying machine learning in software engineering. The efficacy of a DT function hinges on its synchronization with the real CPS, which inspires researchers and practitioners to create a DT that faithfully simulates the CPS. However, the continuous evolution of the CPS, e.g., due to hardware or software updates, demands the evolution of its corresponding DT. One straightforward solution is to train a new DT from scratch with data collected from the updated CPS. However, data from the updated CPS is not always guaranteed, such as in the case of a newly deployed elevator producing limited data that is insufficient for deep learning training. To combat the data scarcity, in our prior work, we proposed RISE-DT [7], an uncertainty-aware transfer learning method to evolve DT for industrial elevators. RISE-DT aims to transfer knowledge from the DT constructed for the source elevator system to a target (new) elevator system. Concretely, RISE-DT first employs uncertainty quantification (UQ) to select the most uncertain samples, which are the most informative samples as well since they tend to be close to the decision boundary. We then train a source DT and a target DT with these samples. The transfer learning process minimizes the conditional and marginal losses between the representations in the source and target DTs, allowing knowledge to be transferred across the domains. In this paper, we propose PPT to extend RISE-DT. Our key contributions are three-fold. First, we improve the performance of the RISE-DT by employing prompt tuning in PPT. Prompt tuning has emerged as an effective method for tuning pretrained models, especially for large language models, to downstream tasks [8]. Second, comparing with RISE-DT, we study two more UQ methods, namely Bayesian and ensemble methods, to select the most suitable one for TTE analysis. Third, we newly introduce an autonomous driving system (ADS) dataset in our empirical study to demonstrate the generalizability of PPT. Hence, we evaluate the application of PPT in two domains: elevator systems (vertical transportation) and ADS (horizontal transportation). Experiment results show that PPT is effective in TTE analysis in both elevator and ADS case studies, averagely outperforming the baseline by 7.31 and 12.58, respectively, in terms of Huber loss. We also dissect the individual contribution of each subcomponent in PPT and find that prompt tuning, UQ, and transfer learning are effective and efficient. The rest of the paper is as below. We present the background and definitions in Section 2. Section 3 delineates the architecture details of PPT. In Sections 4 and 5, we show the design of our experiment and present the results. Section 6 presents the related work and Section 7 concludes the paper. ## 2 Background and Definitions ### CPS evolution Industry 4.0 [1] has advanced the digital transformation of the manufacturing sector, mainly via CPSs. A typical CPS architecture has physical and cyber elements, the symbiotic relationship facilitated through a feedback loop, incorporating sensors, actuators, communication networks, and computational units. The advances in developing these components, particularly computational units, have spurred the widespread adoption of CPSs in our daily lives. **CPS evolution** is often triggered by internal changes, such as upgrading old or introducing new CPS functionalities. CPS behaviors are also closely intertwined with their operating environment. Such an operating environment can be very dynamic and uncertain, e.g., the driving environment of autonomous vehicles, which subsequently influences their decision-making at runtime. Therefore, we posit that a CPS should be studied along with its operating environment. Formally, we define a subject system \(\Sigma\) comprising the CPS \(\Psi\) and its environment \(\Phi\) below: \[\Sigma:\Psi\rightleftharpoons\Phi \tag{1}\] Correspondingly, the evolution of the CPS is, thus, defined in Equation 2, where \(\Sigma_{S}\) and \(\Sigma_{T}\) are the source and the target system of the evolution, and \(\Delta\Psi_{S}\) and \(\Delta\Phi_{S}\) represent changes in the CPS and the environment, respectively. \[\Sigma_{S}\xrightarrow[\Delta\Psi_{S}]{\Delta\Psi_{S}}\Sigma_{T} \tag{2}\] ### TTE analysis in industrial elevators and ADSs **TTE analysis** tasks, in general, can be described as a \(4\)-tuple: \(\langle\Sigma,\mathcal{D},\mathcal{E},\tau\rangle\), where \(\Sigma,\mathcal{D},\mathcal{E},\tau\) represent the subject system, dataset, events of interest, and time interval. TTE analysis analyses dataset \(\mathcal{E}\) collected from \(\Sigma\) to predict time interval \(\tau\) after which event \(\mathcal{E}\) will occur (Equation 3). \[f:\mathcal{D}\mapsto\tau_{\mathcal{E}} \tag{3}\] **Industrial elevators** are vertical transportation systems for buildings and are essential for modern urban lives. The cyber aspect of an elevator, such as the control algorithm, is encapsulated in the elevator software, while the physical components, including elements like motors, brakes, and cables, facilitate the movement of the elevator. Typically, each building has multiple elevators deployed and controlled by dedicated controllers, which are connected to a _traffic master_ with a _dispatcher_ (i.e., software) scheduling elevator operations to optimize the Quality of Services (QoS) to deliver. A common TTE analysis is about predicting the waiting time of each passenger based on information such as arrival floor, destination floor, weight, and capacity. We formally define the elevator TTE analysis task as in Definition 2.1. **Definition 2.1** (Industrial Elevator TTE Analysis): \[\Sigma^{E} \mapsto\text{An elevator system and its environment}\] (4) \[\mathcal{D}^{E} \mapsto\text{A sequence of passenger information}\] \[\mathcal{E}^{E} \mapsto\text{Passengers arrive at their destinations}\] \[\tau^{E} \mapsto\text{Estimated arrival time}\] Elevator dispatchers vary between buildings, and their usage patterns depend on traffic factors such as building type, time of day, and day of the week (known as traffic template). Thus, both elevator dispatchers and their environment evolve, which affects TTE analysis's performance. We define the evolution directions in Definition 2.2, where \(\Delta\Psi^{E}_{S}\) represents CPS changes, that is, dispatcher version changes, and \(\Delta\Phi^{E}_{S}\) denotes environment changes, i.e., traffic template changes. **Definition 2.2** (Industrial Elevator Evolution): \[\Sigma^{E}_{S}\xrightarrow{\Delta\Phi^{E}_{S}}\Sigma^{E}_{T}\] (5) **ADSs**, as another type of CPSs, are equipped with various sensors, such as optical and thermographic cameras, radar, lidar, and GPS. Their cyber part is mainly responsible for planning and controlling the vehicles' behaviour. Specifically, an ADS relies on sensors to perceive its environment (e.g., road conditions, weather conditions, and other vehicles), which are then utilized to guide the ADS's decision-making, e.g., determining an appropriate navigation path and formulating strategies to manage traffic controls (e.g., stop signs) and obstacles. TTE analysis such as predicting the time to a potential collision (known as time-to-collision) can facilitate the ADS to make well-informed decisions, which is defined as in Definition 2.3. **Definition 2.3** (ADS TTE Analysis): \[\Sigma^{A} \mapsto\text{An ADS and its running environment}\] (6) \[\mathcal{D}^{A} \mapsto\text{A sequence of vehicle and environment properties}\] \[\mathcal{E}^{A} \mapsto\text{Vehicle collisions}\] \[\tau^{A} \mapsto\text{Estimated collision time}\] ADS behaviours differ under varying driving conditions, including weather conditions and behaviours of nearby vehicles. In this work, we concern the evolution of ADSs under different driving conditions as depicted in Definition 2.4, an area widely studied in ADS testing [9, 10]. **Definition 2.4** (ADS Evolution): \[\Sigma^{A}_{S}\xrightarrow{\Delta\Phi^{A}_{S}}\Sigma^{A}_{T}\] (7) ### Digital Twin El Saddik [11] has defined DT as a digital replica of a physical entity. Yue et al. [12] extended this definition and proposed a DT conceptual model (see Figure 1). In the CPS context, a CPS (e.g., an elevator system or an ADS) is considered the physical twin. A typical DT comprises two key components: a Digital Twin Model (DTM) and a Digital Twin Capability (DTC). The DTM is a digital representation of the CPS, including heterogeneous models corresponding to various components, e.g., software, hardware, and communication. The DTC is the DT's functionality, e.g., predicting non-functional properties, detecting uncertainties, and preventing failures. PPT adopts this conceptual model for DT construction and evolves the DTM and DTC. ### Uncertainty Quantification UQ is intensively studied in both academia and industry, underpinning numerous applications such as trustworthy decision-making [13] and software risk analysis [14]. In our context, we use UQ to select the most uncertain samples \(U\) from a dataset \(\mathcal{D}\). To that end, UQ assigns an uncertainty score \(\xi\) to each sample \(x\in\mathcal{D}\). We have defined a comprehensive UQ metric CS score in our prior work [7]. In this paper, we further investigate UQ by adding two mainstream UQ approaches: Bayesian and ensemble UQ. Let \(\mathcal{M}_{I}\) be an indicator model which assesses the uncertainty of each sample and shares the same structure as the DTM (Section 3.1.2). We introduce each UQ approach below. _CS score_ combines the calibration and sharpness metrics. Calibration represents the consistency between the prediction distribution and the observation, while sharpness assesses the concentration of the prediction distribution [15]. By combining these two metrics with a weighted sum parameterized by \(\lambda\) (decided empirically), we follow [7] and define the comprehensive uncertainty metric _CS score_\(\xi^{cs}\) as shown in Equation 8. \[\xi^{ut}_{i}=\lambda c(x_{i})+(1-\lambda)s(x_{i}) \tag{8}\] _Bayesian Method_ probabilistically interprets predictions, which can be leveraged to derive UQ metrics. One popular Bayesian method for UQ in neural network models is the Monte Carlo (MC) dropout. The MC dropout randomly sets the activation of neurons to 0 with a fixed probability for a subset of layers, resulting in a set of indicator models with dropout \(\{\mathcal{M}_{d}^{B}\}_{d=1}^{N_{B}}\), where \(N_{B}\) is the number of indicator models. Dropout represents randomly set some neurons inactive in one indicator model. Each indicator model makes an individual prediction, and we define the uncertainty of each sample as the standard deviation of these predictions as in Equation 9. \[\bar{y}_{i} =\frac{1}{N_{B}}\sum_{d=1}^{N_{B}}\mathcal{M}_{d}^{B}(x_{i}) \tag{9}\] \[\xi^{bm}_{i} =\sqrt{\frac{1}{N_{B}}\sum_{d=1}^{N_{B}}(\mathcal{M}_{d}^{B}(x_{i }-y_{i})^{2}} \tag{10}\] Figure 1: Digital Twin for Cyber-Physical System _Ensemble Method_ is a commonly used technique in machine learning to combat overfitting by training multiple models with different configurations simultaneously. Building on the idea of ensemble learning, ensemble UQ divides the dataset into \(N^{E}\) subsets and trains a distinct indicator model \(M^{E}\) on each subset. Similar to the MC dropout approach, each indicator model generates predictions independently, and the uncertainty of each sample \(x_{i}\) is determined by calculating the standard deviation of the predictions, as shown in Equation 11. \[\bar{y}_{i} =\frac{1}{N_{E}}\sum_{d=1}^{N_{E}}\mathcal{M}_{d}^{E}(x_{i}) \tag{11}\] \[\xi_{i}^{em} =\sqrt{\frac{1}{N^{E}}\sum_{d=1}^{N_{E}}(\mathcal{M}_{d}^{E}(x_{i} -y_{i})^{2}} \tag{12}\] ## 3 Approach PPT is a closed-loop deep learning approach, which requires training on the relevant dataset. We introduce the architecture of PPT in Section 3.1 and the training method in Section 3.1.3. ### Overall Architecture Figure 3 depicts the overall architecture of PPT, comprising the _Data Processing Component_, the _Digital Twin Component_ and the _Transfer Learning Component_ (denoted as TL component in the figure). Let the source and target Figure 2: Overall architecture of PPT subject systems be \(\Sigma_{S}\) and \(\Sigma_{T}\) and their corresponding datasets be \(\mathcal{D}_{S}\) and \(\mathcal{D}_{T}\). The _data processing component_ takes as input \(\mathcal{D}_{S}\) and \(\mathcal{D}_{T}\) and selects the most uncertain contextualized samples \(CU_{S}\) and \(CU_{T}\). The source and target _digital twin components_ use these samples to construct a source DT (SDT) and a target DT (TDT). Finally, the _Transfer Learning Component_ transforms hidden representations in SDT and TDT into shared intermediate spaces, which signify the shared knowledge between the source and target domain. #### 3.1.1 Data Processing Component The data processing component takes source and target data (\(\mathcal{D}_{S}\) and \(\mathcal{D}_{T}\)) as input and selects the most uncertain samples with their context information using the data processing module. According to _Information Theory_, higher uncertainty entails richer information [16]. Hence, machine learning models can greatly benefit from training with more informative samples [17]. We illustrate the details of the data processing module in Figure 3. Given datasets \(\mathcal{D}_{1},\mathcal{D}_{2},...,\mathcal{D}_{N}\), we utilizes an _uncertainty quantification module_ to select the most uncertain samples \(U_{1},U_{2},...,U_{N}\). However, the context information of these samples is also critical for \(\mathsf{TTE}\) analysis since each sample is dependent on previous samples and influences future samples. Picking out the most uncertain samples has the possibility of losing the context information. As a remedy, we use a _multi-head attention module_ to calculate representations \(CU_{1},CU_{2},...,CU_{N}\) that fuse information from both the sample itself and its context. We will detail _uncertainty quantification module_ and _multi-head attention module_ in the rest of this section. **Uncertainty Quantification Module** ranks and selects the most uncertain samples. In our study, we explored three UQ methods: \(\mathsf{CS}\)\(\mathsf{score}\), Bayesian and ensemble UQ methods and select the most suitable one to perform UQ. UQ assigns an uncertainty score \(\xi_{i}\in\Xi\) to each sample \(x_{i}\in\mathcal{D}\). We then rank all samples based on their scores and select the top \(K\) ranked samples for transfer learning (Equation 13). \[U=topK(\mathcal{D},key=\Xi) \tag{13}\] **Multi-head Attention Module** UQ selects the most uncertain samples out from dataset \(\mathcal{D}\). However, the CPS data used in this study is contextual, meaning that considering only the uncertain samples in isolation could potentially harm our model's performance. To preserve the contextual information, we employed a technique called multi-head self-attention (MHSA). We project the input data into a hidden space, where each vector contains information about both the input Figure 3: Structure of the data processing module itself and its surrounding context. This approach allows our model to incorporate relevant contextual information, and hence make more accurate predictions. To facilitate parallel computation, MHSA discards the positional information, which is critical in our case. As a remedy, we follow the common practices and encode such information with a positional vector as in Equation 14. \[U=concat(U,PosEnc(U)) \tag{14}\] We then utilize linear transformations to map the input matrix \(U\) to three distinct spaces, creating three new matrices: \(Q\), \(K\), and \(V\). \(Q\) and \(K\) are the query and key matrices, and \(V\) is the new vector representation of the input. By multiplying \(Q\) and \(K\), we generate an attention weight matrix, which is subsequently multiplied with \(V\) and scaled with a softmax function and a scaling factor of \(d_{k}\), as formulated in Equation 15. \[Q =W_{Q}U_{in}+b_{Q} \tag{15}\] \[K =W_{K}U_{in}+b_{K}\] (16) \[V =W_{V}U_{in}+b_{V}\] (17) \[U_{att} =softmax(\frac{QK^{T}}{\sqrt{d_{k}}})V \tag{18}\] To enhance the representation, we apply a feed-forward neural network to \(U_{att}\). This network has a ReLU activation layer and a linear layer (Equation 19). The trainable weight matrices \(W_{ffn}\) and \(b_{ffn}\) are used in this transformation. \[U_{ffn}=W_{ffn}\cdot ReLU(U_{att})+b_{ffn} \tag{19}\] To mitigate the problem of catastrophic forgetting in deep learning models, we utilize a residual connection (Equation 20), i.e., summing \(U_{ffn}\) and \(U_{att}\). \[CU=U_{att}+U_{ffn} \tag{20}\] #### 3.1.2 Digital Twin Component As the backbone model of PPT, a generic DT has a DTM simulating the physical twin and a DTC with functionalities, e.g., waiting time prediction. **DTM** aims to approximate the underlying distribution of input data (\(\mathcal{D}\)). It has three layers: the transformer, GRU and prediction layers, which extract features from each sample, capture temporal features and project the intermediate representation vectors into the sample space to predict the next sample with CNN, respectively. _The transformer layer_ takes the contextualized uncertain samples \(CU\) as input and feeds them into a stack of \(L\) MHSA modules. Each MHSA module takes the output of the prior MHSA module as input (Equation 21). \[CU_{1}^{M} =CU \tag{21}\] \[CU_{L}^{M} =MHSA(CU_{L-1}^{M})\] _The GRU layer_ takes the output of the transformer layer as input. For each data sample \(x_{i}\in CU_{L}^{M}\), the layer computes the hidden representation (\(H_{M}^{G}\)) using Equation 22. \[z_{t} =\sigma_{g}(W_{z}x_{t}+U_{z}h_{t-1}+b_{z}) \tag{22}\] \[r_{t} =\sigma_{g}(W_{r}x_{t}+U_{r}h_{t-1}+b_{r})\] (23) \[\hat{h}_{t} =\phi_{h}(W_{h}x_{t}+U_{h}(r_{t}\cdot h_{t-1})+b_{h})\] (24) \[H_{M}^{G}[t] =(1-z_{t})\cdot h_{t-1}+z_{t}\cdot\hat{h}_{t} \tag{25}\] _The CNN layer_ is responsible for predicting the next state vector for the subject system, where \(S\) is the size of the state vector. Each of its dimensions is a continuous scalar value. However, direct training with continuous labels can result in overfitting issues with insufficient data. To overcome this, we discretize continuous scalar values into 10 categories, thereby transforming these continuous prediction tasks into classification tasks. The core operation of the CNN layer is the kernel convolution, which calculates the probability \(P_{i,j}\) for \(i\)th label on the \(j\)th dimension (Equation 26). \(\mathcal{K}\) denotes the convolution kernel. \[P_{i,j}=\sum_{m}\sum_{n}X_{i-m,j-n}\cdot\mathcal{K}_{m,n} \tag{26}\] We hence predict the next data samples by assigning labels to each dimension with the highest probabilities (Equation 27). \[CU^{M}=argmax(P_{i,j}) \tag{27}\] **DTC** performs TTE analysis. Figure 3 presents the design of the source DTC (SDTC) and target DTC (TDTC), whose architectural designs are identical. In the following part, we only illustrate SDTC's architecture for brevity. SDTC combines the real data \(\mathcal{D}_{S}\) and predicted data \(\mathcal{D}^{\prime}_{S}\) as input and feeds it into three layers sequentially: the transformer, GRU and prediction layers. _The transformer layer_ concatenates the real sample \(CU\) and predicted sample \(CU^{M}\) and feeds it to a size-L stack of MHSA modules, which are computed recursively (Equation 28). \[\begin{split} CU_{1}^{C}&=concat([CU,CU^{M}])\\ CU_{L}^{C}&=MHSA(CU_{L-1}^{C}))\end{split} \tag{28}\] _The GRU layer_ captures the dependency between the current input and previous inputs (Equation 29). The detailed structure of GRU has been described in Equation 22, where \(H_{C}^{G}\) denotes the output of DTC's GRU layer. \[H_{C}^{G}=GRU(CU_{L}^{C}) \tag{29}\] _The prediction layer_ transforms the intermediate representations into the estimated time (e.g., passenger waiting time in the elevator case study and time-to-collision in the ADS case study) as in Equation 30, where \(W_{\tau}\) and \(b\) are weight matrices. \[\hat{\tau}=W_{\tau}^{T}H_{C}^{G}+b_{\tau} \tag{30}\] #### 3.1.3 Transfer Learning Component As shown in Figure 3, PPT uses a projection layer to map the hidden representations in SDT and TDT to shared spaces. _The projection layer_ first uses a linear transformation to map the hidden representations (\(H\)) to representations \(H^{P}\) in the shared spaces and benefits from an activation function \(tanh\) to add non-linearity (Equation 31). \[H^{P}=tanh(W_{P}H+b_{P}) \tag{31}\] Then, we perform transfer learning by aligning SDT and TDT in the intermediate spaces, aiming to reduce marginal and conditional losses. _Marginal loss_ is calculated as Kullback-Leibler (KL) divergence between hidden layer representations of SDT and TDT. We first project the GRU layer outputs of SDTM, SDTC, TDTM and TDTC (\(H_{SM}^{G}\), \(H_{SC}^{G}\), \(H_{TM}^{G}\), and \(H_{TC}^{G}\)) into an intermediate space using Equation 31, yielding \(H_{SM}^{PG}\), \(H_{TM}^{PG}\), \(H_{SC}^{PG}\) and \(H_{TC}^{PG}\), respectively. We then calculates the marginal loss in the intermediate space as in Equation 32. \[\begin{split}\mathcal{L}_{mar}^{M}&=\sum_{t}H_{SM}^{ G}[t]\cdot logH_{TM}^{G}[t]\\ \mathcal{L}_{mar}^{C}&=\sum_{t}H_{SC}^{G}[t]\cdot logH _{TC}^{G}[t]\end{split} \tag{32}\] _Conditional loss_\(\mathcal{L}_{cond}^{M}\) is calculated between the prediction layer representations of SDTC and TDTC. We first transform the output of SDTC and TDTC into an intermediate space as in Equation 33. \[\begin{split} P_{S}^{P}&=proj(P_{S})\\ P_{T}^{P}&=proj(P_{T})\end{split} \tag{33}\] We then calculate the Maximum Mean Discrepancy (MMD) in the intermediate space as in Equation 34. \[\mathcal{L}_{cond}^{M}=||\frac{1}{n^{s}}\sum_{i=1}^{n^{s}}P_{S}^{P}[i]+\sum_{i =1}^{n^{t}}P_{T}^{P}[i]\frac{1}{n^{t}}|| \tag{34}\] ### Training Process of PPT This process includes the pretraining phase (Section 3.2.1) and prompt tuning (Section 3.2.2) phase. The former induces better initializations for the model parameters, while the later quickly adapts the model to the target subject system. #### 3.2.1 Pretraining Phase Neural network methods, including PPT, tend to be trapped in local optimal easily. Pretraining on large datasets can alleviate this issue by inducing the optimizer more towards the global optimal. In this phase, we aim to find the optimal parameters for PPT, as described in Algorithm 1. We collect source and target dataset pairs \((\mathcal{D}_{S}^{pre},\mathcal{D}_{T}^{pre})\) and output the pretrained \(SDT^{pre}\) and \(TDT^{pre}\) (Lines 2-3). In each pair, we first perform UQ to select the \(K\) most uncertain samples for transfer learning (Lines 4-9). SDT and TDT take these samples as input and make predictions (Lines 10-13). We calculate the marginal loss and conditional loss (Lines 14-15) to accomplish transfer learning. Additionally, we calculate the Huber loss between the predicted TTE (\(r_{S}^{\prime}\) and \(\tau_{T}^{\prime}\)) and the real TTE (\(\tau_{S}\) and \(\tau_{T}\)). Minimizing Huber loss (Section 4.3) can induce the DTC to make more accurate TTE analysis. The last step of Algorithm 1 minimizes all losses by adjusting the model parameters (Line 18). ``` Input:\(\Sigma^{SPE}\) and \(\Sigma^{TPre}\): source and target subject systems; \(N\): Number of source and target system pairs. Output:\(SDT^{pre}\) and \(TDT^{pre}\): the pretrained source and target DTs. 1for i in 1:Ndo 2\(\mathcal{D}_{S}^{S}\)=collect_from(\(\Psi_{i}^{SPE}\)); 3\(\mathcal{D}_{T}^{i}\)=collect_from(\(\Psi_{i}^{TPre}\)); /* Data processing */ 4\(\omega^{S}\)=UQ(\(\mathcal{D}_{i}^{SPE}\)); 5\(\omega^{T}\)=UQ(\(\mathcal{D}_{T}^{TPre}\)); 6\(U^{S}=topK(\mathcal{D}_{i}^{SPE},key=\omega_{i}^{S})\); 7\(U_{T}=topK(\mathcal{D}_{i}^{TPre},key=omega_{i}^{T})\); 8\(CU^{S}\)=MHSA(\(U^{S}\)); 9\(CU^{T}\)=MHSA(\(U^{T}\)); 10/* Train \(SDT\) and \(TDT\) with transfer learning */ 11\(H_{SM}^{G},P_{S},CU_{SM}=SDTM(CU_{S})\); 12\(H_{TM}^{G},P_{T},CU_{TM}=SDTM(CU_{T})\); 13\(H_{SC}^{G},\hat{\tau}_{S}=SDTC(CU_{S},CU_{SM})\); 14\(H_{TC}^{G},\hat{\tau}_{T}=TDTTC(CU_{T},CU_{TM})\); 15\(\mathcal{L}_{mar}=mar(H_{SM}^{G},H_{TM}^{G})+mar(H_{SC}^{G},H_{TC}^{G})\); 16\(\mathcal{L}_{marignal}=conditional(P_{S},P_{T})\); 17\(\mathcal{L}_{huber}=huber(\hat{\tau}_{S},\tau_{S})+huber(\hat{\tau}_{T},\tau_{ T})\); 18\(\mathcal{L}=L_{huber}+\mathcal{L}_{cond}+\mathcal{L}_{mar}\); 19\(minimize(\mathcal{L})\); 20 21 end for ``` **Algorithm 1**Pretraining Phase of PPT #### 3.2.2 Prompt Tuning Phase The pretrained DTs are trained on the pretraining datasets \(\mathcal{D}_{pre}\), which does not include the dataset collected from the target subject system \(\mathcal{D}_{T}\). To acquire expertise in the target subject system \(\Sigma\), supervised learning with a sufficient dataset collected from \(\Sigma\) is required. For this purpose, we employed the fine-tuning technique in our previous work RISE-DT [7]. However, recent research has shown that prompt tuning can be even more effective [18]. Hence, in PPT, we employ prompt tuning to enhance its overall performance. Prompt tuning involves designing prompts to test a pretrained model's ability to distinguish between the source and target domain data. The feedback from the test helps the model learn more about the salient features in the target domain, potentially leading to improved performance. A typical prompt tuning phase has three steps: prompt template designing, answer generation, and answer mapping [19], as depicted in Figure 4. **Step 1: Prompt template designing.** Cloze-style prompts are a well-studied technique, where certain parts of the data are masked, leaving a blank for the model to fill in. At each time point \(i\), we collect a L-lengthed sequence of CPS data \(x_{i},x_{i-1},x_{i-2},...,x_{i-L+1}\). We generate a prompt template by masking the time interval \(\tau_{i}\) for the current time point \(i\) and the previous one \(i-1\) (denoted as "_IMASK J_" in Figure 4). Using this template, we generate a positive prompt, where we fill in the true time interval \(\tau_{i-1}^{T}\) for time point \(i-1\) from the target domain (denoted as \(\tau_{T}\) in Figure 4), and a negative prompt, where we fill in the same blank with the time interval predicted with SDTC (denoted as \(\tau_{i-1}^{S}\) in Figure 4). **Step 2: Answer generation.** We ask the target DTC (TDTC) to fill in the blank in both positive and negative prompts. We hypothesize that SDTC captures knowledge about the source domain, while TDTC captures knowledge about the target domain. Therefore, we expect to make accurate predictions on the positive prompt while making noticeable errors when predicting the negative prompt, as it has been "tampered" by SDTC. As shown in Figure 4, TDTC fills in the two prompts, yielding a positive answer (denoted as _Positive Answer_ in Figure 4) and a negative answer (denoted as _Negative Answer_ in Figure 4). We calculate the Huber loss by comparing these two answers with the actual time interval \(W\tau_{i}\) using Equation 35. Note that we reverse the sign of the Huber loss for the negative prompt by multiplying it by \(-1\) because we assume a well-adapted TDTC should be able to distinguish between source and target domain data. This approach helps the model learn the salient features of the target domain and improves its performance, for instance, in predicting waiting times for elevator passengers in the target domain for our elevator case study and predicting the time-to-collision in the target domain for the ADS case study. \[\mathcal{L}_{prompt}=(\tau-\tau_{+})^{2}-(\tau-\tau_{-})^{2} \tag{35}\] **Step 3: Answer mapping.** Compared to fine-tuning, prompt tuning can reduce or even obviate the need for extra model extensions. In prompt tuning, downstream task predictions are acquired by mapping prompt answers to the prediction space. In our context, we do not need to perform such mapping since the positive prompt prediction can be considered as TTE analysis directly. Algorithm 2 describes the prompt tuning phase in pseudo-code. For each time point \(i\), we consider not only the current data point but also history data points within a time window of \(K\). We generate positive and negative prompts with the help of the prompt generator (Lines 3-5). SDT and TDT make predictions with \(\mathcal{D}_{S}\) and the latest \(\omega\)\(\mathcal{D}_{T}\) data, respectively (Lines 6-9). In Lines 10-12, TDTC fills in the positive prompt and negative prompt. We calculate the prompt loss function as in Equation 35, and optimize the parameters of DTs stochasticly (Lines 13-14). ## 4 Experiment Design In Section 4.1, we introduce four research questions, followed by detailing the case studies in Section 4.2. Section 4.3 shows the evaluation metrics, while Section 4.4 introduces the statistical tests employed. Finally, we introduce the settings and execution environment in Section 4.5. Figure 4: PPT’s prompt tuning. Note that we only display components related to prompt tuning and omit the detailed DT structure and UQ strategies for brevity since it is identical to that in the pretraining phase. ### Research Questions (RQs) In this paper, we plan to answer four RQs as follows. * **RQ1**: How effective is PPT in TTE analysis, as compared to RISE-DT? * **RQ2**: How efficient and effective is transfer learning? * **RQ3**: Does UQ help to improve the performance of transfer learning? If so, which UQ method is the best? * **RQ4**: Is prompt tuning effective and efficient for improving the performance of transfer learning? RQ1 aims to compare PPT with the baseline, i.e., RISE-DT in TTE analysis. RQ2-RQ4 dissect PPT and assess the cost-effectiveness of introducing transfer learning, UQ, and prompt tuning to it. Specifically, RQ2 evaluates the efficiency and effectiveness of transfer learning by comparing the performance of PPT with/without transfer learning (denoted as w/o TL). With RQ3, we study the impact of using or not using UQ on the performance of PPT (denoted as w/o UQ) and select the most suitable UQ method from three UQ methods: CS score, Bayesian method and ensemble methods (denoted as CS, BUQ and EUQ, respectively). With RQ4, we plan to assess the improvement brought about by prompt tuning by comparing PPT with and without prompt tuning (denoted as w/o PT). ### Case Studies #### 4.2.1 Orona Elevator System **Orona elevator system** was studied in our prior work, where we evaluate RISE-DT with 11 versions dispatchers \(d_{*},d_{1},d_{2},...,d_{10}\), and two traffic templates, i.e., Lunchpeak \(\Gamma_{L}\) and Uppeak traffic templates \(\Gamma_{U}\). Notice the dispatcher \(d*\) denotes the best dispatcher and \(d_{1:10}\) denotes ten previous versions. In this work, we acquire 20 more dispatchers \(d_{11},d_{12},...,d_{30}\) for a more comprehensive evaluation of PPT. In total, we have access to 62 different subject systems \(\Sigma_{1}^{E}=\langle d_{*},\Gamma_{L}\rangle,\Sigma_{2}^{E}=\langle d_{*}, \Gamma_{L}\rangle,\Sigma_{3}^{E}=\langle d_{1},\Gamma_{L}\rangle,...,\Sigma_{ 2}^{E}=\langle d_{30},\Gamma_{L}\rangle,\Sigma_{33}^{E}=\langle d_{1},\Gamma_ {U}\rangle,...,\Sigma_{62}^{E}=\langle d_{30},\Gamma_{U}\rangle\). We collect 62 datasets \(\mathcal{D}_{1},..,\mathcal{D}_{62}\) from these subject systems by performing simulation on Elevate, a commercial simulator used by Orona to test their dispatchers in software in the loop simulation environment. These 62 subject systems can be categorized into four types of subject systems: _LunchBest_ (or _LunchWorse_) denoting that the elevator dispatcher with the highest performance (or sub-par) operates during lunch rush (12:15 - 13:15 p.m.); _UpBest_ (or _UpWorse_) representing that the best (or an under-performing) elevator dispatcher operates during the morning rush hour (8:30 a.m. - 9:30 a.m.). **Evolution dataset construction**, in this case study, encompasses four types of evolutions: (1) \(LunchBest\to UpBest\);(2) \(UpBest\to LunchBest\); (3) \(LunchWorse\to LunchBest\); (4) \(UpWorse\to UpBest\). (1) and (2) are dispatcher-variant evolutions where the dispatcher has undergone changes in the evolution. (3) and (4) are traffic-variant evolutions, where the source and target subject systems only differ in traffic templates, while the elevator dispatcher remains unchanged. #### 4.2.2 Autonomous Driving Systems Dataset **ADS dataset** is taken from DeepScenario [20]- an open-source dataset containing 33530 driving scenarios. These scenarios were generated with different strategies (e.g., greedy search and reinforcement learning) to achieve various objectives (e.g., reducing time-to-collision and distance to obstacles). Each driving scenario is characterized by the properties and behaviors of the self-driving car under study (known as the ego vehicle) and other objects in the driving environment, such as pedestrians and other cars (known as NPC vehicles). One example can be described as "A red BoxTruck is overtaking the ego vehicle and maintaining the lane." DeepScenario provides us with 19 features of the ego and NPC vehicles with regard to their speed, location, rotation, etc. The complexity of driving scenarios differs in terms of the number of NPCs involved. For example, a driving scenario without any NPC vehicle is much less challenging for the ADS of the ego vehicle to make decisions as compared to a scenario with NPC vehicles around. We acquired two datasets with different complexity levels from DeepScenario. We name the dataset with fewer NPC vehicles on average as the _Simple_ dataset and the one with more NPC vehicles as the _Complex_ dataset. Their descriptive statistics are given in Table 1. **ADS evolution dataset construction** encompasses the bidirectional evolution between the _Simple_ and _Complex_ datasets, i.e., \(Simple\to Complex\) and \(Complex\to Simple\). ### Metrics We introduce the metrics to evaluate the predictive performance in Section 4.3.1 and efficiency metrics in Section 4.3.2. Metrics for assessing the UQ methods are introduced in Section 4.3.3. In Section 4.4, we present the statistical tests used in the evaluation. #### 4.3.1 Predicative Performance Evaluation Metrics TTE analysis is essentially a regression prediction task. In this study, we prefer to use Huber loss because, unlike Mean Squared Error (MSE), Huber loss does not heavily penalize data points that deviate significantly from the rest, thus making the prediction model more robust in handling outliers in the data. Huber loss is calculated using the following formula [21], which involves two conditions: \[L_{\delta}(y,f(x))=\begin{cases}\frac{1}{2}(y-f(x))^{2}&\text{for }|y-f(x)|\leq \delta,\\ \delta\cdot|y-f(x)|-\frac{1}{2}\delta^{2}&\text{otherwise.}\end{cases} \tag{36}\] In this equation, \(y\) is the true value, \(f(x)\) is the predicted value, and \(\delta\) is a hyperparameter that controls the transition between the loss for small and large residuals. #### 4.3.2 Efficiency Performance Evaluation Metrics We evaluate the efficiency with training time spent by the pretraining and prompt tuning phases of PPT process. We denote \(\mathcal{D}^{S}\) as the source dataset and \(\mathcal{D}^{T}\) as the target dataset for transfer learning. PPT's pretraining is executed on the pretraining dataset \(\mathcal{D}^{pre}=\{\langle\mathcal{D}^{SPE}_{1},\mathcal{D}^{TPre}_{1}, \rangle,\langle\mathcal{D}^{SPE}_{2},\mathcal{D}^{TPre}_{2}\rangle,..., \langle\mathcal{D}^{SPE}_{N},\mathcal{D}^{TPre}_{N}\rangle\}\) of \(N\) individual transfers. We determine the convergence time for one transfer with Equation 37, where \(time_{early\_stopping\_end}\) denotes the point at which early stopping occurs (i.e., no improvement for five consecutive epochs), while \(time_{start}\) signifies the commencement point of training. \[time_{convergence}=time_{early\_stopping\_end}-time_{start} \tag{37}\] \begin{table} \begin{tabular}{c|c c c c c c c} \hline \hline Dataset & Mean & Std & Min & Q1 & Q2 & Q3 & Max \\ \hline Easy & 4.81 & 3.59 & 0 & 2 & 4 & 7 & 21 \\ Difficult & 6.51 & 3.96 & 0 & 3 & 6 & 9 & 36 \\ \hline \hline \end{tabular} \end{table} Table 1: Descriptive statistics of the number of NPC vehicles in the _Simple_ and _Difficult_ datasets. Q1, Q2 and Q3 denote 25%, 50% and 75% quantiles. _Pretraining time_ is computed by aggregating the convergence times for \(N\) individual transfers with Equation 38. \[time_{pretrain}=\sum_{i=1}^{N}time_{convergence}(\mathcal{D}_{i}^{SPre},\mathcal{D}_{i}^{TPre}) \tag{38}\] _Prompt tuning time_ is determined by the convergence time on source dataset \(\mathcal{D}_{S}\) and target dataset \(\mathcal{D}_{T}\), as defined by Equation 39. \[time_{proproptuning}=time_{convergence}(\mathcal{D}^{S},\mathcal{D}^{T}) \tag{39}\] #### 4.3.3 UQ Method Evaluation Metrics **UQ Effectiveness Metric.** We compare samples selected by each UQ method with Precision@K [22]; Let \(l_{A}\) and \(l_{B}\) denote the samples selected by method A and method B, respectively and _Precision@K_ measures to what extent \(l_{A}\) and \(l_{B}\) overlap in the top \(K\) samples (Equation 40). \[\text{Precision@K}=\frac{overlap(l_{A},l_{B})}{K} \tag{40}\] **UQ Efficiency Metric** measures the efficiency of a UQ method as the total time \(\tau_{UQ}\) required for sample selection. We denote \(\tau_{UQ}\) required by CS score, Bayesian and ensemble UQ as \(\tau_{CS}\), \(\tau_{BUQ}\) and \(\tau_{EUQ}\), respectively. ### Statistical testing To counteract the inherent variability associated with training neural networks, we conducted each experiment 30 times. Subsequently, we employed the Mann-Whitney U test [23] to investigate the statistical significance of observed improvements of PPT over the baseline RISE-DT. This was done for all pair-wise comparisons within each RQ. The baseline assumption or null hypothesis presumes no significant distinction between PPT and RISE-DT under comparison. If this null hypothesis is dispelled, we deduce that they are not equivalent. We choose the significance level as 0.01; thus, \(p-value<0.01\) denotes a significant improvement \(\Delta\). As recommend in [23], we chose Vargha and Delaney's A12 as the measure of effect size. This metric illustrates the probability of _PPT_ outperforming _RISE-DT_. If the A12 value exceeds 0.5, we can infer that _PPT_ is more likely to yield superior results compared to _RISE-DT_, and vice versa. We consider the effect size in the range \([0.56,0.64)\) as _Small_\(\downarrow\), \([0.64,0.71)\) as _Medium_\(\rightarrow\), and \([0.71,1]\) as _Large_\(\uparrow\). ### Settings and Execution Assigning hyperparameter values manually can potentially introduce bias. To mitigate this issue, we carried out a 10-fold cross-validation process to select optimal hyperparameters. This involved partitioning the dataset into 10 sequential segments, using the first nine for training and the final one for validation. Due to the difference in complexity, we set the hyperparameters differently for the elevator and ADS subject systems. We present some key values in Table 2 \begin{table} \begin{tabular}{c|c|c} \hline \hline Parameter & For Elevator System & For ADS \\ \hline d\_model & 16 & 128 \\ batch size & 1 & 1 \\ n\_heads & 1 & 32 \\ \hline dim\_feedforward & 128 & 1024 \\ n\_layers & 1 & 24 \\ proj\_dim & 32 & 128 \\ \hline \hline \end{tabular} \end{table} Table 2: Hyperparameter values for PPT. d_model, n_heads, dim_feedforward, n_layers denote the hidden dimension, number of heads, feedforward network dimension, and number of the MHSA modules in the transformer. proj_dim represents the dimension of the projection module in the transfer learning component. Our code is written in Python with Pytorch 2.0 library [24]. CS score is calculated with Uncertainty Toolbox [25]. We execute our code on a national, experimental, heterogeneous computational cluster called eX3. This node contains 2x Intel Xeon Platinum 8186, 1x NVIDIA V100 GPUs. ## 5 Results and Analysis In this section, we answer each RQ. A replication package of PPT is provided here for reference 1. Footnote 1: [https://github.com/qhml/ppt](https://github.com/qhml/ppt) ### RQ1 - PPT's Overall Effectiveness RQ1 aims to evaluate the overall effectiveness of PPT in TTE analysis by comparing it to RISE-DT. The Huber loss results are shown in Table 3, which includes the results of both the elevator and ADS case studies. In the case study, we find PPT outperforms RISE-DT in both traffic-variant evolutions (i.e., \(UpBest\to LunchBest\) and \(LunchBest\to UpBest\)) and dispatcher-variant evolutions i.e., \(LunchWorse\to LunchBest\) and \(UpWorse\to UpBest\)). The minimum improvement is 5.70, for the case \(LunchBest\to UpBest\)), while the maximum improvement is 9.62 for the case of \(UpWorse\to UpBest\). The average improvement reaches 7.31 in all four evolutions. In the ADS case study, we observe larger improvements compared to RISE-DT. The average improvement in this case study is 12.58. Table 4 presents the statistical testing results of comparing PPT with RISE-DT. In the elevator case study, we find that the improvements in the traffic-variant evolutions are significant (\(p-value<0.01\)) with strong effect sizes (\(A12>0.71\)). The majority of the improvements in the dispatcher-variant evolutions are significant (21 out of 30) and the effect sizes are mostly strong (17 out of 30 for \(LunchWorse\to LunchBest\) and 20 out of 30 for the case \(UpWorse\to UpBest\)). In the ADS case study, we observe significance and very strong effect sizes (close to 1) in both improvements. \begin{table} \begin{tabular}{c|c c c} \hline \hline & Evolution & p-value & A12 \\ \hline \multirow{5}{*}{Elevator} & UpBest\(\rightarrow\)LunchBest & \(\Delta\) & 0.828 (\(\uparrow\)) \\ \cline{2-4} & LunchBest\(\rightarrow\)UpBest & \(\Delta\) & 0.774 (\(\uparrow\)) \\ \cline{2-4} & LunchWorse\(\rightarrow\)LunchBest & \(\Delta\times 21\) & \(\downarrow\times 4;\rightarrow\times 3;\uparrow\times 17\) \\ \cline{2-4} & UpWorse\(\rightarrow\)UpBest & \(\Delta\times 21\) & \(\downarrow\times 2;\rightarrow\times 1;\uparrow\times 20\) \\ \hline \multirow{3}{*}{ADS} & Simple\(\rightarrow\)Complex & \(\Delta\) & 0.993 (\(\uparrow\)) \\ \cline{2-4} & Complex\(\rightarrow\)Simple & \(\Delta\) & 0.927 (\(\uparrow\)) \\ \hline \hline \end{tabular} \end{table} Table 4: Mann-Whitney statistical test results and A12 effect size of comparing RISE-DT and PPT. \(\Delta\) denotes a significant testing result. \(\downarrow,\rightarrow,\uparrow\) represent small, medium, and large effect sizes, respectively. \begin{table} \begin{tabular}{c|c c c c} \hline \hline & Evolution & RISE-DT & PPT & Difference \\ \hline \multirow{5}{*}{Elevator} & UpBest\(\rightarrow\)LunchBest & 87.89 & 80.14 & 7.75 \\ \cline{2-5} & LunchBest\(\rightarrow\)UpBest & 109.49 & 103.79 & 5.70 \\ \cline{2-5} & LunchWorse\(\rightarrow\)LunchBest & 89.11 & 82.95 & 6.16 \\ \cline{2-5} & UpWorse\(\rightarrow\)UpBest & 114.88 & 105.26 & 9.62 \\ \cline{2-5} & **Average** & 100.35 & 93.03 & **7.31** \\ \hline \multirow{3}{*}{ADS} & Simple\(\rightarrow\)Complex & 230.26 & 215.85 & 14.41 \\ \cline{2-5} & Complex\(\rightarrow\)Simple & 118.81 & 108.06 & 10.75 \\ \cline{1-1} \cline{2-5} & Average & 174.54 & 161.95 & 12.58 \\ \hline \hline \end{tabular} \end{table} Table 3: Huber loss of TTE analysis and statistical testing results. The “Difference” column shows the difference between RISE-DT and PPT. We conclude that PPT is effective in TTE analysis in both elevator and ADS case studies, averagely outperforming RISE-DT by 7.31 and 12.58, respectively. Improvements are significant and have large effect sizes in both case studies. ### RQ2 - Transfer Learning Effectiveness and Efficiency With RQ2, we aim to investigate the contribution of the transfer learning in PPT. Table 5 presents the experiment results of comparing PPT and PPT without transfer learning (denoted as "w/o TL"). We find that the Huber loss increases sharply after removing transfer learning. Specifically, in the elevator case study, the average increase reaches 21.32, with a minimum increase of 12.96 for the case \(LunchWorse\to LunchBest\). In the ADS case study, the increases in Huber loss are even higher with an average value of 28.26. We also investigated the efficiency of transfer learning as depicted in Table 6. In this table, we report the training time of transfer learning, comprising the pretraining phase (denoted as "Pre") and prompt tuning phase (denoted as "PT"). We find that pretraining consumes marginally more time compared to prompt tuning. The average pretraining times in the elevator and ADS case study are 70 hours and 97.5 hours, respectively. Whereas the prompt tuning time in these two cases is merely 2.77 hours and 5.05 hours, respectively. Such results are expected since the pretraining dataset is larger than the prompt tuning dataset. Moreover, the pretraining phase only requires a single execution before the transfer learning, making the large pretraining time acceptable for the production environment. We conclude that transfer learning is effective in both elevator and ADS case studies. Removing transfer learning from PPT leads to surges in Huber loss, i.e., 21.32 and 28.26 hours in the elevator and ADS case studies. The majority of the time cost of transfer learning is spent in the pretraining phase, which we believe acceptable as one only needs to pretrain PPT once. \begin{table} \begin{tabular}{c|c c c c} \hline \hline & Evolution & Pre & PT & Total \\ \hline \multirow{4}{*}{Elevator} & UpBest\(\rightarrow\)LunchBest & 76h & 2.9h & 78.9h \\ & LunchBest\(\rightarrow\)UpBest & 76h & 4.1h & 80.1h \\ \cline{2-5} & LunchWorse\(\rightarrow\)LunchBest & 61h & 2.5h & 63.5h \\ \cline{2-5} & UpWorse\(\rightarrow\)UpBest & 67h & 1.6h & 68.6h \\ \cline{2-5} & Average & 70h & 2.77h & 72.77h \\ \hline \multirow{2}{*}{ADS} & Simple\(\rightarrow\)Complex & 98h & 5.1h & 103.1h \\ & Complex\(\rightarrow\)Simple & 97h & 5.0h & 102h \\ \cline{2-5} & Average & 97.5h & 5.05h & 102.05h \\ \hline \hline \end{tabular} \end{table} Table 6: Time cost of each training phase in PPT. "Pre", "PT" and "Total" denote the time cost for the pretraining phase, prompt tuning phase, and sum of both. \begin{table} \begin{tabular}{c|c c c c} \hline \hline & Evolution & w/o TL & PPT & Difference \\ \hline \multirow{4}{*}{Elevator} & UpBest\(\rightarrow\)LunchBest & 95.91 & 80.14 & 15.77 \\ & LunchBest\(\rightarrow\)UpBest & 132.80 & 103.79 & 29.01 \\ \cline{2-5} & LunchWorse\(\rightarrow\)LunchBest & 95.91 & 82.95 & 12.96 \\ \cline{2-5} & UpWorse\(\rightarrow\)UpBest & 132.80 & 105.26 & 27.54 \\ \cline{2-5} & Average & 114.35 & 93.03 & 21.32 \\ \hline \multirow{4}{*}{ADS} & Simple\(\rightarrow\)Complex & 241.19 & 215.85 & 25.34 \\ \cline{2-5} & Complex\(\rightarrow\)Simple & 139.23 & 108.06 & 31.17 \\ \cline{1-1} \cline{2-5} & Average & 190.21 & 161.95 & 28.26 \\ \hline \hline \end{tabular} \end{table} Table 5: Huber loss of PPT and PPT without transfer learning (denoted as "w/o TL"). Column “Difference” represents the difference between PPT and “w/o TL” ### RQ3 - UQ Effectiveness and efficiency RQ3 aims to evaluate the effectiveness and efficiency of UQ by comprehensively assessing its influence on TTE analysis, selected samples, and time cost. **UQ's Influence on TTE Analysis.** To highlight the effectiveness of UQ, we compare PPT and PPT without UQ (denoted as "w/o UQ") in Table 7. In the elevator case study, we see an average increase of 4.08 after removing UQ from PPT. The maximum increase is 7.42 for case \(UpBest\to LunchBest\). In the ADS case study, we find the Huber loss boost from 215.85 to 224.80 for case \(Simple\to Complex\) and from 108.06 to 115.25 for case \(Complex\to Simple\). We also compare three UQ methods (i.e., CS score, Bayesian and Ensemble UQ, denoted as CS, BUQ and EUQ) in terms of Huber loss in Table 8. In the elevator case study, we find EUQ tends to be the most effective UQ method, achieving the lowest Huber loss in all evolutions except for \(LunchBest\to UpBest\). However, the difference between CS and EUQ is nominal with a maximum of 1.59 (105.26-103.67) for case \(UpWorse\to UpBest\). In the ADS case study, CS beats EUQ and shows the lowest Huber loss for both evolutions: \(Simple\to Complex\) and \(Complex\to Simple\). **UQ's Influence on Samples Selected.** To compare the three UQ methods, we look into samples selected by the UQ methods. We calculate the Precision@K metrics to demonstrate the overlaps among the methods and the results are shown in Table 9. The precision@1 (denoted as P@1) and precision@3 (denoted as P@3) are all 100% in each evolution in the elevator and ADS case studies, indicating that the top 1 and top 3 samples selected by one UQ method are always selected by the other two methods. The precision@10 metric (denoted as P@10) gives lower results. In the elevator case study, the precision@10 metric remains 100% in the traffic-variant evolutions (i.e., \(UpBest\to LunchBest\) and \(LunchBest\to UpBest\)), indicating the top 10 samples selected by one UQ method are also selected by the other two. As for the dispatcher-variant evolutions, the precision@10 results are still high (\(\geq 0.93\)), though not 100%. In the ADS case study, the precision@10 results are higher than 82%, implying approximately 8 out of the top 10 samples selected by one UQ method are also selected by the other two. \begin{table} \begin{tabular}{c|c c c c} \hline \hline & Evolution & w/o UQ & PPT & Difference \\ \hline \multirow{4}{*}{Elevator} & UpBest\(\rightarrow\)LunchBest & 87.56 & 80.14 & 7.42 \\ & LunchBest\(\rightarrow\)UpBest & 105.09 & 103.79 & 1.30 \\ & LunchWorse\(\rightarrow\)LunchBest & 87.04 & 82.95 & 4.09 \\ \cline{2-5} & UpWorse\(\rightarrow\)UpBest & 108.76 & 105.26 & 3.50 \\ & **Average** & **97.11** & **93.04** & 4.08 \\ \hline \multirow{4}{*}{ADS} & Simple\(\rightarrow\)Complex & 224.80 & 215.85 & 8.95 \\ & Complex\(\rightarrow\)Simple & 115.25 & 108.06 & 7.19 \\ \cline{1-1} \cline{2-5} & Average & 170.02 & 161.95 & 8.07 \\ \hline \hline \end{tabular} \end{table} Table 7: Huber loss of PPT and PPT without UQ (denoted as ”w/o UQ”). Column ”Difference” represents the difference between PPT and ”w/o UQ” \begin{table} \begin{tabular}{c|c c c c} \hline \hline & Evolution & CS & BUQ & EUQ \\ \hline \multirow{4}{*}{Elevator} & UpBest\(\rightarrow\)LunchBest & 80.14 & 85.23 & **79.40** \\ \cline{2-5} & LunchBest\(\rightarrow\)UpBest & 103.79 & **101.8** & 105.20 \\ \cline{2-5} & LunchWorse\(\rightarrow\)LunchBest & 82.95 & 85.79 & **80.09** \\ \cline{2-5} & UpWorse\(\rightarrow\)UpBest & 105.26 & 113.28 & **103.67** \\ \hline \multirow{2}{*}{ADS} & Simple\(\rightarrow\)Complex & **215.85** & 219.44 & 216.10 \\ \cline{2-5} & Complex\(\rightarrow\)Simple & **108.06** & 112.90 & 110.60 \\ \hline \hline \end{tabular} \end{table} Table 8: Huber loss for TTE analysis with different UQ methods. CS, BUQ, and EUQ denote CS score, Bayesian UQ, and Ensemble UQ, respectively. **UQ's Time Cost.** To assess the efficiency of UQ methods, we calculate the time cost for each UQ method in the elevator and ADS case studies. We find that the time cost for ADS (127.04 seconds on average) is much higher than that for the elevator case study (56.40 seconds on average). CS score spends the least time (denoted as \(\tau_{CS}\)), while the ensemble UQ method (denoted as \(\tau_{EQU}\)) takes the most time to perform UQ in both case studies. We conclude that CS score is the most efficient UQ approach compared to ensemble and Bayesian UQ, while retaining comparable effectiveness in TTE analysis. Hence we use CS score as the UQ method in PPT. ### RQ4 - Prompt Tuning Effectiveness and Efficiency RQ4 aims to demonstrate the effectiveness and efficiency of prompt tuning. We first compare PPT with PPT without prompt tuning (denoted as "w/o" PT) as shown in Table 11. In the elevator case study, PPT outperforms PPT without prompt tuning by 4.89 on average. The maximum Huber loss reduction is 6.17 for the case \(UpBest\to LunchBest\). In the ADS case study, we find similar reductions in both evolutions of \(Simple\to Complex\) and \(Complex\to Simple\) with an average of 3.14. As for efficiency, we compare prompt tuning (denoted as PT) with fine-tuning (denoted as FT) and report the time cost in Table 12. Notice that fine-tuning is used in RISE-DT. We find both fine-tuning and prompt tuning times in the ADS case study are higher than those in the elevator case study. However, there is no dominating choice in terms of time cost since the time spent on fine-tuning and prompt tuning is quite close. Fine-tuning takes less time for cases \begin{table} \begin{tabular}{c|c c c|c} \hline \hline Case Study & \(\tau_{CS}\) & \(\tau_{BUQ}\) & \(\tau_{EQU}\) & Average \\ \hline Elevator System & **10.27s** & 77.00s & 81.92s & 56.40s \\ ADS & **25.81s** & 156.02s & 199.3s & 127.04s \\ \hline \hline \end{tabular} \end{table} Table 10: Time cost of the three UQ methods. \(\tau^{CS}\), \(\tau^{BUQ}\) and \(\tau^{EUQ}\) denote the time cost for CS score, Bayesian and ensemble UQ methods. \begin{table} \begin{tabular}{c|c c c} \hline \hline & Evolution & w/o PT & PPT & Difference \\ \hline \multirow{5}{*}{Elevator} & UpBest\(\rightarrow\)LunchBest & 86.31 & 80.14 & 6.17 \\ & LunchBest\(\rightarrow\)UpBest & 109.07 & 103.79 & 5.28 \\ \cline{1-1} & LunchWorse\(\rightarrow\)LunchBest & 88.65 & 82.95 & 5.70 \\ & UpWorse\(\rightarrow\)UpBest & 107.68 & 105.26 & 2.42 \\ \cline{1-1} \cline{2-5} & Average & 97.93 & 93.03 & 4.89 \\ \hline \multirow{3}{*}{ADS} & Simple\(\rightarrow\)Complex & 219.25 & 215.85 & 3.40 \\ & Complex\(\rightarrow\)Simple & 110.94 & 108.06 & 2.88 \\ \cline{1-1} \cline{2-5} & Average & 165.10 & 161.95 & 3.14 \\ \hline \hline \end{tabular} \end{table} Table 11: Huber loss of TTE analysis and TTE analysis without prompt tuning. Column ”Difference” shows the difference between PPT and PPT w/o PL. \begin{table} \begin{tabular}{c|c c c c} \hline \hline & Evolution & P@1 & P@3 & P@10 \\ \hline \multirow{5}{*}{Elevator} & UpBest\(\rightarrow\)LunchBest & 100\% & 100\% & 100\% \\ & LunchBest\(\rightarrow\)UpBest & 100\% & 100\% & 100\% \\ & LunchWorse\(\rightarrow\)LunchBest & 100\% & 100\% & 95\% \\ & UpWorse\(\rightarrow\)UpBest & 100\% & 100\% & 93\% \\ \hline \multirow{3}{*}{ADS} & Simple\(\rightarrow\)Complex & 100\% & 100\% & 82\% \\ & Complex\(\rightarrow\)Simple & 100\% & 100\% & 86\% \\ \hline \hline \end{tabular} \end{table} Table 9: Precision@K results for samples selected by the UQ methods. ”P@1”, ”P@3” and ”P@10” denote Precision@1, Precision@3 and Precision@10. \(UpBest\to LunchBest\) and \(LunchWorse\to LunchBest\) in the elevator case study and \(Complex\to Simple\) in the ADS case study. We conclude that prompt tuning effectively reduces Huber loss in TTE analysis, and the time cost is approximately on the same level as fine-tuning. ### Threats to Validity **Construct Validity** concerns whether the metrics we choose can reflect the quality of TTE analysis. We know that other metrics can be used to evaluate a regression task like TTE analysis, such as MSE and RMSE. We choose the Huber loss instead of MSE or RMSE because of its robustness against outliers. Unlike MSE and RMSE, which are sensitive to outliers, Huber loss is a smoothed metric that can provide a more stable evaluation of PPT. **Internal Validity** refers to the extent to which the cause-and-effect relationship aiming to be established in our study is not due to other factors. One possible threat lies in the choices of hyperparameters, which might introduce biases into the experiment. To alleviate this issue, we performed a 10-fold cross-validation to select the hyperparameters automatically. **Conclusion Validity** pertains to the validity of the conclusions. PPT is a neural network-based method, which tends to introduce randomness into the experiments. To reduce the influence of randomness, we repeat each experiment 30 times and perform statistical testing to draw more significant conclusions. **External Validity** is about to what extent PPT can generalize to other domains. We design PPT to be applicable for TTE analysis in CPSs. To demonstrate the generalizability of PPT, we evaluate PPT with datasets collected from two different real-world domains, namely elevator and ADS. ## 6 Related work We discuss the related work from five aspects: CPS safety and security in Section 6.1, DT in CPSs in Section 6.2, transfer learning in Section 6.3, UQ in Section 6.4 and prompt tuning in Section 6.5. ### Cyber-physical Systems Security and Safety CPSs inherently bear susceptibilities that originate from both physical and cyber dimensions. To ameliorate these risks, numerous security and safety improvement methodologies have been suggested [26, 27, 28, 29]. The multifaceted and heterogeneous nature of CPS permits adversaries to launch attacks from various points of entry, such as physical devices [30, 31, 32], cyber networks [33, 34, 35], or even a combination of both [36]. This adds to the challenge of guaranteeing comprehensive system safety and security for CPS, especially given environmental uncertainties, security breaches, and physical device errors [37]. With the growing application of deep learning for enhancing CPS security and safety [38, 39, 40], a critical bottleneck experienced by researchers and practitioners is the high cost of collecting data and, in some cases, the infeasibility of obtaining labeled data for real-world CPSs. This shortage of labeled data forms a roadblock for the effective training of deep learning models, which often require sufficient data generated from the subject system - a condition that cannot \begin{table} \begin{tabular}{c|c c c c} \hline \hline & Evolution & FT & PT & Difference \\ \hline \multirow{4}{*}{Elevator} & UpBest\(\rightarrow\)LunchBest & 2.7h & 2.9h & -0.2h \\ \cline{2-5} & LunchBest\(\rightarrow\)UpBest & 4.1h & 4.1h & 0h \\ \cline{2-5} & LunchWorse\(\rightarrow\)LunchBest & 2.4h & 2.5h & -0.1h \\ \cline{2-5} & UpWorse\(\rightarrow\)UpBest & 1.8h & 1.6h & 0.2h \\ \cline{2-5} & Average & 2.75h & 2.77h & -0.02h \\ \hline \multirow{2}{*}{ADS} & Simple\(\rightarrow\)Complex & 5.3h & 5.1h & 0.2h \\ \cline{2-5} & Complex\(\rightarrow\)Simple & 4.9h & 5.0h & -0.1h \\ \cline{2-5} & Average & 5.1h & 5.05h & 0.1h \\ \hline \hline \end{tabular} \end{table} Table 12: Time cost of fine tuning and prompt tuning. Column ”Difference” shows the difference between the time cost of fine-tuning and prompt tuning. always be ensured. To combat this, we propose PPT in this paper, a deep learning methodology tailored to tackle the issue of scarce labeled CPS data by incorporating transfer learning and prompt tuning. ### Digital Twins in Cyber-physical Systems DT technologies facilitate real-time synchronization with CPSs [41, 42, 4, 43, 5, 44, 45]. For instance, Becue et al. [41] proposed to use DTs for analyzing the appropriate engineering of CPSs under attack scenarios. Eckhart et al. [46] incorporated rules into DTs to determine whether an attacker could compromise programmable logic controllers. Bitton et al. [42] recommended conducting tests on a DT as a safer alternative to testing real CPS. Furthermore, Damjanovic-Behrendt [47] employed DTs for the privacy assessment of actual smart car systems. These examples attest to the remarkable advantages of DT technologies. However, to the best of our understanding, our work is unique in its emphasis on the evolution and development of DTs. ### Transfer Learning Transfer learning involves four predominant strategies. The first, known as the _model control strategy_, implements transfer learning at the model tier. For instance, Duan et al. [48] introduced the Domain Adaptation Machine (DAM) which utilizes data from several source domains, constructs a classifier for each, and employs regularizers to maintain the final model's complexity. The second strategy, the _parameter control strategy_, operates assuming that a model's parameters embody the knowledge it has assimilated. For instance, Zhuang et al. [49] proposed directly sharing parameters between the source and target models in the context of text classification. The _model ensemble strategy_ is the third approach, where transfer learning is achieved by amalgamating various source models. For instance, Gao et al. [50] trained several weak classifiers with different model structures on multiple source domains and determined the final model based on a weighted vote from these weak classifiers. Lastly, _deep learning transfer techniques_ facilitate knowledge transfer between two deep learning models by aligning corresponding layers from source and target models. Zhuang et al. [49] proposed a transfer learning method with autoencoder that aligns reconstruction, distribution, and regression representations. This method was later expanded by Tzeng et al. [51], who introduced an adaptation layer. Long et al. [52] took it a step further by aligning multiple layers in their Deep Adaptation Networks model. In summary, early strategies such as model and parameter control performed knowledge transfer using intuitive methods such as adding regularizers and sharing parameters. Their performance is comparable to the more recent model ensemble and deep learning transfer techniques. Model ensemble is particularly efficient when dealing with multiple heterogeneous source domains [53], although it demands substantial computing resources. Deep learning transfer techniques are apt for transferring knowledge between two neural network models. Given that PPT is a neural network-based DT, we follow this latter research trend, aligning the representation of the GRU layer and the prediction layer. ### Uncertainty Quantification Numerous UQ methods are derived from Bayesian methods. For example, Wang et al. [54] suggested using probability theory to interpret the parameters of neural networks. Later, Srivastava et al. [55] incorporated Monte Carlo dropout as a regularization term for calculating prediction uncertainty, eliminating the need for posterior probability computation. In a further development, Salakhutdinov et al. [56] proposed a stochastic gradient Markov chain Monte Carlo (SG-MCMC) method, which only necessitates estimating the gradient on small mini-batch sets, significantly reducing computational load compared to direct posterior distribution estimation. Neural networks have also been employed for posterior distribution estimation, such as the variational autoencoder (VAE) proposed by Ghosh et al. [57], featuring both an encoder and decoder based on neural network structure. Other notable UQ techniques include deep Gaussian processes [58] and ensemble-based UQ [59]. Several open-source UQ tools exist for practical implementation. For instance, Uncertainty Wizard [60] is a TensorFlow Keras plugin that supports commonly used quantification methods, including Bayesian and ensemble-based methods. Similarly, Uncertainty Toolbox [25], built on Pytorch, provides common Bayesian and ensemble UQ methods, alongside additional metrics such as calibration, sharpness, and accuracy. The availability of UQ methods has fostered their applications in various application domains. For instance, Catak et al. [61] proposed NIRVANA validating deep learning model predictions based on MC dropout. Regarding uncertainty-aware analyses, Han et al. [62] presented approaches to systematically classify uncertainties based on the Cynefin framework and evaluated the robustness of industrial elevator systems based on the results of uncertainty classification. Zhang et al. [63, 64, 65] proposed a series of methods for specifying, modeling and quantifying uncertainties in CPSs and testing CPSs under environmental uncertainties. ### Prompt Tuning Prompt tuning, a burgeoning field in the machine learning domain, has attracted significant attention in recent years. Multiple techniques have been proposed to enhance the effectiveness of prompt tuning. At the foundational level, several works have explored using prompts for language models. For example, Brown et al. [8] proposed GPT-3, which utilizes prompts to facilitate natural language processing tasks without any explicit supervision. Inspired by this, Shin et al. [66] proposed AutoPrompt, an automated process to discover efficient prompts for language models. In terms of prompt selection strategies, a line of work focuses on generating diverse and effective prompts. Liu et al. [18] proposed P-tuning, a method incorporating trainable continuous prompts into pre-trained models, which showed impressive improvements on multiple benchmark datasets. Though prompt tuning is an exciting and active area of research, with various strategies, toolkits, and applications being proposed, there is no prior work that has focused on using prompt tuning in DT construction and evolution. In our work, we take advantage of advances in this field to develop our prompt-based learning method, aligning it with the needs of our application domain. ## 7 Conclusion and Future Work In this paper, we propose a novel method PPT to evolve the DTs for Time-to-Event prediction in CPSs. To alleviate the data scarcity problem, we utilize transfer learning to transfer knowledge across different subject systems, with the help of uncertainty quantification and prompt tuning. We evaluate PPT on two CPSs, namely an elevator system and an autonomous driving system (ADS). The experiment results show that PPT is effective in TTE analysis in both elevator and ADS case studies, averagely outperforming the baseline by 7.31 and 12.58, respectively, in terms of Huber loss. Further analysis into transfer learning, uncertainty quantification, and prompt tuning demonstrate their individual contribution to reducing the Huber loss. In the future, we plan to investigate more prompt tuning techniques by exploring other prompt designing methods. We are also interested in applying our method in other CPSs, such as power grids and railway systems. ## Acknowledgment Qinghua Xu is supported by the security project funded by the Norwegian Ministry of Education and Research. The work is also partially supported by the Horizon 2020 project ADEPTNESS (871319) funded by the European Commission and the Co-tester project (No. 314544) funded by the Research Council of Norway. The experiment has benefited from the Experimental Infrastructure for Exploration of Exascale Computing (eX3), which is financially supported by the Research Council of Norway under contract 270053.
2309.03458
Thermophysical Model Development for Hera Mission to Simulate Non-Gravitational Acceleration on Binary Asteroid
The surface temperature of an asteroid is fundamental information for the design of an exploration mission and the interpretation of scientific observations. In addition, the thermal radiation of the asteroid causes a non-gravitational acceleration that induces secular changes in its orbit and spin. We have been developing a numerical calculation library for simulating the dynamics and thermophysics of asteroids. The asteroid dynamical simulator, \texttt{Astroshaper}, can calculate the temperature distribution based on a 3-dimensional shape model of an asteroid and predict the non-gravitational acceleration. In recent years, asteroid exploration missions such as Hayabusa2 and Hera have been equipped with thermal infrared imagers. The asteroid thermography can provide the thermal properties of the surface material of the target bodies. The functionality of thermophysical modeling in \texttt{Astroshaper} contributes to simulating the thermal environment on the asteroids, estimating the thermal properties, and predicting the dynamical evolution controlled by the non-gravitational effects.
Masanori Kanamaru, Tatsuaki Okada, Hiroki Senshu, Hirohide Demura, Naru Hirata, Yuto Horikawa, Giacomo Tommei
2023-09-07T02:45:06Z
http://arxiv.org/abs/2309.03458v1
Thermophysical Model Development for Hera Mission to Simulate Non-Gravitational Acceleration on Binary Asteroid ###### Abstract The surface temperature of an asteroid is fundamental information for the design of an exploration mission and the interpretation of scientific observations. In addition, the thermal radiation of the asteroid causes a non-gravitational acceleration that induces secular changes in its orbit and spin. We have been developing a numerical calculation library for simulating the dynamics and thermophysics of asteroids. The asteroid dynamical simulator, Astroshaper, can calculate the temperature distribution based on a 3-dimensional shape model of an asteroid and predict the non-gravitational acceleration. In recent years, asteroid exploration missions such as Hayabusa2 and Hera have been equipped with thermal infrared imagers. The asteroid thermography can provide the thermal properties of the surface material of the target bodies. The functionality of thermophysical modeling in Astroshaper contributes to simulating the thermal environment on the asteroids, estimating the thermal properties, and predicting the dynamical evolution controlled by the non-gravitational effects. Asteroid 65803 Didymos, Binary asteroid, Thermophysical model, Yarkovsky effect, YORP effect pacs: + Footnote †: These authors contributed equally to the work of the University of Texas at Austin, Austin, TX, USA + Footnote †: These authors contributed equally to the work of the University of Texas at Austin, Austin, TX, USA + Footnote †: These authors contributed equally to the work of the University of Texas at Austin, Austin, TX, USA ## Nomenclature \(A_{\rm B}\) : : Albedo at visible wavelength \(A_{\rm TH}\) : Albedo at thermal radiation wavelength \(a\) : Area of a facet, \(\rm m^{2}\) \(C_{p}\) : Heat capacity at constant pressure, \(\rm J/kg/K\) \(c_{0}\) : Speed of light in vacuum, \(\rm m/s\) \(df\) : Thermal force on a facet, N \(E\) : Sum of emittance of scattered light and thermal radiation from a facet, \(\rm W/m^{2}\) \(E_{\rm cons}\) : \(E_{\rm out}\) / \(E_{\rm in}\) \(E_{\rm in}\) : Energy incident on an asteroid, W \(E_{\rm out}\) : Energy emitted from an asteroid, W \(F_{\rm rad}\) : Energy flux by thermal radiation from surrounding facets, \(\rm W/m^{2}\) \(F_{\rm scat}\) : Energy flux by scattered light from surrounding facets, \(\rm W/m^{2}\) \(F_{\rm sum}\) : Energy flux by direct sunlight, \(\rm W/m^{2}\) \(F_{\rm total}\) : Total energy flux into a facet, \(\rm W/m^{2}\) \(f\) : View factor between two facets \(k\) : Thermal conductivity, \(\rm W/m/K\) \(\bar{n}\) : Normal vector of a facet \(r\) : Position vector, m \(T\) : Temperature, K \(t\) : Time, s \(z\) : Depth, m \(\alpha\) : Thermal force on an asteroid, N \(\Gamma\) : Thermal inertia, \(\rm J\cdot m^{-2}\)-K\({}^{-1}\)-s\({}^{-0.5}\) (tiu) \(\varepsilon\) : Emissivity \(\theta\) : Tilt angle of a facet \(\rho\) : Density, \(\rm kg/m^{3}\) \(\sigma\) : Stefan-Boltzmann constant, \(\rm W/m^{2}/K^{4}\) \(\tau\) : YORP torque on an asteroid, \(\rm N\cdot m\) Subscripts \(\rm Didy\) : Didymos \(\rm Dimo\) : Dimorphos, the satellite of Didymos \(i\) : Index of a facet of a shape model \(j\) : Index of a facet visible from facet \(i\) ## 1 Introduction ### Thermophysical modeling of an asteroid Therrophysical modeling (TPM) is a numerical simulation to obtain temperature distribution on the surface of an asteroid. TPM plays a vital role in a small-body mission's science and engineering aspects as follows. * TPM simulates the thermal environment around the asteroid that is critical for a proximity operation and a touchdown operation to the surface. * It is possible to map the asteroid's thermal inertia and surface roughness by comparing TPM and thermal infrared spectroscopy or imaging.[1, 2, 3] * TPM can predict the non-gravitational acceleration on the asteroid induced by anisotropic thermal radiation. The changes in orbit and rotation of asteroids due to thermal radiation are known as the Yarkovsky and YORP effects, respectively.[4, 5] * The orbit evolution by the Yarkovsky effect is also important for assessing the risk of asteroid impact on Earth in planetary defense.[6, 7] * Changes in surface temperatures may cause material ejection from the asteroid and comet nuclei.[8] * Thermal radiation pressure from the asteroid's surface affects the trajectory of the spacecraft or the ejecta particle in the vicinity of the asteroid.[9, 10] * Thermal radiation causes a bias in the infrared spectra of the asteroid. To interpret the spectra at \(\sim 3\) um or longer wavelength, removing this "thermal tail" is necessary.[11] ### Hera mission to explore a binary asteroid DART and Hera are planetary defense missions to a binary asteroid with a satellite.[12, 13] The DART spacecraft successfully impacted Dimorphos, a satellite of the asteroid Didymos, in September 2022.[14] The momentum transfer efficiency by the DART impact was estimated from the change in the mutual orbit period of the binary asteroid.[15, 16] The Hera spacecraft is scheduled to rendezvous with Didymos and Dimorphos in December 2026 to observe in detail the crater formed by the DART impact.[13] Japan's team led by the Institute of Space and Astronautical Science (ISAS) is developing a thermal infrared imager (TIRI) onboard the Hera spacecraft. TIRI is the successor to the thermal infrared imager (TIR) on Hayabusa2, with higher sensitivity and resolution and six band filters for mid-infrared spectroscopy. Asteroid thermography by TIRI will provide us with the thermal inertia or density of the boulders and gravel that make up the target asteroids, which is essential for assessing the efficiency of the asteroid deflection experiment by DART. ### Development of thermophysical models for single/binary asteroids Several thermophysical models have been developed for single asteroids. One of the most elaborate models is the Advanced Thermophysical Model (ATPM), including the effect of small-scaled surface roughness.[17] We have been developing a numerical simulator for the dynamics and thermophysics of asteroids, Astroshapper. This simulator was originally developed for YORP prediction of asteroid Ryugu, a target asteroid of the Hayabusa2 mission.[18] Astroshapper is being developed as an open-source project in the Julia programming language at GitHub[1]. We hereby report on the functionality of thermophysical modeling implemented in the AsteroidThermoPhysicalModels.jl package[2], one of the sub-modules of Astroshapper. Some sample codes for TPM simulation are also available in the repository of Astroshapper-example[3]. We have extended the capabilities of TPM for a single asteroid to apply to a binary asteroid for interpreting the TIRI imagery of Didymos and Dimorphos. ## 2 TPM Functionality of Astroshapper The thermophysical model implemented in AsteroidThermoPhysicalModels.jl is based on a 3-dimensional shape model of an asteroid covered with a triangular mesh. As with other TPMs,[17] it can calculate the temperature distribution of the asteroid considering some fundamental thermophysical processes (See Table 1): the 3D shape of the asteroid, 1-dimensional heat conduction in the depth direction, shadowing by the local horizon (i.e., self-shadowing), and reabsorption of scattered light and thermal radiation by interfacing facets (i.e., self-heating). ### Heat conduction Our TPM code independently solves a 1-dimensional heat conduction equation on each shape model facet. Assuming that the thermal conductivity \(k\) is constant regardless of depth \(z\), the heat conduction equation becomes as follows. \[\frac{\partial T}{\partial t}=\frac{k}{\rho C_{p}}\frac{\partial^{2}T}{ \partial z^{2}} \tag{1}\] The boundary condition at the surface of the asteroid (\(z=0\)) is given by the balance of incident light to the facet, heat flux to the ground, and thermal radiation to space (See Fig. 1). \[F_{\text{total}}+k\left(\frac{\partial T}{\partial z}\right)_{z=0}=\varepsilon \sigma T_{z=0}^{4} \tag{2}\] where \(F_{\text{total}}\) is the total energy the facet absorbs at each time step. \[F_{\text{total}}=(1-A_{\text{B}})F_{\text{sun}}+(1-A_{\text{B}})F_{\text{scat }}+(1-A_{\text{TH}})F_{\text{rad}} \tag{3}\] The solar incident \(F_{\text{sun}}\) is an energy flux that considers the inclination of the facet concerning the sun's direction and the shadow of the surrounding facets. To consider the self-shadowing effect, \(F_{\text{sun}}\) is set to zero when the other facet blocks the solar ray. The facet exchanges the energy flux with other interfacing facets by reabsorbing the scattered light and thermal radiation. \(F_{\text{scat}}\) and \(F_{\text{rad}}\) are the energy fluxes from the interfacing facets to the facet in question in visible and thermal infrared wavelengths, respectively. In our model, single scattering is only considered. The additional flux due to multiple scattering is negligible for a low albedo body. The boundary condition of insulation is given so that the temperature gradient is zero at the bottom cell. \[\left(\frac{\partial T}{\partial z}\right)_{z\to\infty}=0 \tag{4}\] \begin{table} \begin{tabular}{l l} \hline \hline Asteroid 3D shape & Yes. Triangular mesh models can be imported. \\ Heat conduction & Yes. 1D heat conduction in the depth direction is considered. \\ Self-shadowing & Yes. \\ Self-heating & Yes. Only single scattering is considered. \\ Mutual-shadowing & Yes. \\ Mutual-heating & Yes. \\ Surface roughness & Not yet implemented. \\ \hline \hline \end{tabular} \end{table} Table 1: Thermophysics implemented in Astroshapper. Our TPM code solves the above equations by an explicit Euler scheme. The radiative boundary condition involving a nonlinear term at Eq. (2) is solved using the Newton-Raphson method. It is in the process of being implemented to allow users to select implicit and higher-order solvers. ### Non-gravitational force Non-gravitational perturbations on the asteroid can be calculated from the temperature distribution [19]. We assume that a facet of the shape model scatters and radiates isotropically (i.e., Lambertian scatterer and emitter). The total emittance of scattered light and thermal radiation emitted from facet \(i\) is \[E_{i}=A_{\mathrm{B}}F_{\mathrm{sun},i}+A_{\mathrm{B}}F_{\mathrm{scat},i}+A_{ \mathrm{TM}}F_{\mathrm{rad},i}+e\sigma T_{i}^{4} \tag{5}\] The force exerted by the photon pressure on the element can be expressed as follows. \[df_{i}=-\frac{2E_{i}a_{i}}{3c_{0}}\hat{\mathbf{h}}_{i}+\sum_{j\in\,\mathrm{visible \,from\, facet\,}i}\frac{E_{i}a_{i}}{c_{0}}f_{i,j}\,\frac{\mathbf{r}_{j}-\mathbf{r}_{i}}{| \mathbf{r}_{j}-\mathbf{r}_{i}|} \tag{6}\] The first term is a force component normal to the surface element. The coefficient \(-2/3\) is derived from the isotropic emittance. The second term represents the additional component due to the interaction with visible facets. The reabsorbed photons exert a force along the direction from facet \(i\) to facet \(j\) in proportion to the view factor \(f_{i,j}\). The view factor from facet \(i\) to facet \(j\) refers to the fraction of absorption by facet \(j\) to the emittance from facet \(i\)[19, 20]. \[f_{i,j}=\frac{\cos\theta_{i}\cos\theta_{j}}{\pi\,|\mathbf{r}_{j}-\mathbf{r}_{i}|^{2}} a_{j} \tag{7}\] where \(\theta_{i}\) and \(\theta_{j}\) are the angles between each normal vector and the line connecting the two facets, and \(d_{i,j}\) denotes the distance between the two facets. The summation of Eq. (6) should only be taken for facets seen from facet \(i\). In our code, the visible facets from each facet are searched and stored before the TPM is performed. The total force \(\alpha\) and torque \(\tau\) on the asteroid are obtained by integrating the thermal force over the entire surface. \[\alpha=\sum_{i}\left(\frac{\mathbf{r}_{i}}{|\mathbf{r}_{i}|}\cdot df_{i}\right)\frac {\mathbf{r}_{i}}{|\mathbf{r}_{i}|} \tag{8}\] \[\tau=\sum_{i}\mathbf{r}_{i}\times df_{i} \tag{9}\] The perturbation to the motion of the asteroid's center-of-mass causes the Yarkovsky drift in orbit, and the torque causes the YORP spin evolution. ### Binary and additional thermophysics Some additional thermophysics must be considered for a binary asteroid, as in Fig. (2). We utilized the functions of ray tracing for detecting local shadows on a single asteroid to simulate an eclipse by a pair of asteroids (i.e., mutual shadowing). Two types of eclipse events can occur: when the satellite's shadow falls on the primary asteroid and when the satellite enters the shadow of the primary. The primary and secondary asteroids exchange energy by thermal radiation and warm each other. This mutual heating effect is also implemented. The impact of the thermal infrared beaming by small-scaled surface roughness will be implemented in the future. ## 3 TPM for Binary Asteroid Didymos and Dimorphos ### Parameter setting We used the SPICE kernels and 3D shape models provided by the Hera mission for a thermophysical simulation of the binary asteroid Didymos and Dimorphos.11 The shape models used in this study are based on ground-based observations before the DART impact experiment. It should be noted that the shape of Dimorphos is approximated by an ellipsoid. Footnote 11: Shape models used in this study (Version: v140_20230731.001): * g.50677mm_radobj,dida_0000000000,v001.obj for Didymos * g.60655mm_radobj,didb_000000000,v01.obj for Dimorphos Available at [https://s2e2.cosmos.esa.int/bitbucket/projects/SPICE_KERNEL/repos/hera/browse](https://s2e2.cosmos.esa.int/bitbucket/projects/SPICE_KERNEL/repos/hera/browse). A thermal inertia of \(\Gamma=403\) tiu was given, corresponding to a typical value for an S-type asteroid [21]. Running TPM over tens of thermal cycles in advance is necessary to obtain a temperature distribution independent of initial conditions. In this study, TPM was performed for two months (from January 1st to March 1st, 2027) after temperatures of 0K were given at all facets of the shape models and all depth cells, corresponding to \(\sim 627\) rotations for Didymos and \(\sim 119\) mutual orbit cycles for Dimorphos. We confirmed that the calculation sufficiently converged in terms of the balance between the energy input and output on the surface of each asteroid, where \(E_{\mathrm{cons}}\) was greater than 0.98 at the final time step. We used the simulated temperature data for 24 hours on March 1st, 2027, for the later analysis. ### Temperature map of the binary asteroid The upper and middle panels of Fig. 3 show the temperature maps of Didymos and Dimorphos at the epochs of the mutual events, respectively. In the upper panel, Dimorphos cast the shadow around (\(20^{\circ}\)S,\(90^{\circ}\)W) of Didymos at 5:37 a.m. After Figure 1: Basic thermophysical processes on an asteroid. Figure 2: Thermophysics for a binary asteroid. The shape models of Didymos and Dimorphos based on ground-based observations are shown here. 5.96 hours or half of the orbit period of Dimorphos, one can observe Dimorphos hiding in the shadow of Didymos (middle panel). The lower panel shows the temperature changes over time at the points indicated by the blue dots on the above maps. It can be seen that rapid temperature drops of several tens of Ks occurred during the eclipse events. By observing the eclipse events in addition to diurnal thermal cycles, thermophysical properties corresponding to different depths can be investigated by TIRI. Because of the considerable uncertainty in the inclination of the mutual orbit, it will be turned out after Hera's rendezvous how frequently the eclipse events will occur. ### Non-Gravitational Effects on the binary asteroid Based on the above temperature distribution, we also calculated the thermal recoil force on each facet of the shape model. We integrated it over the surface to obtain non-gravitational force and torque on the binary asteroid. By averaging over several rotations, the torque components for rotational acceleration were estimated as \(\tau_{\mathrm{Dlym}}=0.19\) N \(\cdot\) m for Didymos and \(\tau_{\mathrm{Dlymo}}=-1.1\times 10^{-4}\) N \(\cdot\) m for Dimorphos. It suggests that the rotation of Didymos is accelerating at the so-called YORP time scale of \(4.1\times 10^{6}\) years, that is, a time to double the rotation speed. On the other hand, the negative acceleration of Dimorphos decelerates its rotation at a time scale of \(8.6\times 10^{4}\) years, reducing the rotation speed by half. ## 4 Discussion Generally, the resolution of a pre-arrival shape model is insufficient for YORP prediction sensitive to small-scale topography.[22] We must wait for Hera's rendezvous for a more precise prediction of YORP on Didymos and Dimorphos. The shape model of Dimorphos used in this study is an ellipsoid based on ground-based observations. The symmetrical shape should cancel out the thermal torque, but the asymmetry of the temperature distribution results in the non-zero torque. Cooling due to the eclipse is likely the cause of the negative acceleration on the satellite. The drastic temperature change may have the effects of expanding the mutual orbit of the binary asteroid and shortening its dynamical lifetime. ## 5 Conclusion We hereby reported on the asteroid dynamical simulator, Astroshaper. We have developed a thermophysical simulation for the Hera mission applicable to a binary asteroid. This tool is expected to contribute to the operation planning of TIRI and investigate the dynamics of the binary asteroid controlled by the non-gravitational effects. ## Acknowledgments This study was supported by the JSPS KAKENHI No. JP17H06459 (the _Aqua Planetology_ project) and No. JP22J00435/JP22KJ0728. This work was also supported by MEXT Promotion of Distinctive Joint Research Center Program Grant Number JPMXP0622717003. G. Tommei acknowledges the support from the Italian Space Agency (grant 2022-8-HH.0).
2309.14124
Emergence of hydrodynamics in expanding relativistic plasmas
I consider a simple set of equations that govern the expansion of boost-invariant plasmas of massless particles. These equations describe the transition from a collisionless regime at early time to hydrodynamics at late time. Their mathematical structure encompasses all versions of second order hydrodynamics. We emphasize that the apparent success of Israel-Stewart hydrodynamics at early time has little to do with ``hydrodynamics'' proper, but rather with a particular feature of Israel-Stewart equations that allows them to effectively mimic the collisionless regime.
Jean-Paul Blaizot
2023-09-25T13:23:53Z
http://arxiv.org/abs/2309.14124v1
# Emergence of hydrodynamics in expanding relativistic plasmas+ ###### Abstract I consider a simple set of equations that govern the expansion of boost-invariant plasmas of massless particles. These equations describe the transition from a collisionless regime at early time to hydrodynamics at late time. Their mathematical structure encompasses all versions of second order hydrodynamics. We emphasize that the apparent success of Israel-Stewart hydrodynamics at early time has little to do with "hydrodynamics" proper, but rather with a particular feature of Israel-Stewart equations that allows them to effectively mimic the collisionless regime. In this note, I consider an idealization of the early stages of a high-energy heavy-ion collision, where the produced matter expands longitudinally along the collision axis in a boost invariant fashion, undergoing the so-called Bjorken expansion [1]. The matter is supposed to occupy uniformly the plane transverse to the collision axis (the \(z\)-axis). The discussion will be based on the simple kinetic equation [2], \[\left[\partial_{\tau}-\frac{p_{z}}{\tau}\partial_{p_{z}}\right]f(\mathbf{p}, \tau)=-\frac{f(\mathbf{p},\tau)-f_{\mathrm{eq}}(p/T)}{\tau_{R}}, \tag{1}\] where \(f\) denotes a distribution function for massless particles, and the right-hand side is a collision term treated in the relaxation time approximation (\(f_{\mathrm{eq}}(p/T)\) is the local equilibrium distribution function). In the case of massless particles, the energy momentum tensor has two independent components, which can be identified to the energy density \(\varepsilon\) and the difference between the longitudinal and transverse pressures \(\mathcal{P}_{L}-\mathcal{P}_{T}\). These two quantities are special moments of the distribution function, \(\varepsilon=\mathcal{L}_{0}\) and \(\mathcal{P}_{L}-\mathcal{P}_{T}=\mathcal{L}_{1}\), where for any integer \(n\), we define [3] \[\mathcal{L}_{n}\equiv\int\frac{d^{3}\mathbf{p}}{(2\pi)^{3}}\,p^{2}P_{2n}(p_{z}/ p)f_{p}(t,\mathbf{x},\mathbf{p}), \tag{2}\] with \(P_{n}(x)\) a Legendre polynomial and \(p=|\mathbf{p}|\)1. Owing to the symmetries of the Bjorken expansion, the moments \(\mathcal{L}_{n}\) depend only on the proper time \(\tau=\sqrt{t^{2}-z^{2}}\). They obey the coupled equations [6] Footnote 1: These moments \(\mathcal{L}_{n}\), introduced in [3], are distinct from those most commonly used (see e.g. [4]). They also differ slightly from those used in [5]. Note that although the knowledge of the \(\mathcal{L}_{n}\) moments does not allow us to reconstruct from them the distribution function, they provide an exact description of the components of the energy-momentum tensor. \[\frac{\partial\mathcal{L}_{0}}{\partial\tau}= -\frac{1}{\tau}(a_{0}\mathcal{L}_{0}+c_{0}\mathcal{L}_{1})\,, \tag{3a}\] \[\frac{\partial\mathcal{L}_{1}}{\partial\tau}= -\frac{1}{\tau}(a_{1}\mathcal{L}_{1}+b_{1}\mathcal{L}_{0}+c_{1} \mathcal{L}_{2})-\frac{\mathcal{L}_{1}}{\tau_{R}}. \tag{3b}\] The coefficients, \(a_{0}=4/3\), \(a_{1}=38/21\), etc, are pure numbers whose values are fixed by the geometry of the expansion. The last term in Eq. (3b), proportional to the collision rate \(1/\tau_{R}\), isolates in a transparent way the effect of the collisions. Without this term, Eqs. (3) describe free streaming. In this regime, the moments evolve as power laws governed by the eigenvalues of the linear system. The collision term in Eq. (3b) produces a damping of \(\mathcal{L}_{1}\) and drives the system towards isotropy, a prerequisite for local equilibrium. When \(\mathcal{L}_{1}=0\), the system behaves as in ideal hydrodynamics \(\mathcal{L}_{0}\sim\tau^{-a_{0}}\). There is no contribution of the collision term in Eq. (3a) since collisions conserve energy. The \(\mathcal{L}_{n}\) moments have all the same dimension, that of the energy density. Eqs. (3) are the first in an infinite hierarchy of equations that couple \(\mathcal{L}_{n}\) to its nearest neighbours, \(\mathcal{L}_{n+1}\) and \(\mathcal{L}_{n-1}\). Thus, in Eqs. (3) \(\mathcal{L}_{1}\) is coupled to \(\mathcal{L}_{0}\) and \(\mathcal{L}_{2}\). After an appropriate treatment of \(\mathcal{L}_{2}\), Eqs. (3) yield an effective theory for \(\mathcal{L}_{0}\) and \(\mathcal{L}_{1}\), that is for the energy momentum tensor. In particular these equations contain "second order" hydrodynamics as a special limit. To see that, we express the moments in terms of the more familiar hydrodynamical variables. We call \(\mathcal{P}\) the equilibrium pressure (related to the energy density by the equation of state), and set \(\pi=-c_{0}\mathcal{L}_{1}\) with \(\pi\) the viscous pressure. Then, Eq. (3a) takes the form \[\frac{\mathrm{d}\varepsilon}{\mathrm{d}\tau}+\frac{\varepsilon+\mathcal{P}}{ \tau}=\frac{\pi}{\tau}. \tag{4}\] This equation translates the conservation of the energy momentum tensor, \(\partial_{\mu}T^{\mu\nu}=0\), for Bjorken flow. In ideal hydrodynamics, the viscous pressure is neglected (\(\mathcal{L}_{1}\to 0\)), and, for massless particles, \(\mathcal{P}=\varepsilon/3\). The solution of Eq. (4) is then \(\varepsilon(\tau)\sim\tau^{-4/3}\). By taking into account the viscous effects via the leading order constitutive equation \(\pi=4\eta/(3\tau)\), with \(\eta\) the shear viscosity, one obtains the Navier-Stokes equation: \[\frac{\mathrm{d}\varepsilon}{\mathrm{d}\tau}=-\frac{a_{0}}{\tau}\left( \varepsilon-\frac{\eta}{\tau}\right). \tag{5}\] An equation similar to Eq. (3b) was introduced by Israel and Stewart [7] in order to cure problems of the relativistic Navier-Stokes equation. In the present context, it takes the form of a relaxation equation for the viscous pressure \(\pi\), forcing it to relax towards its Navier-Stokes value \(4\eta/(3\tau)\) over a time scale \(\tau_{\pi}\): \[\partial_{\tau}\pi+\frac{a_{1}^{IS}}{\tau}\pi=-\frac{1}{\tau_{\pi}}\left(\pi -\frac{4\eta}{3\tau}\right). \tag{6}\] This equation reduces identically to Eq. (3b) after setting \(\tau_{\pi}=\tau_{R}\) and \(a_{1}^{IS}=a_{1}\). It can be verified that all second order formulations of hydrodynamics for the boost invariant system share the same mathematical structure as that encoded in the linear system (3), modulo the adjustment of the parameters \(b_{1}\) (or \(\eta\)), \(\tau_{R}\to\tau_{\pi}\) and \(a_{1}-a_{0}\to\lambda_{1}\), where \(\tau_{\pi}\) an \(\lambda_{1}\) may be viewed as second order transport coefficients (see [8] for a more complete discussion). To proceed further, it is convenient to define \[g(w)\equiv\frac{\tau}{\mathcal{L}_{0}}\frac{\partial\mathcal{L}_{0}}{ \partial\tau}=-1-\frac{\mathcal{P}_{L}}{\varepsilon}, \tag{7}\] where \(w\equiv\tau/\tau_{R}\). The quantity \(g(w)\) may be viewed as the exponent of the power laws obeyed by the energy density at early or late times (in both cases \(g(w)\) becomes constant). It is also a measure of the pressure asymmetry. In particular, the second relation, which follows easily from Eqs. (3), shows that in the free streaming regime where \(\mathcal{P}_{L}=0\), \(g=-1\), while in the hydrodynamical regime where \(\mathcal{P}_{L}=\varepsilon/3\), \(g=-4/3\). In terms of \(g(w)\) Eqs. (3) become a first order nonlinear ODE2 Footnote 2: An equation very similar to this one was considered in [9] \[w\frac{\mathrm{d}g}{\mathrm{d}\ln w}=\beta(g,w),\] \[-\beta(g,w)=g^{2}+\left(a_{0}+a_{1}+w\right)g+a_{1}a_{0}-c_{0}b_{ 1}+a_{0}w-c_{0}c_{1}\frac{\mathcal{L}_{2}}{\mathcal{L}_{0}}.\] Let us first ignore the term \(\mathcal{L}_{2}\). Then, in the absence of collisions, or for small \(w\), this non linear equation has two fixed points, that we refer to as unstable (\(g_{-}\)) and stable (\(g_{+}\)) free streaming fixed points, whose values coincide with the eigenvalues of the linear system (3) (with \(\mathcal{L}_{2}=0\)). Numerically, \(g_{+}=-0.929\), \(g_{-}=-2.213\). As discussed in [10] this fixed point structure is little affected when higher moments are taken into account, leading eventually to the exact values of the fixed points, respectively -1 and -2. In fact, to obtain an accurate description of the solution in the vicinity of a fixed point, it is enough to inject in Eq. (8) the value of \(\mathcal{L}_{2}\) in the vicinity of the corresponding fixed point, and this is known. For instance, near the stable free streaming fixed point \(\mathcal{L}_{n}/\mathcal{L}_{0}=A_{n}\), where \(A_{n}\) is a known number (e.g. \(\mathcal{L}_{2}/\mathcal{L}_{0}=3/8\)). The effect of the entire tower of higher moments can then be absorbed in a renormalisation of the parameter \(a_{1}\) of the two moment truncation: \[a_{1}\mapsto a_{1}^{\prime}=a_{1}+c_{1}\frac{A_{2}}{A_{1}}=\frac{31}{15}. \tag{9}\] With this value of \(a_{1}\), the stable free streaming fixed point is exactly reproduced, i.e., \(g_{+}=-1\). This fixed point structure continues to play a role when collisions are switched on [10]: The unstable fixed point moves to large negative values, while the stable fixed point \(g_{+}\) evolves adiabatically to the hydrodynamic fixed point, \(g_{*}=-4/3\). The location of this "pseudo fixed point" as \(w\) runs from \(0\) to \(\infty\) corresponds (approximately) to what has been dubbed "attractor" [9]. More precisely, the attractor is to be understood as the particular solution of Eq. (8), \(g_{\mathrm{att}}(w)\), that connects \(g_{+}\) as \(w\to 0\) to \(g_{*}\) as \(w\to\infty\). Such an attractor is made of three parts: the vicinities of the two fixed points, and the transition region. The two fixed points are associated with different, well identified, physics: one corresponds to hydrodynamics, the other to a collisionless regime. The vicinities of these fixed points can be described by viscous hydrodynamics for the first one, and perturbation theory for the second. The transition region requires information on both fixed points to be accurately accounted for. From this perspective, the often used terminology of "hydrodynamic attractor" appears misleading. The gradient expansion is divergent, and the full solution of the kinetic equation can be obtained in terms of trans-series [9]. In such trans-series, the first non trivial correction to the hydrodynamic gradient expansion requires information about the early time dynamics (for an analytic solution of the system with \({\cal L}_{2}=0\) see [11]). This information is necessary to control accurately the transition region between the two fixed points, that is to get a good description of the attractor. We have emphasized earlier the role of the higher moments in the determination of the free streaming fixed points, and indicated that in the vicinity of the stable fixed point this boils down to a renormalisation of the parameter \(a_{1}\). Within Israel-Stewart theory, changing \(a_{1}\) looks like changing a second order transport coefficient. However, in the vicinity of the hydrodynamic fixed point the gradient expansion yields \({\cal L}_{n>1}\simeq 1/\tau^{n}\), so that \({\cal L}_{2}\) does not affect the hydrodynamic fixed point, nor its leading order viscous correction. The correct interpretation of changing \(a_{1}\) is to put the stable free streaming fixed point at its right place, and this has a strong impact on Figure 1: Plot of the attractor solution for the pressure ratio \({\cal P}_{L}/{\cal P}_{T}\) as a function of \(w=\tau/\tau_{R}\). The dashed curve represents the solution of the Navier-Stokes equation. The curves labelled “IS Hydro”, “two moments”, “Kinetic-Hydro” correspond to different values of \(a_{1}\), respectively 4/3, 31/28, 31/15. From [8]. the whole attractor, except in the vicinity of the hydrodynamic fixed point. This is clearly illustrated in Fig. 1. It follows from this analysis that hydrodynamic behavior emerges where it is supposed to do so, namely when the collision rate becomes comparable to the expansion rate (i.e. when \(\tau\gtrsim\tau_{R}\)). The fact that Israel-Stewart equations apparently allow "hydrodynamics" to work at early time has little to do with hydrodynamics proper, but rather with the fact that the structure of Israel-Stewart equations is similar to that of the moments of the kinetic equations. Thus, they capture features of the collisionless regime (but only approximately, unless \(a_{1}\) is carefully adjusted - see in Fig. 1 the negative longitudinal pressure obtained when \(a_{1}\) differs from its proper value).
2309.04337
Superspin Chains Solutions from 4D Chern-Simons Theory
As a generalisation of the correspondence linking 2D integrable systems with 4D Chern-Simons (CS) gauge theory, superspin chains are realized by means of crossing electric and magnetic super line defects in the 4D CS with super gauge symmetry. The oscillator realization of Lax operators solving the RLL relations of integrability is obtained in the gauge theory by extending the notion of Levi decomposition to Lie superalgebras. Based on particular 3-gradings of Lie superalgebras, we obtain graded oscillator Lax matrices for superspin chains with internal symmetries given by $A(m-1\mid n-1)$, $B(m\mid n)$, $C(n)$ and $D(m\mid n)$
Youssra Boujakhrout, El Hassan Saidi, Rachid Ahl Laamara, Lalla Btissam Drissi
2023-09-08T14:05:04Z
http://arxiv.org/abs/2309.04337v2
# Superspin Chains Solutions from 4D Chern-Simons Theory ###### Abstract As a generalisation of the correspondence linking 2D integrable systems with 4D Chern-Simons (CS) gauge theory, superspin chains are realized by means of crossing electric and magnetic super line defects in the 4D CS with super gauge symmetry. The oscillator realization of Lax operators solving the RLL relations of integrability is obtained in the gauge theory by extending the notion of Levi decomposition to Lie superalgebras. Based on particular 3-gradings of Lie superalgebras, we obtain graded oscillator Lax matrices for superspin chains with internal symmetries given by \(A(m-1\mid n-1)\), \(B(m\mid n)\), \(C(n)\) and \(D(m\mid n)\). Keywords: superspin chains, super Lax operator, 4D Chern-Simons theory, Lie superalgebra decompositions, 3-grading ## 1 Introduction A newly discovered shortcut towards the realization and study of integrable systems is yielded by a four dimensional Chern-Simons gauge theory defined on the product of a topological real plane \(\Sigma\) and a holomorphic curve \(C\), by the field action [1]-[5] \[S_{4dCS}=\int_{\Sigma\times C}dz\wedge tr(\mathcal{A}\wedge d\mathcal{A}+\frac{ 2}{3}\mathcal{A}\wedge\mathcal{A}\wedge\mathcal{A}) \tag{1}\] This field theory is characterized by a complexified gauge symmetry \(G\), and a partial gauge connection with three bosonic components as \(\mathcal{A}=dx\mathcal{A}_{x}+dy\mathcal{A}_{y}+d\bar{z}\mathcal{A}_{\bar{z}}\) valued in the Lie algebra \(g\) of the gauge symmetry. Endowing this topological field theory with crossing line defects allows to build two-dimensional solvable lattice models in \(\Sigma\) and recover solutions and conserved quantities of these lower dimensional models [6]-[25]. The R-matrix describing the scattering of two particles' worldlines [26]-[29] is calculated from the 4D CS as the crossing of two Wilson lines characterized by electrical charges given by highest weights of \(G\). In this image, each Wilson line [30] is represented in the topological plane by a real curve assimilated to the worldline of an electrically charged particle whose internal quantum states are valued in some representation of \(g\) characterized by a highest weight \(\lambda\). Positions of these line defects in the complex \(C\) correspond to spectral parameters \(z_{i}\) that play a major role in Yang-Baxter equation and in the RTT realization of Yangian representations [31],[32]. The integrable XXX spin chain [33],[34] emerges in the 4D CS theory defined on \(\mathbb{R}^{2}\times C,\) as a set of parallel (vertical) Wilson lines sitting on the chain nodes and carrying degrees of freedom of the spins. The interaction between these spins is modelled by a horizontal 't Hooft line perpendicularly crossing the Wilson lines [7]. The 't Hooft line defect is a disorder operator [35]-[37] characterized by a magnetic charge equivalent to a coweight \(\mu\) of \(G;\) it acts like an auxiliary oscillatory space such that its intersection with a Wilson line at each node of the spin chain yields a Lax operator [38]. This operator is a basic ingredient of the Bethe Ansatz approach [39]-[42]; it operates on the quantum spaces and is a solution to the RLL equation underlying the integrable spin chain. In the Gauge theory formulation, the Lax operator is computed as the parallel transport of gauge fields past the 't Hooft line. This key result was demonstrated in [7] for the particular case where the magnetic charge is given by a minuscule coweight \(\mu\) of \(G.\) There, the authors linked the Levi decomposition of the Lie algebra \(g\) to the dispersion of the gauge field bundles above and under the 't Hooft line due the Dirac-like singularity induced by the presence of this magnetic operator. The particularity of the minuscule coweight is that it acts on the roots of \(g\) with the eigenvalues \(0,\pm 1\)[43] which decomposes the Lie algebra \(g\) into three subspaces as \(n_{-1}\oplus l_{\mu}\oplus n_{+1}\). This is a Levi decomposition of \(g\) where the Levi subalgebra \(l_{\mu}\) has charge \(0\) with respect to \(\mu,\) and the \(n_{\pm 1}\) are nilpotent subspaces given by modules of \(l_{\mu}\) and having charges \(\pm 1\)[44],[45]. These algebraic features play a major role in this investigation because for any 't Hooft line with minuscule magnetic charge \(\mu\) of \(G,\) the corresponding Lax operator can be simply computed by the general formula \(\mathcal{L}^{\mu}\left(z\right)=e^{X}z^{\mu}e^{Y},\) where \(X=X_{\alpha}b^{\alpha}\) and \(Y=Y^{\alpha}c_{\alpha}\) are elements of \(n_{+}\) and \(n_{-}.\) The adjoint action of the coweight \(\mu\) is defined by the branching rule of a representation \(R\) of \(g,\) carried by a Wilson line, resulting from the Levi decomposition. The oscillator structure of the phase space of this L-operator follows from the Levi decomposition properties. In this regard, we have for the classical coordinates of \(n_{\pm 1},\) the Poisson bracket \(\left\{b^{\alpha},c_{\beta}\right\}_{PB}=\delta_{\beta}^{\alpha}\) becoming commutators at the quantum level [7],[46]. Based on this approach, minuscule Lax operators for the simply laced \(A\) and \(D\) type bosonic spin chains were first realized in the CS theory in [7], and then in [12],[13], in agreement with solutions obtained using Yangian representations [47]-[49]. Lax operators of bosonic spin chains for non simply laced \(B\) and \(C\) type symmetries were recovered in [13] in accord with [50]. The power of this 4D CS/ Integrability correspondence allowed also to build solutions for exceptional spin chains, with internal symmetries described by the simply laced \(\rm e_{6}\) and \(\rm e_{7}\) algebras, which were lacking in the spin chain literature [51]. The missing exceptional \(\rm e_{8}\), \(\rm f_{4}\) and \(\rm g_{2}\) symmetries do not have minuscule coweights [43]. Regarding superspin chains with internal symmetry described by Lie superalgebras, the generalization of the 4D CS/ Integrability correspondence requires the equipment of a 4D Chern-Simons theory having super gauge symmetry with super line defects carrying bosonic and fermionic degrees of freedom. This extension was motivated in [46] by uplifting from the \(SL(m)\) to the \(SL(m|n)\) symmetry and by taking advantage of the resemblance of their algebraic structure. The super extensions of the Lax operators characterizing the \(sl(m|n)\) superspin chain were calculated in the framework of the dual \(SL(m|n)\) 4D CS by using a generalized formula similar to the bosonic \(\mathcal{L}^{\mu}\left(z\right)=e^{X}z^{\mu}e^{Y}\). However, due to the lack of the notion of minuscule coweight and Levi decomposition in the superalgebras literature, a Dynkin diagram's node cutting method was used in order to generate 3-gradings of the \(sl(m|n)\) Lie superalgebra. These decompositions of the \(sl(m|n)\) family have similar properties to the Levi decomposition such that the role of the minuscule coweight is played by the cutted node. This approach allowed to construct explicit super L-operators in terms of bosonic and fermionic oscillators of the phase space, in agreement with the super-spin chain literature using degenerate solutions of the graded Yang-Baxter equation [52]. In this paper, we follow a quite similar approach to [46] in order to build oscillator realizations of super Lax operators for integrable superspin chains classified by the basic \(ABCD\) Lie superalgebras. As for these Lie superalgebras, one has several super Dynkin diagrams depending on the number of fermionic roots and their ordering. Therefore, one distinguishes several varieties of the ABCD superspin chains due to their link to the super Dynkins. By considering a super Wilson line \(W_{\xi_{z}}^{\mathbf{R}}\) in a given super representation \(\mathbf{R}\) and a super 't Hooft \(\rm tH_{\gamma_{0}}^{\mu}\) with magnetic charge \(\mu\), we calculate the super L-operator describing their crossing. For the \(sl(m|n)\) symmetry, we derive the super L-operators for any the coweight \(\mu\) of any super Dynkin diagram of the \((m+n)!/m!n!\) possible graphs. We show that they agree with those calculated by using the super Yangian representations for reference. For the \(B(m|n)\), \(C(n)\) and \(D(m|n)\) superalgebras, we give a family of distinguished super L-operators corresponding to specific coweights that lead to Levi-like 3-gradings. The presentation is as follows: In section 2, we give basic tools of the 4D Chern-Simons theory with \(SL(m|n)\) gauge symmetry, and the realization of the \(sl(m|n)\) superspin chain by means of super line defects. We describe the super Lax operator construction for basic Lie superalgebras. In section 3, we use this construction to build solutions for the RLL equations of the \(sl(m|n)\) superspin chain, and compare with known results of the literature. Sections 4, 5 and 6 are respectively dedicated to the building of super Lax operators for superspin chains with \(B(m|n)\), \(C(n)\) and \(D(m|n)\) symmetries. We end with a conclusion and discussions. \(sl(m|n)\) superspin chain in 4D CS In this section, we consider the standard \(\text{A}_{\text{\tiny BF}}\)-family of superspin chains based on the \(sl(m|n)\) Lie superalgebra (\(m\neq n\)) to first introduce the basics of the present investigation, and second to complete partial results in literature with regards to the \(\text{A}_{\text{\tiny BF}}\) class [52]. The label bf refers to chains with bosonic and fermionic degrees of freedom. This family of integrable super systems is realized in the framework of the 4D Chern Simons gauge theory having the \(SL(m|n)\) super gauge group. The superspin chain families \(\text{B}_{\text{\tiny BF}}\), \(\text{C}_{\text{\tiny BF}}\) and \(\text{D}_{\text{\tiny BF}}\) to be studied in the forthcoming sections are realized in a similar fashion; and as such the algebraic basics of the line defects construction for all superchains are only detailed for the case of the \(\text{A}_{\text{\tiny BF}}\) super chain. To this end, it is interesting to recall that the \(\text{A}_{\text{\tiny BF}}\) special family of superchains generalizes the well known family of \(sl(n)\) spin chains termed below as the \(\text{A}_{\text{\tiny BOSE}}\) family. The generalised \(\text{A}_{\text{\tiny BF}}\) has two basic features: First, it is classified by the set of Lie superalgebras \(g_{\text{\tiny BF}}\) given by the bi-integer series \(\text{A}_{\text{\tiny BF}}^{m,n}\equiv sl(m|n)\) including \(sl(n)\) and \(sl(m)\) as bosonic subsectors. Second, for \(sl(m|n)\) one distinguishes several types of superchains versus one ordinary \(sl(n)\) chain in the bosonic case; this is due to the \(\mathbb{Z}_{2}\)-grading of the BF gauge symmetry to be commented later on. For example, given two positive defined integers \((m,n)\), one has \[N_{n,m}^{\text{\tiny Azr}}=\frac{(m+n)!}{m!n!} \tag{1}\] varieties of \(sl(m|n)\) superchains [53]. To perform this study, we begin by briefly describing the dual four-dimensional gauge theory with \(SL(m|n)\) local symmetry, and its line defects that allow for the superspin chain realization. Then, we introduce a general formula for the computation of super Lax operators directly from the superalgebra 3-gradings. ### 4D CS gauge theory with \(SL(m|n)\) symmetry The field action describing 4D Chern-Simons theory with super \(SL(m|n)\) gauge symmetry, living on the 4D manifold \(M_{4}=\mathbb{R}^{2}\times\mathcal{C}\) with \(\mathcal{C}\) taken as the projective \(\mathbb{CP}^{1}\), is written in terms of the supertrace of the CS 3-form [4] \[\mathcal{S}_{CS}^{sl_{(m|n)}}=\int_{\mathbb{R}^{2}\times\mathbb{CP}^{1}}dz \wedge str\left[\mathcal{A}\wedge d\mathcal{A}+\frac{2}{3}\mathcal{A}\wedge \mathcal{A}\wedge\mathcal{A}\right] \tag{2}\] The 1-form gauge potential \(\mathcal{A}\) is given by \(\mathcal{A}_{x}dx+\mathcal{A}_{y}dy+\mathcal{A}_{\bar{z}}d\bar{z}\) where we have dropped out the component \(\mathcal{A}_{z}dz\) because of the \(dz\) factor in the metric. It is valued in the \(sl(m|n)\) Lie superalgebra; it expands like \[\mathcal{A}=\sum_{\text{\tiny AB}}A^{\text{\tiny AB}}\mathcal{E}_{\text{\tiny AB }} \tag{3}\] where \(\mathcal{E}_{\text{\tiny AB}}\) are graded generators of \(sl(m|n)\) obeying the graded commutation relations \[[\mathcal{E}_{\text{\tiny AB}},\mathcal{E}_{\text{\tiny CD}}\}=\delta_{\text{\tiny BC }}\mathcal{E}_{\text{\tiny AD}}-(-)^{|\mathcal{E}_{\text{\tiny AB}}||\mathcal{E }_{\text{\tiny CD}}|}\,\delta_{\text{\tiny DA}}\mathcal{E}_{\text{\tiny CB}} \tag{4}\] with degree as \[|\mathcal{E}_{\text{\tiny AB}}|\equiv\deg\mathcal{E}_{\text{\tiny AB}}=|\text{ A}|+|\text{B}| \tag{5}\] The supertrace of the Chern-Simons 3-form in (2) is therefore written in terms of the graded metric \(g_{\text{\tiny ABCD}}=str(\mathcal{E}_{\text{\tiny AB}}\mathcal{E}_{\text{ \tiny CD}})\) and the constant structures \(f_{\text{\tiny ABCDEF}}=str(\mathcal{E}_{\text{\tiny AB}}\mathcal{E}_{\text{ \tiny CD}}\mathcal{E}_{\text{\tiny EF}})\) as follows \[str\left(\mathcal{A}\wedge d\mathcal{A}+\frac{2}{3}\mathcal{A}\wedge\mathcal{A }\wedge\mathcal{A}\right)=g_{\text{\tiny ABCD}}A^{\text{\tiny AB}}dA^{\text{ \tiny CD}}+\frac{2}{3}f_{\text{\tiny ABCDEF}}A^{\text{\tiny AB}}A^{\text{\tiny CD }}A^{\text{\tiny EF}} \tag{6}\] In order to realize lower dimensional integrable super systems in this CS theory, we introduce super line defects such as the Wilson \(W^{\boldsymbol{m}|\boldsymbol{n}}_{\xi_{z}}\)[3, 46, 54]. This topological defect is represented by a curve \(\xi_{z}\) in \(\mathbb{R}^{2}\), sitting in the position \(z\) in \(\mathbb{CP}^{1}\) along which propagate quantum super states. So, the \(W^{\boldsymbol{m}|\boldsymbol{n}}_{\xi_{z}}\) can be imagined as an electrically charged line defect, characterized by the fundamental representation \(\boldsymbol{m}|\boldsymbol{n}\) of the superalgebra \(sl(m|n)\) such that the electric charge is given by the corresponding highest weight. The topological \(W^{\boldsymbol{m}|\boldsymbol{n}}_{\xi_{z}}\) is defined as the supertrace of the holonomy of the gauge field around the curve \(\xi_{z}\) like \[W^{\boldsymbol{m}|\boldsymbol{n}}_{\xi_{z}}=str_{\boldsymbol{m}|\boldsymbol{n} }\left[P\exp\left(\oint_{\xi_{z}}\mathcal{A}\right)\right] \tag{7}\] with \(\mathcal{A}\) as in (3). Along with the Wilson \(W^{\boldsymbol{m}|\boldsymbol{n}}_{\xi_{z}}\), we also have a magnetically charged 't Hooft line [35]-[37] denoted here as \(\text{tH}^{\mu}_{\gamma_{z^{\prime}}}\). This is also a topological line defect which is implemented in 4D in term of a curve \(\gamma_{z^{\prime}}\) extending in the space \(\mathbb{R}^{2}\), and living at a point \(z^{\prime}\) in \(\mathbb{CP}^{1}\). The \(\text{tH}^{\mu}_{\gamma_{z^{\prime}}}\) is a disorder operator that carries a magnetic charge given by a coweight \(\mu\) of the \(SL(m|n)\) supergroup; and is identified with the parallel transport of gauge field bundles past the line. By taking \(\gamma_{z^{\prime}}\) as the x-axis in \(\mathbb{R}^{2}\) and \(z^{\prime}=0\), we can write the 't Hooft line observable as \[\mathcal{L}^{\mu}(z)=P\exp\left(\int_{y}\mathcal{A}_{y}(z)\right) \tag{8}\] where the transport of the gauge fields is measured from \(y<0\) to \(y>0.\) Actually, just like in the bosonic case [7], the Dirac singularity induced by the presence of the super \(\text{tH}^{\mu}_{\gamma_{0}}\) leads to a dispersion of the surrounding field bundles in the 4D CS theory. The gauge configuration is given by trivialized bundles in the regions \(U_{I}=\left\{y\leq 0,z=0\right\}\) and \(U_{II}=\left\{y\geq 0,z=0\right\}\) glued by a transition function on the intersection \(U_{I}\cap U_{II}=\gamma_{0}.\) The latter is equal to the local Dirac monopole singularity \(z^{\mu}\) near the line, while the trivial fields are given by holomorphic gauge transformations \(\mathfrak{g}_{I}(z)\), \(\mathfrak{g}_{II}(z)\) in \(G[[z]]\). The L-operator (8) is given by the following formula \[\mathcal{L}^{\mu}(z)=\mathfrak{g}_{I}(z)z^{\mu}\mathfrak{g}_{II}(z) \tag{9}\] Notice that the \(\mathrm{tH}_{\gamma_{0}}^{\mu}\) is in fact sitting at the end of a Dirac string linking it to another 't Hooft line living at \(z=\infty,y=0\) and having the opposite magnetic charge. This detail is omitted in the present inquiry and the 't Hooft line \(\mathrm{tH}_{\gamma_{0}}^{\mu}\) is to be understood as coupled to a \(\mathrm{tH}_{\gamma_{\infty}}^{-\mu}\). The phase space of this double line is given by holomorphic functions in \(G\left((z)\right)\) of the form (9) that verify the appropriate singularity constraints in \(z=0,\infty.\) These features are detailed in [7] and the appendix of [46]. ### Oscillator realization in 4D CS theory We move now to linking the gauge theory presented above to the integrable superchain systems we are concerned about here. We describe the super 4D CS/ superchain correspondence while focussing on \(sl(m|n)\) symmetry. We introduce the super Lax operator as a solution of the RLL equation characterizing these integrable super systems; and explain its interpretation and computation in the dual gauge theory. To begin, the super 4D CS/ superchain correspondence we are considering here is an extension of the well known bosonic 4D CS/ Integrability stipulating that integrable \(sl\left(n\right)\) spin chain is dual to 4D Chern-Simons theory with gauge group \(SL\left(n\right).\) This bosonic correspondence is also valid for other gauge symmetries \(G_{\mathrm{BOS}}\) given by Cartan classification of finite dimensional Lie algebras including \(A_{n}\), \(B_{n}\), \(C_{n}\), \(D_{n}\) and the exceptional ones. Together with this interesting result, it has been also conjectured that the bosonic _duality_ extends to superspin chains where several checks were found successful and suggestive [55],[46]. In this Fermi/Bose generalisation, the superchains are characterised by superspin representations of Lie superalgebras \(g_{\mathrm{BF}}\) splitting like \[g_{\mathrm{BF}}=g_{\bar{0}}\oplus g_{\bar{1}}\qquad,\qquad g_{\mathrm{BF}} \equiv g_{\mathrm{BOSE/FERMI}} \tag{10}\] with \(g_{\bar{0}}\) a bosonic Lie algebra and \(g_{\bar{1}}\) a module of it. Their dual descriptions are given by 4D Chern-Simons theories with super groups \(G_{\mathrm{BF}}=G_{\bar{0}}\times G_{\bar{1}}.\) As such, basic integrable super chains fall into families classified by the basic Lie superalgebras \(g_{\mathrm{BF}}\) as listed below [60],[61] \[\begin{array}{c|cc}g_{\mathrm{BF}}&\text{even part $g_{\bar{0}}$}&\text{ odd part $g_{\bar{1}}$}\\ \hline\hline A(m|n)&A_{m}\oplus A_{n}\oplus U(1)&(\bar{m},n)\oplus(m,\bar{n}) \\ \hline B(m|n)&B_{m}\oplus C_{n}&(2m+1,2n)\\ \hline C(n)&C_{n-1}\oplus U(1)&(2n-2)\oplus(2n-2)\\ \hline D(m|n)&D_{m}\oplus C_{n}&(2m,2n)\\ \hline\hline\end{array} \tag{11}\] In general, an integrable superspin chain with intrinsic structure given by a basic superlagebra \(g_{\mathrm{BF}}\) like \(sl(m|n)\) is dual to the 4D Chern-Simons theory with the corresponding super gauge group \(G_{\mbox{\tiny BF}}\) as \(SL(m|n).\) The field action \({\cal S}_{CS}^{g_{\mbox{\tiny BF}}}[{\cal A}]\) describing the dual integrable theory is given by eq(2) with 4D chern-Simons potential \({\cal A}={\cal A}_{x}dx+{\cal A}_{y}dy+{\cal A}_{\bar{z}}d\bar{z}\) with expansion as in eq(3). The extension of the bosonic gauge/ Integrability duality to graded symmetries \(G_{\mbox{\tiny BF}}\) is permitted by the implementation of the super line defects introduced previously. In this generalisation, the Wilson super line \(W_{\xi_{z}}^{\mbox{\scriptsize\boldmath$m|n$}}\) carries degrees of freedom that allow to describe the quantum states of a super chain atom (below super atom) with \(SL(m|n)\) internal symmetry [53]. The integrable \(sl(m|n)\) superchain with \({\cal N}\) super atoms is realised in the CS theory by placing \((i)\,L\) vertical Wilson lines \(W_{\xi_{z}^{i}}^{\mbox{\scriptsize\boldmath$m|n$}}\) at each node \(\nu_{i}\,(1\leq i\leq L)\) of the superspin chain such that \[\xi_{z}^{i}=\left(z_{i},x_{i},\mathbb{R}\right)\hskip 28.452756pt\mbox{ with}\hskip 28.452756pt\left\{\begin{array}{c}z_{i}=z\\ x_{i}<x_{i+1}\\ -\infty<y<\infty\end{array}\right. \tag{12}\] and \((ii)\) an horizontal 't Hooft line \(\mbox{tH}_{\gamma_{0}}^{\mu}\) with curve \(\gamma_{0}=\left(z_{0},\mathbb{R},y_{0}\right)\) that can be thought of as filling the x-axis (\(y_{0}=0\)) in \(\mathbb{R}^{2}\) and sitting at \(z_{0}=0\) in \(\mathbb{CP}^{1}\). These super line defects intersect in the topological plane \(\mathbb{R}^{2}\left(x,y\right)\) as depicted in **Figure 1**, where the 't Hooft line plays the role of a transfer matrix modeling the interactions between the electrically charged super atoms along the chain. Following the RLL realization, each intersection of an electric \(W_{\xi_{z}}^{\mbox{\scriptsize\boldmath$m|n$}}\) with the magnetic \(\mbox{tH}_{\gamma_{0}}^{\mu}\) in lattice system, yields the super Lax operator for the corresponding node of the superspin chain. This coupling operator acts on the tensor product of \(End(\mathbf{m|n})\) of the Wilson and the algebra A of functions in the phase space of the 't Hooft line; it can be simply labeled by the representation \(\mbox{\bf R}=\mathbf{m|n}\) and the coweight \(\mu\). The quantum integrability of this system is encoded in the RLL equation verified by the L-operator with matrix realisation \(L_{n}^{m}\left(z\right)\) obeying, \[R_{rs}^{ik}\left(z-w\right)L_{j}^{r}\left(z\right)L_{l}^{s}\left(w\right)=L_{ r}^{i}\left(w\right)L_{s}^{k}\left(z\right)R_{jl}^{rs}\left(z-w\right) \tag{13}\] where \(R_{rs}^{ik}\left(z-w\right)\) is the usual R-matrix of the Yang-Baxter equation. This RLL equation has a remarkable graphical representation given by **Figure 2**. Recall that Figure 1: Realization of an \(sl(m|n)\) superspin chain of \(L\) nodes in the fundamental representation using super line defects in the 4D Chern Simons. n the bosonic case, in particular for the integrable \(sl\left(n\right)\) chain, the explicit oscillator realization of the L-operator in the framework of 4D CS theory is worked out by identifying the gauge splitting in the factorization (9) with the Levi decomposition of the Lie algebra with respect to the coweight \(\mu\)[7]. In fact, by choosing a minuscule coweight [43] that acts on algebra elements with eigenvalues \(0,\pm 1\), the subspaces \(\mathbf{n}_{\pm}\) carrying charges \(\pm 1\) are in one to one with the \(\mathfrak{g}_{I}(z)\) and \(\mathfrak{g}_{II}(z)\) in (9), while the action of \(z^{\mu}\) is defined by the branching rule of the representation \(\mathbf{R}\). The Levi constraints \[\left[\mu,\mathbf{n}_{\pm}\right]=\pm\mathbf{n}_{\pm};\qquad\left[\mathbf{n}_{ +},\mathbf{n}_{-}\right]=\mu;\qquad\left[\mathbf{n}_{\pm},\mathbf{n}_{\pm} \right]=0 \tag{14}\] define the harmonic oscillators of the phase space of the L-operator. To investigate (9) for integrable \(sl(m|n)\) super chain with \(\mathcal{N}\) super atoms, one may be tempted to just generalise the construction done for the integrable \(sl\left(n\right)\) chain; but this poses a problem because the notion of Levi decomposition and minuscule coweight are not yet known for Lie superalgebras. We propose here to circumvent this difficulty by using superalgebra 3-gradings generated by the method of (extended) Dynkin diagram [59]. These decompositions \((i)\) have similar properties to Levi decompositions allowing to realize oscillators of the auxiliary phase space, \((ii)\) are motivated by results of the bosonic case where Levi decompositions of the \(ABCDE\) Lie algebras are also deduced by nodes cutting in the associated Dynkin diagram [13], and \((iii)\) are explicitly verified by known super chain results obtained for the \(sl(m|n)\) superchain by using Yangian algebra [46]. The Dynkin diagram cutting method is described in the series of papers [56]-[58] where the results are listed for all nodes of the Dynkin diagrams of basic superalgebras. In fact, given a super Dynkin diagram of a superalgebra \(g_{\textsc{bF}}\), one can determine all regular subalgebras \(g_{0}\) of \(g_{\textsc{bF}}\) by consecutive nodes cutting from the Dynkin (and extended) diagram. This yields decompositions of the superalgebra \(g_{\textsc{bF}}\) having a Figure 2: Graphic representation of the RLL equation in terms of intersecting line defects in \(SL(m|n)\) 4D CS theory. 5-grading form \[{\bf g}_{\mbox{\tiny BF}}={\bf g}_{-2}\oplus{\bf g}_{-1}\oplus{\bf g}_{0}\oplus{ \bf g}_{+1}\oplus{\bf g}_{+2} \tag{15}\] where \({\bf g}_{\pm k}\) with \(k=1,2\) are \(g_{0}\)-modules determined by means of representation techniques and where \[w({\bf g}_{+k})={\bf g}_{-k} \tag{16}\] with \(w\) being the standard antilinear anti-involutive mapping of the Lie superalgebra \(g_{\mbox{\tiny BF}}\). For the purposes of our study, we are only interested in the case where \({\bf g}_{\pm 2}=0\), i.e. in decompositions of Lie superalgebras that look like \[{\bf g}_{\mbox{\tiny BF}}={\bf g}_{-1}\oplus{\bf g}_{0}\oplus{\bf g}_{+1} \tag{17}\] These 3-gradings result from the cutting of specific nodes in the Dynkin diagram, and are analogous to Levi decompositions for bosonic Lie algebras because we have [56]-[58] \[[{\bf g}_{j},{\bf g}_{l}]={\bf g}_{j+l} \tag{18}\] for \(j,l=0,\pm 1.\) In what follows, we will interpret these values as Levi charges associated to the coweight of \({\bf g}_{\mbox{\tiny BF}}\) corresponding to the cutted node and acting as a "minuscule" coweight. We will use this interpretation to propose a generalized super Lax operator formula based on the 3-gradings (17). In fact, the possible 3-gradings of this type are classified for the family of basic Lie superalgebras as follows [56] \[\begin{array}{|c|c|c|}\hline{\bf g}_{\mbox{\tiny BF}}&{\bf g}_{0}&\dim{\bf g }_{-1}=\dim{\bf g}_{+1}\\ \hline A(m|n)&sl(k|l)\oplus sl(m-k|n-l)\oplus U(1)&(k+l)(m-k+n-l)\\ \hline B(m|n)&B(m-1|n)\oplus U(1)&2m+2n-1\\ \hline C(n)&C_{n-1}\oplus U(1)&2\,(n-1)\\ \hline&sl(1|n-1)\oplus U(1)&\frac{n(n+1)}{2}-1\\ \hline D(m|n)&D(m-1|n)\oplus U(1)&2\,(m+n-1)\\ \hline&sl(m|n)\oplus U(1)&\frac{(m+n)(m+n+1)}{2}-m\\ \hline\end{array} \tag{19}\] The general super Lax operator construction for any Lie superalgebra \({\bf g}_{\mbox{\tiny BF}}\) is described through the following steps : **A) 3-grading of the superalgebra \({\bf g}_{\mbox{\tiny BF}}\)** We consider the Lie superalgebra \({\bf g}_{\mbox{\tiny BF}}\) with a 3-grading (17) written as \[{\bf g}_{\mbox{\tiny BF}}={\bf N}_{+}\oplus{\boldsymbol{l}}_{\mu}\oplus{\bf N} _{-} \tag{20}\] and obtained by a node cutting from the Dynkin diagram \({\mathfrak{D}}[{\bf g}_{\mbox{\tiny BF}}]^{(\kappa)}\) such that \(\mu\) is the coweight of \({\bf g}_{\mbox{\tiny BF}}\) associated to the deleted node. This coweight acts as a "super minuscule" coweight and the three algebraic blocks in (20) are as described below: * The \({\boldsymbol{l}}_{\mu}\) is a regular Lie sub-(super)algebra of \(g_{\mbox{\tiny BF}}\) with elements carrying charge 0 with respect to the coweight \(\mu\); it plays the role of a Levi subalgebra. \[[\mu,{\boldsymbol{l}}_{\mu}]=0\] (21) In fact, the \(\boldsymbol{l}_{\mu}\) is always given by a direct sum \[\boldsymbol{l}_{\mu}=\mathfrak{l}_{\mu}\oplus\mathbb{C}\mu\] (2.22) where \(\mathbb{C}\mu\) is associated to the cutted node. This can be visualized in the example of the bosonic \(sl(p)\), where the cutting of the last node \(\alpha_{p-1}\) from \(\mathfrak{D}[sl(p)]\) having \((p-1)\) nodes leads to two pieces: \((i)\) the \(\mathfrak{D}[sl(p-1)]\) of the Lie algebra \(sl(p-1)\) having \(p-2\) nodes thought of as \(\boldsymbol{l}_{\mu_{p-1}}\), and \((ii)\) an isolated node \(\{\alpha_{p-1}\}\) corresponding to \(\mathbb{C}\mu_{p-1}\) due to \(<\mu_{p-1},\alpha_{p-1}>=1\). This \(\mathbb{C}\mu_{p-1}\) is given by the abelian \(sl\left(1,\mathbb{C}\right)\) in the resulting Levi decomposition reading as \(sl(p)=\mathbf{n}_{+}+\boldsymbol{l}_{\mu_{p-1}}+\mathbf{n}_{-}\) with \(\boldsymbol{l}_{\mu_{p-1}}=sl(p-1)\oplus sl\left(1\right)\) and \(\mathbf{n}_{\pm}=p-1\). * The remaining elements of the decomposition (2.20) (i.e: elements in \(\mathbf{N}=\mathbf{g}_{\text{\tiny{BF}}}\backslash\boldsymbol{l}_{\mu}\)) are nilpotent; they are given by \(\boldsymbol{l}_{\mu}\)-modules that carry charges \(\pm 1\) with respect to the \(\mu\). \[\left[\mu,\mathbf{N}_{\pm}\right]=\pm\mathbf{N}_{\pm}\] (2.23) The two graded subspaces \(\mathbf{N}_{\pm}\) mutually supercommute \[\left[\mathbf{N}_{+},\mathbf{N}_{+}\right\}=0\qquad,\qquad\left[\mathbf{N}_{-},\mathbf{N}_{-}\right\}=0\] (2.24) They moreover verify the generalized Levi-like constraint \[\left[\mathbf{N}_{+},\mathbf{N}_{-}\right\}\subset\boldsymbol{l}_{\mu}\] (2.25) **B)**_Branching of representations of_ \(\mathbf{g}_{\text{\tiny{BF}}}\) Under the decomposition (2.20), a representation \(\mathbf{R}\) of the \(\mathbf{g}_{\text{\tiny{BF}}}\) splits into a direct sum of irreducible representations \(\mathfrak{R}_{q_{i}}\) of the superalgebras in \(\boldsymbol{l}_{\mu}=\mathfrak{l}_{\mu}+\mathbb{C}\mu\). These subspaces \(\mathfrak{R}_{q_{i}}\) carry charges \(q_{i}\) with respect to \(\mu\), that can be identified in the bosonic case from known branching rules. In general, we write \[\mathbf{R}=\sum_{i}\mathfrak{R}_{q_{i}}\qquad,\qquad\left[\mu,\mathfrak{R}_{q _{i}}\right]=q_{i}\mathfrak{R}_{q_{i}} \tag{2.26}\] The adjoint action of the coweight \(\mu\) on the representation \(\mathbf{R}\) can be therefore defined as follows \[\mu=\sum_{i}q_{i}\Pi_{i}\qquad,\qquad\sum_{i}\Pi_{i}=I_{id} \tag{2.27}\] where \(\Pi_{i}\) is the projector on the subspace \(\mathfrak{R}_{q_{i}}\) and \(q_{i}\) is often termed as the Levi-charge. In the superalgebra case, the charges \(q_{i}\) obey in addition to \[q_{i}-q_{i+1}=\pm 1 \tag{2.28}\] the super-traceless condition, \[str\left(\mu\right)=\sum_{i}q_{i}str\left(\Pi_{i}\right)=0 \tag{2.29}\] These constraints allow us to compute these charges in the absence of branching rules for superalgebras in the literature. **C)**_Super Lax operator_\(\mathcal{L}_{\mathbf{R}}^{\mu}\) The super L-operator describing the coupling of the representation \(\mathbf{R}\) and the magnetic coweight \(\mu\) acting on the superalgebra \(\mathbf{g}_{\textsc{hf}}\) as (2.20), is equal to \[\mathcal{L}_{\mathbf{R}}^{\mu}=e^{X}z^{\mu}e^{Y} \tag{2.30}\] Later on, it will be simply labeled as \(\mathcal{L}^{\mu}\) since we will take \(\mathbf{R}\) as the fundamental representation for every symmetry type. The \(z^{\mu}\) follows from the action of \(\mu\) on \(\mathbf{R}\) as given by (2.27). The \(X\) and \(Y\) are elements of \(\mathbf{N}_{+}\) and \(\mathbf{N}_{-}\) expanding as \[X=\sum_{i=1}^{\dim\mathbf{N}_{+}}b^{i}X_{i}\qquad,\qquad Y=\sum_{i=1}^{\dim \mathbf{N}_{-}}c_{i}Y^{i} \tag{2.31}\] with graded generators \(X_{i}\) and \(Y^{i}\) corresponding to graded root generators of \(\mathbf{g}_{\textsc{hf}}\) that are not contained in \(\boldsymbol{l}_{\mu}=\mathfrak{l}_{\mu}\oplus\mathbb{C}\mu\). They can be realized using the following property; for a cutted node corresponding to a graded simple root \(\beta\), the root system \(\Phi_{\mathbf{g}_{\textsc{hf}}}\) splits as \[\Phi_{\mathbf{g}_{\textsc{hf}}}=\{\pm\beta\}\cup\Phi_{\boldsymbol{l}_{\mu}} \cup\Phi_{\mathbf{N}_{\pm}}\qquad,\qquad\Phi^{\prime}_{\mathbf{g}_{\textsc{hf} }}=\Phi_{\mathfrak{l}_{\mu}}\cup\Phi_{\mathbf{N}_{\pm}} \tag{2.32}\] where \(\Phi_{\mathbf{N}_{\pm}}\) contains graded roots in \(\Phi^{\prime}_{\mathbf{g}_{\textsc{hf}}}\) that depend on \(\beta\), i.e. \[\Phi_{\mathbf{N}_{\pm}} = \left\{\pm\alpha_{\textsc{hf}}\in\Phi^{\prime}_{\mathbf{g}_{ \textsc{hf}}},\quad\frac{\partial\alpha_{\textsc{hf}}}{\partial\beta}\neq 0\right\} \tag{2.33}\] \[\Phi_{\mathfrak{l}_{\mu}} = \left\{\pm\alpha_{\textsc{hf}}\in\Phi^{\prime}_{\mathbf{g}_{ \textsc{hf}}},\quad\frac{\partial\alpha_{\textsc{hf}}}{\partial\beta}=0\right\} \tag{2.34}\] The sign of roots in each nilpotent is defined by the condition (2.23), such that \(\mu\) acts on roots of \(\mathbf{N}_{\pm}\) with \(\pm 1\). The \((b^{i},c_{i})\) are graded Darboux coordinates of the phase space of the L-operator. They are given by bosonic and fermionic oscillators with Poisson bracket as \[\left\{b^{i},c_{j}\right\}_{PB}=\delta^{i}_{j} \tag{2.35}\] which lifts to the super-commutator in the quantum level \[\left[b^{i},c_{j}\right\}=\delta^{i}_{j} \tag{2.36}\] These steps will be used below to complete missing results in literature concerning integrable superspin chains with underlying symmetries given by the Lie superalgebras like \(B_{\textsc{hf}}\), \(C_{\textsc{hf}}\) and \(D_{\textsc{hf}}\). But before that, we begin by testing this generalized formula by computing the super L-operators for the \(sl(m|n)\) superspin chain and comparing with equivalent super matrices in the literature. Super L-operators for all \(sl(m|n)\) superspin chains In this section, we apply the formula (2.30) introduced in the previous section in order to build super L-operators for the \(sl(m|n)\) superspin chain. Thanks to the richness of this A-type supersymmetry, we will be able to generate all families of solutions \({\cal L}^{\mu}\) labeled by magnetic charges \(\mu\) of \(SL(m|n).\) These coweights are in one to one with different nodes of different Dynkin diagrams \({\mathfrak{D}}[sl(m|n)]^{(\kappa)}\) of \(sl(m|n)\) labeled by positive integers \(\kappa.\)In these regards, recall that contrary to bosonic Lie algebras \({\bf g}_{\mbox{\tiny BOSE}},\) a superalgebra \({\bf g}_{\mbox{\tiny BF}}\) has several Dynkin diagrams \({\mathfrak{D}}[{\bf g}_{\mbox{\tiny BF}}]^{(\kappa)}.\) This is due to the existence of two kinds of fundamental unit weight vectors : bosonic unit weights \(\varepsilon_{i}\) with metric \(\langle\varepsilon_{i},\varepsilon_{j}\rangle=\delta_{ij},\) and fermionic \(\delta_{\mbox{\tiny A}}\)s with \(\langle\delta_{\mbox{\tiny A}},\delta_{\mbox{\tiny B}}\rangle=-\delta_{\mbox {\tiny AB}}.\) Hence, the graded simple roots \(\alpha_{i}\) have special properties depending on their realisations, which for \(sl(m|n)\) may be \((i)\) fermionic of the form \[\alpha_{l}=\delta_{l}-\varepsilon_{l+1}\qquad,\qquad\alpha_{l}^{2}=0 \tag{3.1}\] or \((ii)\) bosonic having two possible forms like \[\begin{array}{rclrclrcl}\alpha_{i}&=&\delta_{i}-\delta_{i+1}&&,&&\alpha^{2} &=&-2\\ \alpha_{a}&=&\varepsilon_{a}-\varepsilon_{a+1}&&,&&\alpha^{2}&=&+2\end{array} \tag{3.2}\] Recall also that, as for bosonic \({\bf g}_{\mbox{\tiny BOSE}},\) the set of the simple roots generate the graded root system \(\Phi_{{\bf g}_{\mbox{\tiny BF}}}\equiv\{\pm\alpha_{\mbox{\tiny BF}}\};\) and because of the three possibilities, we distinguish different types of root systems for \({\bf g}_{\mbox{\tiny BF}}\) labeled by \(\kappa\) and denoted like \(\Phi_{{\bf g}_{\mbox{\tiny BF}}}^{(\kappa)}\). Generally, a superalgebra \({\bf g}_{\mbox{\tiny BF}}\) has several Dynkin diagrams \({\mathfrak{D}}[{\bf g}_{\mbox{\tiny BF}}]^{(\kappa)}\) \[{\mathfrak{D}}[{\bf g}_{\mbox{\tiny BF}}]^{(1)},\quad{\mathfrak{D}}[{\bf g}_{ \mbox{\tiny BF}}]^{(2)},\quad{\mathfrak{D}}[{\bf g}_{\mbox{\tiny BF}}]^{(3)}, \quad....,\quad{\mathfrak{D}}[{\bf g}_{\mbox{\tiny BF}}]^{(n_{\mbox{\tiny BF}})} \tag{3.3}\] As an illustration, we give in **Figure 3** examples of super Dynkin diagrams \({\mathfrak{D}}[sl_{(3|4)}]^{(\kappa)}\) concerning the \(sl(3|4)\) superalgebra. In this Figure, the fermionic simple roots are represented by green nodes, the bosonic simple roots with \(\alpha^{2}=-2\) are represented in blue, and the bosonic simple roots with \(\alpha^{2}=2\) in red. Recall that for this Lie superalgebra, we actually have \[\frac{7!}{4!\times 3!}=35 \tag{3.4}\] possible super Dynkin diagrams \({\mathfrak{D}}[sl_{(3|4)}]^{(1)},...,{\mathfrak{D}}[sl_{(3|4)}]^{(35)}\) depending on the ordering of the fundamental unit weights \(\delta_{i},\varepsilon_{a}\). The distinguished super Dynkin diagram having only one fermionic node is drawn for a general Lie superalgebra \(sl(m|n)\) in **Figure 4**; the weight basis associated to such diagram is also called distinguished. This multiplicity of Dynkin diagrams for a superalgebra results in different possible varieties of superspin chains for each symmetry \(g_{\mbox{\tiny BF}}.\) For the \(sl(m|n)\) symmetry, we will consider all the possible superspin chain systems and build the general super Lax operator \({\cal L}^{\mu}_{sl(m|n)}\) associated to a generic node of one of the \((m+n)!/(m!n!)\) super Dynkin diagrams \({\mathfrak{D}}[sl_{(m|n)}]^{(\kappa)}\). Notice that these general solutions include those derived in [46] using the same approach but only for the distinguished Dynkin diagram having one fermionic node given by \(\alpha_{m}=\varepsilon_{m}-\delta_{1}\). In order to proceed for the calculation, we begin by recalling that the Lie superalgebra \(A(m-1|n-1)=sl(m|n)\) with \(n\neq m\) has rank \(m+n-1\) and \(\left(m+n\right)^{2}-1\) dimensions. It has two sectors: an even sector \[sl(m|n)_{\bar{0}}=sl(m)\oplus sl(n) \tag{3.5}\] describing bosons; and an odd sector \(sl(m|n)_{\bar{1}}\) given by the module \(m|\bar{n}\oplus\bar{m}|n\) describing fermions; for short this odd sector will be denoted below as \(2mn\). The general 3-grading for the \(sl(m|n)\) superalgebra is written as \[\begin{array}{ccc}sl(m|n)&\rightarrow&l_{\mu}\oplus sl(1)\oplus{\bf N}_{+} \oplus{\bf N}_{-}\\ \boldsymbol{l}_{\mu}&=&sl(k|l)\oplus sl(m-k|n-l)\end{array} \tag{3.6}\] Figure 4: Distinguished Dynkin diagrams for the \(sl(m|n)\) superalgebra. The green node represents the only fermionic node, blue and red nodes are bosonic. Figure 3: Graded Dynkin diagrams of the \(sl(3|4)\) superalgebra. The green nodes represent fermionic simple roots, blue nodes represent bosonic roots with \(\alpha^{2}=-2\) and red nodes represent bosonic roots with \(\alpha^{2}=2\). with \(0\leq k\leq m\), \(0\leq l\leq n\) and \[\dim\mathbf{N}_{+}=\dim\mathbf{N}_{-}=\left(k+l\right)\left(m-k+n-l\right) \tag{3.7}\] This grading can correspond on the graphical level to the cutting of any of the m+n-1 nodes (bosonic or fermionic) of an arbitrary super Dynkin diagram of the \((m+n)!/(m!n!)\) possible diagrams \(\mathfrak{D}[sl_{(m|n)}]^{(\kappa)}.\) This is a special property of the linear symmetry where all nodes act like minuscule coweights. Following this decomposition, the representations of \(sl(m|n)\) also get partitioned; in what concerns us, the fundamental representation \(\mathbf{m}|\mathbf{n}\) of \(sl(m|n)\) decomposes into irreps of \(sl(k|l)\) and \(sl(m-k|n-l)\) as follows \[\mathbf{m}|\mathbf{n}\rightarrow\mathbf{k}|\mathbf{l}_{1+\frac{l-k}{m-n}} \oplus\left(\mathbf{m}-\mathbf{k}\right)|\left(\mathbf{n}-\mathbf{l}\right)_{ \frac{l-k}{m-n}} \tag{3.8}\] Eq(3.8) can be further decomposed into four blocks containing (1) the fundamental \(\underline{\mathbf{k}}\) of \(sl(k)\), (2) the fundamental \(\underline{\mathbf{l}}\) of \(sl(l)\), (3) the \(\underline{\mathbf{m}-\mathbf{k}}\) of \(sl(m-k)\); and (4) the \(\underline{\mathbf{n}-\mathbf{l}}\) of \(sl(n-l)\). These are respectively represented by the basis states \(\left|a\right\rangle,\)\(\left|i\right\rangle,\)\(\left|\alpha\right\rangle,\)\(\left|\lambda\right\rangle\) where \[1\leq a\leq k,\qquad k+1\leq i\leq m,\qquad m+1\leq\alpha\leq m+l,\qquad m+l+1 \leq\lambda\leq m+n \tag{3.9}\] We can now write the adjoint action of the coweight in terms of the four corresponding projectors as \[\mu=\left(1+\frac{l-k}{m-n}\right)\left(\Pi_{\mathbf{k}}+\Pi_{\mathbf{l}} \right)+\left(\frac{l-k}{m-n}\right)\left(\Pi_{\mathbf{m}-\mathbf{k}}+\Pi_{ \mathbf{n}-\mathbf{l}}\right) \tag{3.10}\] with vanishing super trace \[str\left(\mu\right)=\left(\frac{m-n+l-k}{m-n}\right)\left(k-l\right)+\frac{l-k }{m-n}\left(m-n+l-k\right)=0 \tag{3.11}\] The nilpotent operators \(X\) and \(Y\) belonging to \(N_{+}\) and \(N_{-}\) (3.7) are realized by \[X = \tag{3.12}\] \[Y = \tag{3.13}\] where summation on repeated indices is omitted. The \(\left(b^{ai},c_{ia}\right)\) and \(\left(b^{\alpha\lambda},c_{\lambda\alpha}\right)\) are couples of bosonic harmonic oscillators while \(\left(\beta^{a\lambda},\gamma_{\lambda a}\right)\) and \(\left(\beta^{\alpha i},\gamma_{i\alpha}\right)\) form fermioinc oscillators. The L-operator is computed using the nilpotency properties \(X^{2}=0\), \(Y^{2}=0\) as well as \[\begin{array}{lcl}X\Pi_{\mathbf{k}}=X\Pi_{\mathbf{l}}=0&,&\Pi_{\mathbf{k}}Y= \Pi_{\mathbf{l}}Y=0\\ X\Pi_{\mathbf{m}-\mathbf{k}}=X\Pi_{\mathbf{n}-\mathbf{l}}=X&,&\Pi_{\mathbf{m} -\mathbf{k}}Y=\Pi_{\mathbf{n}-\mathbf{l}}Y=Y\end{array} \tag{3.14}\] It expands as \[\mathcal{L}^{\mu} = z^{1+\frac{l-k}{m-n}}\Pi_{\mathbf{k}}+z^{1+\frac{l-k}{m-n}}\Pi_{ \mathbf{l}}+z^{\frac{l-k}{m-n}}\Pi_{\mathbf{m-k}}+z^{\frac{l-k}{m-n}}\Pi_{ \mathbf{n-l}}\] \[+X\left(z^{\frac{l-k}{m-n}}\Pi_{\mathbf{m-k}}+z^{\frac{l-k}{m-n}} \Pi_{\mathbf{n-l}}\right)+\left(z^{\frac{l-k}{m-n}}\Pi_{\mathbf{m-k}}+z^{\frac {l-k}{m-n}}\Pi_{\mathbf{n-l}}\right)Y\] \[+X\left(z^{\frac{l-k}{m-n}}\Pi_{\mathbf{m-k}}+z^{\frac{l-k}{m-n}} \Pi_{\mathbf{n-l}}\right)Y\] yielding \[\mathcal{L}^{\mu}_{sl_{m|n}}=z^{h}\left(\begin{array}{ccc}z\delta^{a}_{b}+ \left(b^{ai}c_{ib}+\beta^{a\lambda}\gamma_{\lambda b}\right)&\left(b^{ai} \gamma_{i\alpha}+\beta^{a\lambda}c_{\lambda a}\right)&b^{aj}\delta_{ji}&\beta ^{a\rho}\delta_{\rho\lambda}\\ \left(\beta^{\alpha i}c_{ib}+b^{\alpha\lambda}\gamma_{\lambda b}\right)&z \delta^{\alpha}_{\eta}+\left(\beta^{\alpha i}\gamma_{i\eta}+b^{\alpha\lambda}c _{\lambda\eta}\right)&\beta^{\alpha j}\delta_{ji}&b^{\rho\alpha}\delta_{\rho \lambda}\\ c_{ia}&\gamma_{i\alpha}&\delta^{j}_{i}&0\\ \gamma_{\lambda a}&c_{\lambda\alpha}&0&\delta^{\rho}_{\lambda}\end{array}\right) \tag{3.16}\] where we have set \(h=\frac{l-k}{m-n}\). This matrix is in agreement with the general solution obtained in the superspin chain literature; see eq(2.20) in [52]. The special families of solutions corresponding to the nodes of the distinguished Dynkin diagram are calculated in details in [46] where the particular Lax matrix with purely fermionic oscillators is obtained for the only fermionic node of the distinguished diagram. ## 4 Super L-operators of \(B(m|n)\) type In this section, we study the family of orthosymplectic integrable superspin chains with internal symmetry given by the Lie superalgebra series \[B(m|n)=osp(2m+1|2n),\qquad m,n>0 \tag{4.1}\] We focus on the family of distinguished superspin B-chain associated to the Distinguished Dynkin diagram, and calculate the corresponding Lax operator \(\mathcal{L}^{\mu}_{B_{m|n}}\) by using the 3-grading of the orthosymplectic \(B(m|n)\) in the formula (2.30). To begin, recall that the \(B(m|n)\) superalgebra is a \(\mathbb{Z}_{2}\)- graded Lie algebra of rank \(r\left(B_{m|n}\right)=m+n\) and dimension \(\dim B_{m|n}=2(m+n)^{2}+m+3n\); it splits like \(B(m|n)_{\bar{0}}\oplus B(m|n)_{\bar{1}}\) with even part as \[\begin{array}{rcl}B(m|n)_{\bar{0}}&=&B_{m}\oplus C_{n}\\ &\simeq&so(2m+1)\oplus sp\left(2n\right)\end{array} \tag{4.2}\] and odd part \(B(m|n)_{\bar{1}}\) generated by the bi-fundamental representation \((2m+1,2n)\) of \(so(2m+1)\oplus sp\left(2n\right).\) The root system \(\Phi_{B_{m|n}}\) of the Lie superalgebra \(B(m|n)\) has \(2(m+n)^{2}+2n\) elements; it does also split into an even part \(\Phi_{\bar{0}}\) and an odd part \(\Phi_{\bar{1}}\). By using the unit bosonic weight vectors \(\left\{\varepsilon_{a}\right\}_{1\leq a\leq m}\) and the fermionic \(\left\{\delta_{\mbox{\tiny A}}\right\}_{1\leq\lambda\leq n},\) the content of \(\Phi_{\bar{0}}\) reads as \[\begin{array}{rcl}\Phi_{\bar{0}}&:&\pm\left(\varepsilon_{a}\pm\varepsilon_{b }\right)&,&\pm\varepsilon_{a}&,&a\neq b=1,...,m\\ &&\pm\left(\delta_{\mbox{\tiny A}}\pm\delta_{\mbox{\tiny B}}\right)&,&\pm 2 \delta_{\mbox{\tiny A}}&,&\mbox{\tiny A}\neq\mbox{\tiny B}=1,...,n\end{array} \tag{4.3}\] with cardinal \(|\Phi_{\bar{0}}|=2m^{2}+2n^{2}\); and the roots of \(\Phi_{\bar{1}}\) read like \[\Phi_{\bar{1}}:\pm\delta_{{}_{\rm A}},\qquad\pm\left(\varepsilon_{a}\pm\delta_{{ }_{\rm A}}\right) \tag{4.4}\] with \(|\Phi_{\bar{1}}|=2n+2mn\). Given the set \(\Phi_{B_{m|n}}\), a remarkable simple root basis generating it is given by the distinguished basis \((\beta_{{}_{\rm A}},\gamma,\)\(\alpha_{{}_{\rm A}})\) having one fermionic root \(\gamma=\delta_{n}-\varepsilon_{1}\) with length \(\gamma^{2}=0\) ; and \(m+n-1\) bosonic ones as \[\beta_{{}_{\rm A}}=\delta_{{}_{\rm A}}-\delta_{{}_{\rm A+1}}\qquad,\qquad \alpha_{{}_{\rm A}}=\varepsilon_{a}-\varepsilon_{a+1}\qquad,\qquad\alpha_{m}= \varepsilon_{m} \tag{4.5}\] with \(\beta_{{}_{\rm A}}^{2}=-2\) and \(\alpha_{a}^{2}=2\). The distinguished basis is characterised by the following ordering of the fundamental unit weight vectors \[\delta_{1},\quad\delta_{2},\quad...\quad\delta_{n-1},\quad\delta_{n};\quad \varepsilon_{1},\quad\varepsilon_{2},\quad...\quad\varepsilon_{m-1},\quad \varepsilon_{m} \tag{4.6}\] for which the super Cartan matrix \(\tilde{\alpha}_{i}.\tilde{\alpha}_{j}\) has the entries \[\tilde{\alpha}_{i}.\tilde{\alpha}_{j}=\left(\begin{array}{ccccccccc}-2&1& \cdots&&&&\\ 1&-2&1&&&&\\ &\ddots&\ddots&&&&\\ &&1&-2&1&&&\\ &&&1&0&-1&&\\ &&\cdots&-1&2&-1&\\ &&&&\ddots&\ddots&\ddots&\\ &&&&-1&2&-1\\ &&&&-1&1\end{array}\right) \tag{4.7}\] with only one zero on the diagonal. The distinguished Dynkin diagram corresponding to this matrix is given by **Figure 5**. Notice that due to the \(Z_{2}\)- grading, we can actually distinguish \[N^{{\rm B}_{\rm B}}_{n,m}=\frac{(n+m-1)!}{n!\times(m-1)!} \tag{4.8}\] types of super Dynkin diagrams \(\mathfrak{D}[B_{m|n}]\) having \(m+n\) nodes represented by different forms of graded simple roots \(\left\{\tilde{\alpha}_{i}\right\}_{1\leq i\leq m+n}.\) Figure 5: Distinguished Dynkin diagram of the \(B(m|n)\) superalgebra having one fermionic simple root in Green color. In what follows, we will consider the distinguished basis and construct the Lax operator for the corresponding super \(B\)- chain. In this regard, recall that by distinguished orthosymplectic super chain of \(B\)- type, we mean the two following: \(\mathbf{(1)}\) an integrable superspin \(B_{m|n}\) chain made of "super atoms" arranged along a straight line; and realised in the CS theory (2) in terms of a set of parallel super Wilson lines traversed by a horizontal 't Hooft line. Such a realisation by topological defects looks like the one investigated in [46] for the case of \(sl(m|n)\) superspin chain; the main difference is that here the super atoms are B-type instead of A-type. \(\mathbf{(2)}\) The vertical super Wilson lines are run by graded quantum states \((\delta_{\lambda}|\varepsilon_{a})\) ordered as in eq(4.6) and interpreted in terms of the distinguished Dynkin diagram \(\mathfrak{D}\mathfrak{D}\mathfrak{D}_{B_{m|n}}\) given by **Figure 5**. So, given this orthosymplectic superspin chain configuration and the associated super Dynkin diagram characterising each super Wilson line (a super-atom), we can calculate the distinguished super L-operator \(\mathcal{L}_{B_{n|m}}^{\mu_{1}}\) following from the 3-grading [56] \[B(m|n)\to A_{1}\oplus B(m-1|n)\oplus\mathbf{N}_{+}\oplus\mathbf{N}_{-} \tag{4.9}\] or equivalently \[osp(2m+1|2n)\to so(2)\oplus osp(2m-1|2n)\oplus\mathbf{N}_{+}\oplus\mathbf{N} _{-} \tag{4.10}\] with nilpotents \[\mathbf{N}_{\pm}=2m+2n-1 \tag{4.11}\] The fundamental representation decomposes in this case as \[\mathbf{2m}+\mathbf{1}|\mathbf{2n}\rightarrow\mathbf{2}\oplus(\mathbf{2m}- \mathbf{1}|\mathbf{2n}) \tag{4.12}\] We further use the reducibility \(\mathbf{2}_{0}=\mathbf{1}_{+}\oplus\mathbf{1}_{-}\) to reveal the Levi-like charges under the \(SO(2)\) of the cutted node. Thus, we can rewrite the above decomposition as follows \[\left(\mathbf{2m}+\mathbf{1}|\mathbf{2n}\right)_{0}\rightarrow\mathbf{1}_{+} \oplus(\mathbf{2m}-\mathbf{1}|\mathbf{2n})_{0}\oplus\mathbf{1}_{-} \tag{4.13}\] Now, in order to realize the components of the super L-operator, we work in the graded basis \[\left\{\begin{array}{c}|+\rangle\\ |i\rangle\\ |-\rangle\\ |\alpha\rangle\end{array}\right\} \tag{4.14}\] where \(|\pm\rangle\) refer to the two singlets \(\mathbf{1}_{\pm}\), the states \(|i\rangle\) with \(1\leq i\leq 2m-1\) correspond to the \(\mathbf{2m}-\mathbf{1}\) and the fermionic \(|\alpha\rangle\) with \(1\leq\alpha\leq 2n\) to the symplectic vector \(\mathbf{2n}\). In this basis, the coweight \(\mu_{n+1}\) associated to the decomposition (4.10) is written as \[\mu_{n+1}=\varrho_{+}+q_{1}\Pi_{1}-\varrho_{-}+q_{2}\Pi_{2} \tag{4.15}\] where \(q_{1}=q_{2}=0\) and the projectors on the subspaces of the fundamental representation are defined by \[\varrho_{\pm}=\left|\pm\right\rangle\left\langle\pm\right|,\qquad\Pi_{1}=\sum_{i =1}^{2m+1}\left|i\right\rangle\left\langle i\right|,\qquad\Pi_{2}=\sum_{\alpha= 1}^{2n}\left|\alpha\right\rangle\left\langle\alpha\right| \tag{4.16}\] The \(2(2m+2n-1)\) elements of the \(N_{\pm}\) (eq.4.11) are realized here as \[X=b^{i}X_{i}+\beta^{\alpha}X_{\alpha}\in N_{+}\qquad,\qquad Y=c_{i}Y^{i}+ \gamma_{\alpha}Y^{\alpha}\in N_{-} \tag{4.17}\] where \((b^{i},c_{i})\) are bosonic oscillators and \((\beta^{\alpha},\gamma_{\alpha})\) are fermionic ones; the generators \[X_{i} = \tag{4.18}\] \[Y^{i} =\] verify the Levi-like constraints (2.23) \([\mu\),\(X]=X\), \([\mu\),\(Y]=-Y\) and \([X_{i}\),\(Y^{i}]=[\mathcal{X}_{\alpha}\),\(\mathcal{Y}^{\alpha}]=\mu\). To substitute these realizations into the L-operator formula, we calculate \[X^{2} = -\mathbf{b}^{2}\left|+\right\rangle\left\langle-\right|-\boldsymbol {\beta}^{2}\left|+\right\rangle\left\langle-\right| \tag{4.19}\] \[Y^{2} = -\mathbf{c}^{2}\left|-\right\rangle\left\langle+\right|- \boldsymbol{\gamma}^{2}\left|-\right\rangle\left\langle+\right|\] where we set \[\mathbf{b}^{2} = b^{i}\delta_{ij}b^{j} \tag{4.20}\] \[\mathbf{c}^{2} = c_{i}\delta^{ij}c_{j}\] and \[\boldsymbol{\beta}^{2} = \beta^{\alpha}\delta_{\alpha\beta}\beta^{\beta} \tag{4.21}\] \[\boldsymbol{\gamma}^{2} = \gamma_{\alpha}\delta^{\alpha\beta}\gamma_{\beta}\] The nilpotency properties \(X^{3}=0\) and \(Y^{3}=0\) yield \(e^{X}=1+X+\frac{1}{2}X^{2}\) and \(e^{Y}=1+Y+\frac{1}{2}Y^{2}\). By substituting (4.15) into \(z^{\mu_{n+1}}\), we also have \[z^{\mu_{n+1}}=z\varrho_{+}+\Pi_{1}+z^{-1}\varrho_{-}+\Pi_{2} \tag{4.22}\] So, the expression of the super L-operator reads as follows \[\mathcal{L}^{\mu_{n+1}}=\left(1+X+\frac{1}{2}X^{2}\right)z^{\mu_{n+1}}\left(1 +Y+\frac{1}{2}Y^{2}\right) \tag{4.23}\] and expands like \[\begin{array}{lll}\mathcal{L}^{\mu_{n+1}}&=&z\varrho_{+}+\Pi_{1}+z^{-1} \varrho_{-}+\Pi_{2}+X\left(\Pi_{1}+z^{-1}\varrho_{-}+\Pi_{2}\right)+\\ &&\left(\Pi_{1}+z^{-1}\varrho_{-}+\Pi_{2}\right)Y+X\left(\Pi_{1}+z^{-1}\varrho _{-}+\Pi_{2}\right)Y\\ &&+\frac{1}{2}X^{2}z^{-1}\varrho_{-}+\frac{1}{2}z^{-1}\varrho_{-}Y^{2}+\frac{1 }{2}X^{2}z^{-1}\varrho_{-}Y\\ &&+\frac{1}{2}Xz^{-1}\varrho_{-}Y^{2}+\frac{1}{4}X^{2}z^{-1}\varrho_{-}Y^{2} \end{array} \tag{4.24}\] where we have used the properties \[\begin{array}{llll}X\varrho_{+}&=\varrho_{+}Y&=0\\ X^{2}\varrho_{+}&=X^{2}\Pi_{1}&=0\\ \varrho_{+}Y^{2}&=\Pi_{1}Y^{2}&=0\\ X^{2}\Pi_{2}&=\Pi_{2}Y^{2}&=0\end{array} \tag{4.25}\] In the projector basis introduced before, the super L-operator can be written in matrix language as \[\mathcal{L}^{\mu_{n+1}}_{B_{m|n}}=\left(\begin{array}{llll}\varrho_{+}L \varrho_{+}&\varrho_{+}L\Pi_{1}&\varrho_{+}L\varrho_{-}&\varrho_{+}L\Pi_{2} \\ \Pi_{1}L\varrho_{+}&\Pi_{1}L\Pi_{1}&\Pi_{1}L\varrho_{-}&\Pi_{1}L\Pi_{2}\\ \varrho_{-}L\varrho_{+}&\varrho_{-}L\Pi_{1}&\varrho_{-}L\varrho_{-}&\varrho_{ -}L\Pi_{2}\\ \Pi_{2}L\varrho_{+}&\Pi_{2}L\Pi_{1}&\Pi_{2}L\varrho_{-}&\Pi_{2}L\Pi_{2}\\ \end{array}\right) \tag{4.26}\] where the various blocks are given in terms of oscillators of the 't Hooft line phase space by \[\begin{array}{llll}\varrho_{+}L\varrho_{+}&=&z+(b^{i}\delta_{i}^{j}c_{j}+ \beta^{\alpha}\delta_{\alpha}^{\beta}\gamma_{\beta})+\frac{1}{4}z^{-1}\left( \mathbf{b}^{2}+\beta^{2}\right)\left(\mathbf{c}^{2}+\gamma^{2}\right)\\ \varrho_{+}L\Pi_{1}&=&b^{i}+\frac{1}{2}z^{-1}\left(\mathbf{b}^{2}+\beta^{2} \right)c_{i}\\ \varrho_{+}L\varrho_{-}&=&-\frac{1}{2}z^{-1}\left(\mathbf{b}^{2}+\beta^{2} \right)\\ \varrho_{+}L\Pi_{2}&=&b^{\alpha}+\frac{1}{2}z^{-1}\left(\mathbf{b}^{2}+\beta^{ 2}\right)\gamma_{\alpha}\end{array} \tag{4.27}\] and \[\begin{array}{llll}\Pi_{1}L\varrho_{+}&=&c_{i}+\frac{1}{2}z^{-1}b^{i}\left( \mathbf{c}^{2}+\gamma^{2}\right)\\ \Pi_{1}L\Pi_{1}&=&\delta_{j}^{i}+z^{-1}b^{i}c_{j}\\ \Pi_{1}L\varrho_{-}&=&-z^{-1}b^{i}\\ \Pi_{1}L\Pi_{2}&=&z^{-1}b^{i}\gamma_{\beta}\end{array} \tag{4.28}\] and \[\begin{array}{llll}\varrho_{-}L\varrho_{+}&=&-\frac{1}{2}z^{-1}\left( \mathbf{c}^{2}+\gamma^{2}\right)\\ \varrho_{-}L\Pi_{1}&=&-z^{-1}c_{i}\\ \varrho_{-}L\varrho_{-}&=&z^{-1}\\ \varrho_{-}L\Pi_{2}&=&-z^{-1}\gamma_{\alpha}\end{array} \tag{4.29}\] as well as \[\begin{array}{llll}\Pi_{2}L\varrho_{+}&=&\gamma_{\alpha}+\frac{1}{2}z^{-1} \beta^{\alpha}\left(\mathbf{c}^{2}+\gamma^{2}\right)\\ \Pi_{2}L\Pi_{1}&=&z^{-1}\beta^{\alpha}c_{j}\\ \Pi_{2}L\varrho_{-}&=&-z^{-1}\beta^{\alpha}\\ \Pi_{2}L\Pi_{2}&=&\delta_{\alpha}^{\beta}+\beta^{\alpha}\delta_{\alpha}^{ \beta}\gamma_{\beta}\end{array} \tag{4.30}\] ## 5 Super L-operators of \(C(n)\) type Now, by considering the 4D Chern-Simons gauge theory with \(OSP(2|2n-2)\) symmetry and super Wilson and 't Hooft line defects implemented, we can build the corresponding \(osp(2|2n-2)\) superspin chain and compute the RLL solutions in a similar way as before. The superalgebra \(D(1|n-1)=osp(2|2n-2)\) with \(n>1\) is labelled as \(C(n)\) due to its even part equal to \[u(1)\oplus C_{n-1}=so(2)\oplus sp(2n-2) \tag{5.1}\] its odd part is given by \((2n-2)\oplus(2n-2).\) It has rank \(n\) and its dimension is equal to \(2n^{2}+n-2.\) The super Cartan matrix reads for the distinguished basis as follows \[\tilde{\alpha}_{i}.\tilde{\alpha}_{j}=\left(\begin{array}{cccc}0&-1&&&\\ -1&2&-1&&\\ &\ddots&\ddots&\ddots&\\ &&-1&2&-2\\ &&&-2&4\end{array}\right) \tag{5.2}\] The associated distinguished super Dynkin diagram is depicted in **Figure 6**. For this supersymmetry, we have two possible 3-gradings of the form (2.20); we can therefore construct two super Lax matrices solving the RLL equation for the distinguished superspin chain of C-type. In fact, the first one is associated to the only fermionic node \(\alpha_{1}=\delta-\varepsilon_{1}\) and the second one to the bosonic node \(\alpha_{n}=2\varepsilon_{n-1}.\) We will begin by working out the first L-operator \({\cal L}_{C_{n}}^{\mu_{1}}\) that we will label as fermionic since it only contains fermionic oscillators. ### The super L-operator \({\cal L}_{C(n)}^{\mu_{1}}\) The first 3-grading for the Lie superalgebra \(C(n)\) is given by \[osp(2|2n-2)\quad\rightarrow\quad so(2)\oplus sp(2n-2)\oplus N_{+}\oplus N_{-} \tag{5.3}\] where the \(\boldsymbol{l}_{\mu_{1}}\) is identified with the bosonic subalgebra \(sp(2n-2)\) and \[N_{\pm}=2\left(n-1\right) \tag{5.4}\] The dimensions of the \(osp(2|2n-2)\) split like \[\begin{array}{ccc}2n^{2}+n-2&\rightarrow&1\ \oplus\ \left(n-1\right)\left[2(n-1)+1 \right]&\oplus\\ &&N_{+}\ \oplus\ \ N_{-}\end{array} \tag{5.5}\] Figure 6: Distinguished Dynkin diagram of the \(C(n)\) superalgebra, the only fermionic node is represented in green. This decomposition is actually obtained from the distinguished Dynkin diagram by the cutting of the fermionic node \(\mu_{1}\) dual to \(\alpha_{1}\). In what concerns us here, the fundamental representation is decomposed to representations of the \(so(2)\oplus sp(2n-2)\) as \[\left(\mathbf{2}|\mathbf{2n-2}\right)\quad\rightarrow\quad\mathbf{2}_{0}\oplus \left(\mathbf{2n-2}\right)_{0} \tag{5.6}\] Here as well, we use the homomorphism \(so(2)\simeq U\left(1\right)\) to split the representation \(\mathbf{2}_{0}\) into two singlets \(\mathbf{1}_{+}\oplus\mathbf{1}_{-}\) carrying opposite charges under the coweight, we work in the basis decomposed as \[\left|2n\right>\rightarrow\left|+\right>\oplus\left|i\right>\oplus\left|-\right> \tag{5.7}\] where the states \(\left|i\right>\) with \(1\leq i\leq 2n-2\) correspond to the subspace \(\mathbf{2n-2}\). In the same way as before, we write \[\mu_{1}=\varrho_{+}+q\Pi-\varrho_{-} \tag{5.8}\] where \(q=0\), \(\varrho_{\pm}=\left|\pm\right>\left<\pm\right|\) and \(\Pi=\sum_{i=1}^{2n-2}\left|i\right>\left<i\right|.\) The \(X\) and \(Y\) matrices are realized here as \[\begin{array}{ccccc}X&=&b^{i}X_{i}&\in&N_{+}\\ Y&=&c_{i}Y^{i}&\in&N_{-}\end{array} \tag{5.9}\] with generators \[\begin{array}{ccccc}X_{i}&=&\left|+\right>\left<i\right|-\left|i\right> \left<-\right|\\ Y^{i}&=&\left|i\right>\left<+\right|-\left|-\right>\left<i\right|\end{array} \tag{5.10}\] such that \[\left[\mu_{1},X_{i}\right]=X_{i},\qquad\left[\mu_{1},Y^{i}\right]=-Y^{i}, \qquad\left[X_{i},Y^{j}\right]=\delta_{i}^{j}\mu_{1} \tag{5.11}\] We calculate \[\begin{array}{ccccc}X^{2}&=&-\mathbf{b}^{2}\left|+\right>\left<-\right|&,&X ^{3}&=&0\\ Y^{2}&=&-\mathbf{c}^{2}\left|-\right>\left<+\right|&,&Y^{3}&=&0\end{array} \tag{5.12}\] where we have set \[\mathbf{b}^{2}=b^{i}\delta_{ij}b^{j}\qquad,\qquad\mathbf{c}^{2}=c_{i}\delta^ {ij}c_{j} \tag{5.13}\] Eventually, we have \[e^{X}=1+X+\frac{1}{2}X^{2}\qquad,\qquad e^{Y}=1+Y+\frac{1}{2}Y^{2} \tag{5.14}\] The L operator formula (2.30) along with properties \(X\varrho_{+}=\varrho_{+}Y=0\), \(X^{2}\varrho_{+}=X^{2}\Pi=\varrho_{+}Y^{2}=\Pi Y^{2}=0\) lead to \[\mathcal{L}^{\mu_{1}}=\left(1+X+\frac{1}{2}X^{2}\right)\left(z\varrho_{+}+\Pi +z^{-1}\varrho_{-}\right)\left(1+Y+\frac{1}{2}Y^{2}\right) \tag{5.15}\] expanding like \[\begin{array}{rcl}{\cal L}^{\mu_{1}}&=&z\varrho_{+}+\Pi+z^{-1}\varrho_{-}+X\left( \Pi+z^{-1}\varrho_{-}\right)+\\ &&\left(\Pi+z^{-1}\varrho_{-}\right)Y+X\left(\Pi+z^{-1}\varrho_{-}\right)Y+\\ &&\frac{1}{2}X^{2}z^{-1}\varrho_{-}+\frac{1}{2}z^{-1}\varrho_{-}Y^{2}+\frac{1 }{2}X^{2}z^{-1}\varrho_{-}Y+\\ &&\frac{1}{2}Xz^{-1}\varrho_{-}Y^{2}+\frac{1}{4}X^{2}z^{-1}\varrho_{-}Y^{2} \end{array} \tag{5.16}\] and yielding \[{\cal L}^{\mu_{1}}=\left(\begin{array}{ccc}z+b^{i}c_{i}+\frac{1}{4}z^{-1}{ \bf b}^{2}{\bf c}^{2}&b^{i}+\frac{1}{2}z^{-1}{\bf b}^{2}c_{i}&-\frac{1}{2}z^{- 1}{\bf b}^{2}\\ c_{i}+\frac{1}{2}z^{-1}{\bf c}^{2}b^{i}&\delta^{i}_{j}+z^{-1}b^{i}c_{i}&-z^{-1 }b^{i}\\ -\frac{1}{2}z^{-1}{\bf c}^{2}&-z^{-1}c_{i}&z^{-1}\end{array}\right) \tag{5.17}\] This L-operator carries only bosonic oscillator degrees of freedom, which is expected since the node cutted corresponds to the fermionic node of the distinguished diagram. ### The super L-operator \({\cal L}^{\mu_{n}}_{C(n)}\) The second possible 3-grading for the orthosymplectic \(C(n)\) is associated to the node \(2\delta_{n-1}\) dual to the coweight \(\mu_{n}\). It reads in Lie superalgebra language as \[osp(2|2n-2)\quad\rightarrow\quad A_{1}\oplus sl(1|n-1)\oplus N_{+}\oplus N_{-} \tag{5.18}\] with \[N_{\pm}=\frac{n(n-1)}{2}+(n-1) \tag{5.19}\] in agreement with the dimensions splitting \[2n^{2}+n-2\quad\rightarrow\quad 1+\left(n^{2}-1\right)+N_{+}+N_{-} \tag{5.20}\] In this case, the fundamental representation splits as \[\left({\bf 2|2n-2}\right)\quad\rightarrow\quad\left({\bf 1|n-1}\right)_{+ \frac{1}{2}}\oplus\left({\bf 1|n-1}\right)_{-\frac{1}{2}} \tag{5.21}\] which is thought of as, \[\left({\bf 2|2n-2}\right)\quad\rightarrow\quad\left({\bf 1|0}\right)_{+ \frac{1}{2}}\oplus\left({\bf 0|n-1}\right)_{+\frac{1}{2}}\oplus\left({\bf 0|n-1} \right)_{-\frac{1}{2}}\oplus\left({\bf 1|0}\right)_{-\frac{1}{2}} \tag{5.22}\] meaning that we can work in a basis of the form \[|2n\rangle\quad\rightarrow\quad|0\rangle\oplus|i\rangle\oplus|\bar{i}\rangle \oplus|\bar{0}\rangle \tag{5.23}\] where \(1\leq i\leq n-1\) and \(\overline{n-1}\leq\bar{\imath}\leq\bar{1}\) with \(\bar{\imath}=2n-1-i\). We define projectors on these four subspaces as follows \[\varrho=\left|0\right\rangle\left\langle 0\right|,\qquad\Pi=\sum_{i=1}^{n-1} \left|i\right\rangle\left\langle i\right|,\qquad\bar{\Pi}=\sum_{\bar{\imath}=2 n-2}^{n}\left|\bar{i}\right\rangle\left\langle\bar{\imath}\right|,\qquad\bar{ \varrho}=\left|\bar{0}\right\rangle\left\langle\bar{0}\right| \tag{5.24}\] and therefore the adjoint action of the coweight reads as \[\mu_{n}{=}\frac{1}{2}\varrho+\frac{1}{2}\Pi-\frac{1}{2}\bar{\Pi}-\frac{1}{2}\bar{\varrho} \tag{5.25}\] Each of the nilpotents \(N_{+}\) and \(N_{-}\) split as \((n-1)+(n-1)+(n-1)(n-2)/2\), they are generated by couples \(\left(X_{\bar{\imath}},X_{\{i\bar{j}\}}\right)\) and \(\left(Y^{i},Y^{\{ij\}}\right)\) where \(X_{\bar{\imath}}\) and \(Y^{i}\) are simply realized as \[\begin{array}{rcl}X_{i}&=&\left|0\right\rangle\left\langle\bar{\imath}\right| -\left|i\right\rangle\left\langle\bar{0}\right|\\ Y^{\bar{\imath}}&=&\left|\bar{0}\right\rangle\left\langle i\right|-\left|\bar{ \imath}\right\rangle\left\langle 0\right|\end{array} \tag{5.26}\] while the \(X_{\{i\bar{j}\}}\) and \(Y^{\{\bar{\imath}j\}}\) are symmetric in \(i\) and \(j\) and are given by \[\begin{array}{rcl}X_{\{i\bar{j}\}}&=&\left|i\right\rangle\left\langle\bar{j} \right|-\left|j\right\rangle\left\langle\bar{\imath}\right|\\ Y^{\{ij\}}&=&\left|\bar{\imath}\right\rangle\left\langle j\right|-\left|\bar{ j}\right\rangle\left\langle i\right|\end{array} \tag{5.27}\] We eventually have \[\begin{array}{rcl}X&=&b^{i}X_{i}+b^{\{i\bar{j}\}}X_{\{i\bar{j}\}}\\ Y&=&c_{\bar{\imath}}Y^{\bar{\imath}}+c_{\{i\bar{\imath}j\}}Y^{\{i\bar{\imath}j\}} \end{array} \tag{5.28}\] which verify \[[\mu_{n},X]=X\qquad,\qquad[\mu_{n},Y]=-Y \tag{5.29}\] as well as the nilpotency properties \(X^{2}=Y^{2}=0\) leading to \(e^{X}=1+X\) and \(e^{Y}=1+Y.\) Notice that the oscillators \((b^{i},c_{\bar{\imath}})\) are of fermionic nature, while \(\left(b^{\{i\bar{j}\}},c_{\{\bar{\imath}j\}}\right)\) are bosonic. We further have \[X\Pi=X\varrho=0\qquad,\qquad\varrho Y=\Pi Y=0 \tag{5.30}\] and \[z^{\mu_{n}}=z^{\frac{1}{2}}\varrho+z^{\frac{1}{2}}\Pi+z^{-\frac{1}{2}}\bar{ \Pi}+z^{-\frac{1}{2}}\bar{\varrho} \tag{5.31}\] This simplifies the expression of the L-operator as follows \[\begin{array}{rcl}\mathcal{L}^{\mu_{n}}&=&(1+X)\left(z^{\frac{1}{2}}\varrho +z^{\frac{1}{2}}\Pi+z^{-\frac{1}{2}}\bar{\Pi}+z^{-\frac{1}{2}}\bar{\varrho} \right)(1+Y)\\ &=&z^{\frac{1}{2}}\varrho+z^{-\frac{1}{2}}\Pi+z^{\frac{1}{2}}\bar{\Pi}+z^{- \frac{1}{2}}\bar{\varrho}+z^{-\frac{1}{2}}X\left(\bar{\varrho}+\bar{\Pi} \right)+\\ &&z^{-\frac{1}{2}}\left(\bar{\varrho}+\bar{\Pi}\right)Y+z^{-\frac{1}{2}}X\bar {\Pi}Y\end{array} \tag{5.32}\] This \(\mathcal{L}^{\mu_{n}}\) is represented in the projector basis \(\left(\varrho,\Pi,\bar{\Pi},\bar{\varrho}\right)\) as follows \[\mathcal{L}^{\mu_{n}}=\left(\begin{array}{cccc}z^{\frac{1}{2}}-z^{-\frac{1}{ 2}}b^{i}c_{i}&z^{-\frac{1}{2}}b^{i}c_{ij}&z^{-\frac{1}{2}}b^{i}&0\\ -z^{-\frac{1}{2}}b^{ij}c_{i}&z^{\frac{1}{2}}\mathbf{1}_{n-1}+z^{-\frac{1}{2}} \Phi\Psi&z^{-\frac{1}{2}}\Phi&-z^{-\frac{1}{2}}b^{i}\\ z^{-\frac{1}{2}}c_{i}&z^{-\frac{1}{2}}\Psi&z^{-\frac{1}{2}}\mathbf{1}_{n-1}&0\\ 0&-z^{-\frac{1}{2}}c_{i}&0&z^{-\frac{1}{2}}\end{array}\right) \tag{5.33}\] where \(\Phi=b^{\{i\bar{j}\}}X_{\{i\bar{j}\}}\) and \(\Psi=c_{\{i\bar{j}\}}Y^{\{i\}}\) are symmetric \((n-1)(n-1)\) matrices given by \[\Phi=\left(\begin{array}{ccc}b^{1\bar{1}}&\ldots&b^{1\overline{(n-1)}}\\ \vdots&\ddots&\vdots\\ b^{(n-1)\bar{1}}&\ldots&b^{(n-1)\overline{(n-1)}}\end{array}\right) \tag{5.34}\] and \[\Psi=\left(\begin{array}{ccc}c_{\bar{1}1}&\ldots&c_{\bar{1}(n-1)}\\ \vdots&\ddots&\vdots\\ c_{\overline{(n-1)}1}&\ldots&c_{\overline{(n-1)}(n-1)}\end{array}\right) \tag{5.35}\] as well as \[\Phi\Psi=\left(\begin{array}{ccc}b^{1\bar{1}}c_{\bar{1}1}&\ldots&b^{1\bar{ 1}}c_{\bar{1}(n-1)}\\ \vdots&\ddots&\vdots\\ b^{(n-1)\bar{1}}c_{\bar{1}1}&\ldots&b^{(n-1)\bar{1}}c_{\bar{1}(n-1)}\end{array}\right) \tag{5.36}\] ## 6 Super L-operators of D-type In this section, we focus on the basic Lie superalgebra of \(D(m|n)\) type in order to compute its corresponding super L-operators characterizing \(D\)-type superspin chains with the internal symmetry \[D(m|n)=osp(2m|2n),\qquad m\geq 3,n\geq 1 \tag{6.1}\] having \(2(m+n)^{2}-m+n\) dimensions and the rank \(r_{D(m|n)}=m+n\). It is defined by an even part \(D(m|n)_{\bar{0}}\) reading as \[D_{m}\oplus C_{n}=so(2m)\oplus sp(2n) \tag{6.2}\] and an odd part \(D(m|n)_{\bar{1}}\) generated by the bi-fundamental \((2m,2n)\) representation of \(D(m|n)_{\bar{0}}.\) The \(D(m|n)\) superalgebra has multiple graphical descriptions with \(m+n\) nodes represented by graded simple roots \(\{\tilde{\alpha}_{i}\}_{1\leq i\leq m+n}\). These are generated in terms of \(m+n\) fundamental unit weights given by the bosonic \(\{\varepsilon_{a}\}_{1\leq a\leq m}\) realising the roots of \(so(2m)\), and the fermionic \(\{\delta_{{}_{\rm A}}\}_{1\leq a\leq n}\) realising the roots of \(sp(2n)\); their mixing gives fermionic roots of \(D(m|n)_{\bar{1}}\). In these regards, recall that the super root system \(\Phi_{D(m|n)}\) of the Lie superalgebra \(D(m|n)\) has \(2(m+n)^{2}-2m\) roots that split into an even set \(\Phi_{\bar{0}}\) and an odd part \(\Phi_{\bar{1}}\) with content splitting as follows \[\begin{array}{c|c|c}\Phi_{D(m|n)}&\mbox{roots}&\mbox{number}&\mbox{range of Labels}\\ \hline\hline&\pm\alpha_{ab}^{\pm}&2m^{2}-2m&\\ \Phi_{\bar{0}}&\pm\beta_{{}_{\rm A}B}^{\pm}&2n^{2}-2n&\\ &\pm 2\delta_{{}_{\rm A}}&2n&\\ \hline&\gamma_{{}_{\rm A}B}^{\pm}&\\ \Phi_{\bar{1}}&-\gamma_{{}_{\rm A}B}^{\pm}&2mn&a=1,...,m\\ &2mn&\mbox{${}_{\rm A}=1,...,n$}\end{array} \tag{6.3}\] where we have set \[\alpha^{\pm}_{ab}=\varepsilon_{a}\pm\varepsilon_{b},\qquad\beta^{\pm}_{\mbox{\tiny AB }}=\delta_{\mbox{\tiny A}}\pm\delta_{\mbox{\tiny B}},\qquad\gamma^{\pm}_{ab}= \varepsilon_{a}\pm\delta_{\mbox{\tiny B}} \tag{6.4}\] A remarkable simple root basis generating the super root system \(\Phi_{D(m|n)}\) is given by the distinguished basis (\(\beta_{\mbox{\tiny A}},\gamma\),\(\alpha_{a}\)) having one fermionic root \(\gamma\) as given here below \[\begin{array}{lclcl}\beta_{\mbox{\tiny A}}&=&\delta_{\mbox{\tiny A}}-\delta _{\mbox{\tiny A}+1}&,&\mbox{\tiny A}=1,...,n-1\\ \alpha_{a}&=&\varepsilon_{a}-\varepsilon_{a+1}&,&a=1,...,m-1\\ \alpha_{m}&=&\varepsilon_{m-1}+\varepsilon_{m}&\\ \gamma&=&\delta_{n}-\varepsilon_{1}&\end{array} \tag{6.5}\] This simple root basis can be collectively denoted shortly as \(\tilde{\alpha}_{i}=(\beta_{\mbox{\tiny A}},\gamma_{n}\),\(\alpha_{a})\) with super label \(i=1,...,m+n\). Notice that this distinguished basis is characterised by the ordering of the set \(\{\delta_{\mbox{\tiny A}},\varepsilon_{a}\}\) of the fundamental unit weight vectors as follows \[\delta_{1},\quad\delta_{2},\quad...\quad\delta_{n-1},\quad\delta_{n};\quad \varepsilon_{1},\quad\varepsilon_{2},\quad...\quad\varepsilon_{m-1},\quad \varepsilon_{m} \tag{6.6}\] leading in turn to an ordering of the set of graded simple root as (\(\beta_{\mbox{\tiny A}},\gamma_{n}\),\(\alpha_{a}\)) and consequently to the Distinguished Dynkin diagram depicted in **Figure 7** where the graded simple roots are also reported. Notice that the basis in eq(6.6) gives a very particular ordering of the set \(\{\delta_{\mbox{\tiny A}},\varepsilon_{a}\}\) where all the \(\delta_{\mbox{\tiny A}}\)'s are put on the left and all the \(\varepsilon_{a}\)'s are on the right. In the general case where the \(\delta_{\mbox{\tiny A}}\)'s and the \(\varepsilon_{a}\)'s are mixed, there are \[N_{D(m|n)}=\frac{(n+m)!}{n!\times m!} \tag{6.7}\] possibilities of orderings of the type (6.6). This variety of orderings indicates that generally speaking the Lie superalgebra \(D(m|n)\) has \(N_{D(m|n)}\) possible super Dynkin diagrams \[\mathfrak{D}[D(m|n)]^{(\kappa)}\qquad,\qquad\kappa=1,...,N_{D(m|n)} \tag{6.8}\] and eventually \(N_{D(m|n)}\) of varieties of superspin chains of D-type. For the present study, we use the 3-gradings in Table eq(2.19) in order to generate two types of Lax operators for the distinguished superspin chain with \(D(m|n)\) symmetry. The first one is referred to as the spinorial L-operator \({\cal L}^{\mu_{m+n}}\) because it is linked to the (co)spinorial nodes of the super diagram **7.** The second one is labeled as \({\cal L}^{\mu_{n+1}}\) since it concerns the coweight \(\mu_{n+1}\) associated to \(\varepsilon_{1}-\varepsilon_{2}\) in **7.** Figure 7: Distinguished Dynkin diagram of the \(D(m|n)\) superalgebra having one fermionic simple root in Green color. Here, we have \(\varepsilon_{i}^{2}=1\) and \(\delta_{i}^{2}=-1\). ### The super L-operator \({\cal L}^{\mu_{m+n}}_{D_{m|n}}\) Due to the \(\mathbb{Z}_{2}\) automorphism symmetry of the distinguished super Dynkin diagram of the **Figure 7** which permutes the spinor and cospinor roots \(\alpha_{m+n}\) and \(\alpha_{m+n-1}\), we can deduce that the coweights \(\mu_{m+n}\) and \(\mu_{m+n-1}\) act in the same way on the Lie superalgebra \(osp(2m|2n)\) and therefore we have similar super L-operators. In fact, the 3-grading obtained by the cutting of one of the these nodes in the distinguished super Dynkin diagram is the same. We have \[osp(2m|2n)\quad\longrightarrow\quad so\,(2)\oplus sl(m|n)\oplus N_{+}\oplus N _{-} \tag{6.9}\] with the super special linear \(sl(m|n)\) as a sub- superalgebra. For this breaking pattern, we have the dimensions \begin{tabular}{|c|c|c|c|c|c|} \hline algebra & \(osp_{2m|2n}\) & \(so_{2}\) & \(sl(m|n)\) & \(N_{+}\) & \(N_{-}\) \\ \hline dim & \(2(m+n)^{2}{-}m+n\) & 1 & \((m+n)^{2}{-}1\) & \(\frac{(m+n)^{2}+n-m}{2}\) & \(\frac{(m+n)^{2}+n-m}{2}\) \\ \hline \end{tabular} (6.10) The fundamental representation \(\left({\bf 2m|2n}\right)\) of the superalgebra \(osp(2m|2n)\) splits in terms of the fundamental \(\left({\bf m|n}\right)_{\pm q}\) representations of \(sl(m|n)\) as follows \[\left({\bf 2m|2n}\right)=\left({\bf m|n}\right)_{+\frac{1}{2}}\oplus\left( \bar{\bf m}|\bar{\bf n}\right)_{-\frac{1}{2}} \tag{6.11}\] This decomposition means that the super L-operator \({\cal L}^{\mu_{m+n}}\) (and equivalently \({\cal L}^{\mu_{m+n-1}}\)) will be represented by a \(2(m+n)\times 2(m+n)\) matrix \[\left(\begin{array}{cc}L_{\rm I\kern-1.5ptI\kern-1.5ptI\kern-1.5ptI}&L_{ \rm I\kern-1.5ptI\kern-1.5ptI}\\ L_{\rm I\kern-1.5ptI}&L_{\rm I\kern-1.5ptI}\end{array}\right) \tag{6.12}\] with basis vector of the form \[\begin{array}{llll}\left|{\rm I\kern-1.5ptI}\right\rangle&\equiv\left|i \right\rangle\oplus\left|\alpha\right\rangle&\quad\mbox{for}\quad\left({\bf m| n}\right)_{+\frac{1}{2}}\\ \left|\bar{\rm I\kern-1.5ptI}\right\rangle&\equiv\left|\bar{i}\right\rangle \oplus\left|\bar{\alpha}\right\rangle&\quad\mbox{for}\quad\left(\bar{\bf m}| \bar{\bf n}\right)_{-\frac{1}{2}}\end{array} \tag{6.13}\] In this basis, the subscripts \(\pm 1/2\) are charges of \(so(2)\). The \(\left|i\right\rangle\) and \(\left|\bar{i}\right\rangle\) generate the (anti) fundamental representations \({\bf m}\) and \(\bar{\bf m}\) of \(sl(m)\); while the \(\left|\alpha\right\rangle\) and \(\left|\bar{\alpha}\right\rangle\) generate the (anti) fundamental representations \({\bf n}\) and \(\bar{\bf n}\) of \(sl(n)\). Notice that by ordering the basis vectors of \(osp(2m|2n)\) like \(\left(\left|i\right\rangle,\left|\alpha\right\rangle,\left|\bar{i}\right\rangle, \left|\bar{\alpha}\right\rangle\right),\) the labels takes the values \[\begin{array}{llll}1\leq i\leq m&,&n+m+1\leq\bar{\imath}\leq n+2m\\ 1\leq\alpha\leq n&,&2m+n+1\leq\bar{\alpha}\leq 2n+2m\end{array} \tag{6.14}\] In terms of the super labels \({\rm I\kern-1.5ptI}\) and \(\bar{\rm I\kern-1.5ptI}\) introduced in (6.13), we can rewrite these intervals in a short way like \[1\leq{\rm I\kern-1.5ptI}\leq m+n,\qquad n+m+1\leq\bar{\imath}\leq 2n+2m \tag{6.15}\] Using these super labels, we can construct the operators involves into the expression of the super Lax operators \(\mathcal{L}^{\mu_{m+n}}=e^{X}z^{\mu_{m+n}}e^{Y}\) associated with the breaking pattern (6.9) and the super diagram of the **Figure 7**. First, from eq(6.11) we learn that the adjoint action of the coweight \(\mu_{m+n}\) is given by \[\mu_{m+n}{=}\frac{1}{2}\sum_{\mathfrak{l}=1}^{m+n}\ket{\mathfrak{l}}\bra{ \mathfrak{l}}-\frac{1}{2}\sum_{\mathfrak{l}=m+n+1}^{2m+2n}\ket{\bar{\mathfrak{l }}}\bra{\bar{\mathfrak{l}}} \tag{6.16}\] which, by setting \(\mathbf{\Pi}=\sum_{\mathfrak{l}=1}^{m+n}\ket{\mathfrak{l}}\bra{\mathfrak{l}}\) and \(\bar{\Pi}=\sum_{\mathfrak{l}=m+n+1}^{2m+2n}\ket{\bar{\mathfrak{l}}}\bra{\bar{ \mathfrak{l}}}\) reads also as \(\mu_{m+n}{=}\frac{1}{2}\Pi-\frac{1}{2}\bar{\Pi}.\) The \(\mathbf{\Pi}\) and \(\bar{\Pi}\) are projectors of the fundamental representations of \(sl(m|n)\). In terms of \(sl(m)\oplus sl(n)\) vector basis, the \(\mu_{m+n}\) splits as follows \[\mu_{m+n}{=}\frac{1}{2}\left(\sum_{i=1}^{m}\ket{i}\bra{i}+\sum_{\alpha=m+1}^{ m+n}\ket{\alpha}\bra{\alpha}\right)-\frac{1}{2}\left(\sum_{\mathfrak{i}=m+n+1}^{2 m+n}\ket{\bar{\mathfrak{l}}}\bra{\bar{\mathfrak{l}}}+\sum_{\bar{\alpha}=2m+n+1}^{2 m+2n}\ket{\bar{\alpha}}\bra{\bar{\alpha}}\right) \tag{6.17}\] Second, the matrix operators \(X\) and \(Y\) in the expression \(e^{X}z^{\mu_{m+n}}e^{Y}\) belong respectively to the nilpotents \(N_{+}\) and \(N_{-}\); they can be expanded in terms of representations of \(sl_{m}\oplus sl\left(n\right).\) This feature follows from the decomposition of \(\dim N_{\pm}\) (6.10) as follows \[\dim N_{\pm}=\frac{m(m-1)}{2}+\frac{n(n+1)}{2}+mn \tag{6.18}\] involving the antisymmetric representation of \(\mathrm{sl(m)}\), the symmetric representation of \(\mathrm{sl(n)}\) and the bi-fundamental representation. So an explicit realization of \(X\) is given by using the generators \((X_{[i\bar{j}]},X_{(\alpha\bar{\beta})},\mathcal{X}_{i\bar{\alpha}})\) and the graded Darboux coordinates \((b^{[i\bar{j}]},\mathfrak{f}^{(\alpha\bar{\beta})},\beta^{i\bar{\alpha}})\) as follows \[X=b^{[i\bar{j}]}X_{[i\bar{j}]}+\mathfrak{f}^{(\alpha\bar{\beta})}X_{(\alpha \bar{\beta})}+\beta^{i\bar{\alpha}}\mathcal{X}_{i\bar{\alpha}} \tag{6.19}\] with \[\begin{array}{lll}X_{[i\bar{j}]}&=&\ket{i}\bra{\bar{j}}-\ket{j}\bra{\bar{ \mathfrak{l}}}\\ X_{(\alpha\bar{\beta})}&=&\ket{\alpha}\bra{\bar{\beta}}+\ket{\beta}\bra{\bar {\alpha}}\\ \mathcal{X}_{i\bar{\alpha}}&=&\ket{i}\bra{\bar{\alpha}}\end{array} \tag{6.20}\] verifying the nilpotency property \(X^{2}=0\) and indicating that \(e^{x}=I+X\). Similarly for the Y operator belonging to the nilpotent \(N_{-}\), we have \(Y^{2}=0\) with the expansion \[Y=c_{[\bar{j}i]}Y^{[\bar{j}i]}+\mathrm{g}_{(\beta\alpha)}Y^{(\alpha\bar{\beta} )}+\gamma_{\bar{\alpha}i}\mathcal{Y}^{\bar{\alpha}i} \tag{6.21}\] where \((c_{[\bar{j}i]},\mathrm{g}_{(\bar{\beta}\alpha)},\gamma_{\bar{\alpha}i})\) are graded Darboux coordinates that are conjugate to \((b^{[i\bar{j}]},\mathfrak{f}^{(\alpha\bar{\beta})},\beta^{i\bar{\alpha}})\) and where \[\begin{array}{lll}Y^{[\bar{j}i]}&=&\ket{\bar{j}}\bra{i}-\ket{\bar{\mathfrak{ l}}}\bra{j}\\ Y^{(\bar{\beta}\alpha)}&=&\ket{\bar{\beta}}\bra{\alpha}+\ket{\bar{\alpha}}\bra{\beta}\\ \mathcal{Y}^{\bar{\alpha}i}&=&\ket{\bar{\alpha}}\bra{i}\end{array} \tag{6.22}\] In total we have, we therefore have \(2[m(m-1)/2+n(n+1)/2]\) bosonic oscillators given by \((b^{[ij]},c_{\bar{[}\bar{[}\bar{j}i\bar{\jmath}]}),\)\((\mathrm{f}^{(\alpha\bar{\beta})},\mathrm{g}_{(\bar{\beta}\alpha)}),\) and \(2mn\) fermionic ones given by \((\beta^{i\bar{\alpha}},\gamma_{\bar{\alpha}i}).\) Using the properties \(X^{2}=Y^{2}=0\), the super Lax operator reads as follows \[\mathcal{L}^{\mu_{m+n}}=z^{\mu_{m+n}}+z^{\mu_{m+n}}Y+Xz^{\mu_{m+n}}+Xz^{\mu_{m+ n}}Y \tag{6.23}\] Using (6.16), we have interesting properties useful for the calculation of \(\mathcal{L}^{\mu_{m+n}}\). The charge operator \(z^{\mu}\) reads as \(z^{\frac{1}{2}\Pi-\frac{1}{2}\bar{\Pi}}\); and the nilpotent operators \(X\) and \(Y\) obey the properties \[\mathbf{\Pi}X = X,\qquad X\mathbf{\Pi}=0,\qquad\mathbf{\bar{\Pi}}Y=Y,\qquad Y\mathbf{\bar{ \Pi}}=0 \tag{6.24}\] \[X\mathbf{\bar{\Pi}}=X,\qquad\mathbf{\bar{\Pi}}X=0,\qquad Y\mathbf{\Pi}=Y, \qquad Y\mathbf{\Pi}=0 \tag{6.25}\] The super L-operator is calculated by substituting in (2.30) with (6.17-6.21) and using \(Xz^{\mu_{m+n}}=z^{-\frac{1}{2}}X\) and \(z^{\mu_{m+n}}Y=z^{-\frac{1}{2}}Y\); we have \[\mathcal{L}^{\mu_{m+n}} = z^{\mu_{m+n}}+z^{-\frac{1}{2}}X+z^{-\frac{1}{2}}Y\] \[+z^{-\frac{1}{2}}(\Phi\Psi+\Lambda\Gamma+\mathrm{f}^{\left( \alpha\bar{\beta}\right)}\gamma_{\bar{\alpha}i}X_{\left(\alpha\bar{\beta} \right)}\mathcal{Y}^{\bar{\alpha}i}+\beta^{i\bar{\alpha}}\mathrm{g}_{\left( \bar{\beta}\alpha\right)}\mathcal{X}_{i\bar{\alpha}}Y^{\left(\bar{\beta}\alpha \right)})\] \[+z^{-\frac{1}{2}}\beta^{i\bar{\alpha}}\gamma_{\bar{\alpha}i} \mathcal{X}_{i\bar{\alpha}}\mathcal{Y}^{\bar{\alpha}i}\] where we have set \[\begin{array}{lcl}\Phi=b^{[i\bar{j}]}X_{[i\bar{j}]}&&,&\Psi=c_{[\bar{j}i]}Y ^{[\bar{j}i]}\\ \Lambda=\mathrm{f}^{\left(\alpha\bar{\beta}\right)}X_{\left(\alpha\bar{\beta} \right)}&&,&\Gamma=\mathrm{g}_{\left(\bar{\beta}\alpha\right)}Y^{\left(\bar{ \beta}\alpha\right)}\end{array} \tag{6.26}\] The matrix form after multiplying with the overall factor \(z^{\frac{1}{2}}\) is given in the basis (6.13) by \[\mathcal{L}^{\mu_{m+n}}_{D(m|n)}=\left(\begin{array}{cccc}z+\Phi\Psi+\beta^ {i\bar{\alpha}}\gamma_{\bar{\alpha}i}&\beta^{i\bar{\alpha}}c_{\left\{\bar{ \beta}\alpha\right\}}&\beta^{i\bar{\alpha}}&\Phi\\ \mathrm{b}^{\left(\alpha\bar{\beta}\right)}\gamma_{\bar{\alpha}i}&z+\Lambda \Gamma&\Lambda&0\\ \gamma_{\bar{\alpha}i}&\Gamma&z&0\\ \Psi&0&0&z\end{array}\right) \tag{6.27}\] The \(\Phi,\Psi\) in (6.26) are anti-symmetric (\(m\times m\)) matrices while \(\Lambda\) and \(\Gamma\) are (\(n\times n\)) symmetric matrices reading explicitly as \[\Phi=\left(\begin{array}{cccc}0&\ldots&b^{1\overline{m}}\\ \vdots&\ddots&\vdots\\ b^{n\bar{1}}&\ldots&0\end{array}\right)\qquad;\qquad\Psi=\left(\begin{array}[] {cccc}0&\ldots&c_{1m}\\ \vdots&\ddots&\vdots\\ c_{\overline{m}1}&\ldots&0\end{array}\right) \tag{6.28}\] and \[\Lambda=\left(\begin{array}{cccc}b^{1\bar{1}}&\ldots&b^{1\overline{m}}\\ \vdots&\ddots&\vdots\\ b^{n\bar{1}}&\ldots&b^{n\overline{n}}\end{array}\right)\qquad;\qquad\Gamma= \left(\begin{array}{cccc}c_{\bar{1}1}&\ldots&c_{1n}\\ \vdots&\ddots&\vdots\\ c_{\overline{m}1}&\ldots&c_{\overline{m}n}\end{array}\right) \tag{6.29}\] ### The super L-operator \({\cal L}^{\mu_{n+1}}_{D_{m|n}}\) Now, we move to the investigation of the super Lax operator \({\cal L}^{\mu_{n+1}}\) associated with the second possible 3-grading of the Lie superalgebra \(osp(2m|2n).\) On the level of the distinguished Dynkin diagram, this decomposition is associated to the node of the simple root \(\alpha_{1}=\varepsilon_{1}-\varepsilon_{2}\) which leads to \[osp(2m|2n)\quad\rightarrow\quad N_{+}\oplus\mathbf{l}_{\mu}\oplus N_{-} \tag{6.30}\] with \[\mathbf{l}_{\mu}=so(2)\oplus osp(2m-2|2n)\] and the nilpotent \(N_{\pm}\) superalgebras dimensions given by \[\begin{array}{lcl}\dim\mathbf{l}_{\mu}&=&2(m-1+n)^{2}-m+n+1\\ \dim N_{\pm}&=&2\left(m-1\right)+2n\end{array} \tag{6.31}\] Under this breaking, the fundamental representation \({\bf 2m|2n}\) of the orthosymplectic \(osp(2m|2n)\) splits in terms of representations of \(so(2)\oplus osp(2m-2|2n)\) as follows \[\begin{array}{lcl}\left({\bf 2m|2n}\right)&\rightarrow&\left({\bf 2m-2|2n} \right)_{0}\oplus\left({\bf 2|0}\right)_{0}\\ \left({\bf 2|0}\right)&\rightarrow&\left({\bf 1|0}\right)_{+}\oplus\left({ \bf 1|0}\right)_{-}\end{array} \tag{6.32}\] where we have used the reducibility of \(so(2)\) to split the representation \({\bf 2}\) like \({\bf 1}_{+}\oplus{\bf 1}_{-}.\) By labeling \(\left({\bf 2m|2n}\right)\) by the ket vector \(|A\rangle,\) the decomposition (6.32) read in terms of low dimensional ket vectors as \[|A\rangle\quad\rightarrow\quad|+\rangle\oplus|{\sf A}\rangle\oplus|-\rangle\] (6.33a) with \[|{\sf A}\rangle=|i\rangle\oplus|\alpha\rangle\] and \[i=1,...2m-2,\,\alpha=1,...,2n.\] Using the projectors \[\begin{array}{lcl}\Pi_{+}&=&|+\rangle\left\langle+\right|\\ \Pi_{-}&=&|-\rangle\left\langle-\right|\end{array} \tag{6.34}\] and \[\Pi_{0}=\sum_{\alpha=1}^{2m+2n-2}|{\sf A}\rangle\left\langle{\sf A}\right|= \sum_{i=1}^{2m-2}|i\rangle\left\langle i\right|+\sum_{i=1}^{2n}|\alpha\rangle \left\langle\alpha\right| \tag{6.35}\] the adjoint action of the coweight reads as follows \[\mu_{n+1}=\Pi_{+}+q\Pi_{0}-\Pi_{-} \tag{6.36}\] with \(q=0\). Similarly, the \(2\left(m+n-1\right)\) generators of the nilpotent superalgebras \(N_{\pm}\) expand like \[\begin{array}{lclcl}X&=&{\cal B}^{\sf A}\mathbf{X}_{{}_{\sf A}}&=&b ^{i}X_{i}+\beta^{\alpha}{\cal X}_{\alpha}&\in N_{+}\\ Y&=&{\cal C}_{{}_{\sf A}}\mathbf{Y}^{{}_{\sf A}}&=&c_{i}Y^{i}+\gamma_{ \alpha}{\cal Y}^{\alpha}&\in N_{-}\end{array} \tag{6.37}\] where the \((b^{i},c_{i})\) are bosonic Darboux coordinates and \((\beta^{\alpha},\gamma_{\alpha})\) are fermionic homologue. The realisation of the generators of \(N_{\pm}\) is given by \[\begin{array}{rcl}\boldsymbol{X}_{\text{A}}&=&\left|+\right\rangle\left\langle \text{A}\right|-\left|\text{A}\right\rangle\left\langle-\right|\\ \boldsymbol{Y}^{\text{A}}&=&\left|\text{A}\right\rangle\left\langle+\right|- \left|-\right\rangle\left\langle\text{A}\right|\end{array} \tag{6.38}\] they split like \[\begin{array}{rclrcl}X_{i}&=&\left|+\right\rangle\left\langle i\right|- \left|i\right\rangle\left\langle-\right|\\ Y^{i}&=&\left|i\right\rangle\left\langle+\right|-\left|-\right\rangle \left\langle i\right|\end{array}\qquad,\qquad\begin{array}{rclrcl}\mathcal{ X}_{\alpha}&=&\left|+\right\rangle\left\langle\alpha\right|-\left|\alpha \right\rangle\left\langle-\right|\\ \mathcal{Y}^{\alpha}&=&\left|\alpha\right\rangle\left\langle+\right|-\left|- \right\rangle\left\langle\alpha\right|\end{array} \tag{6.39}\] Using these expression, we compute the powers of \(X\) and \(Y\); the non vanishing ones are given by \[\begin{array}{rclrcl}X^{2}&=&-\mathcal{B}^{2}\left|+\right\rangle\left\langle -\right|\\ Y^{2}&=&-\mathcal{C}^{2}\left|-\right\rangle\left\langle+\right|\end{array} \qquad,\qquad\begin{array}{rclrcl}\mathcal{B}^{2}&=&\left(\mathbf{b}^{2}+ \beta^{2}\right)\\ \mathcal{C}^{2}&=&\left(\mathbf{c}^{2}+\gamma^{2}\right)\end{array} \tag{6.40}\] with \[\begin{array}{rclrcl}\mathbf{b}^{2}&=&b^{i}\delta_{ij}b^{j}\\ \mathbf{c}^{2}&=&c_{i}\delta^{ij}c_{j}\end{array}\qquad,\qquad\begin{array}{rclrcl} \beta^{2}&=&\beta^{\alpha}\delta_{\alpha\beta}\beta^{\beta}\\ \gamma^{2}&=&\gamma_{\alpha}\delta^{\alpha\beta}\gamma_{\beta}\end{array} \tag{6.41}\] Substituting, the super L-operator \(\mathcal{L}^{\mu_{1}}=e^{X}z^{\mu_{1}}e^{Y}\) expands as follows \[\mathcal{L}^{\mu_{1}}=(1+X+\frac{X^{2}}{2})\left(z\Pi_{+}+\Pi_{0}+z^{-1}\Pi_{- }\right)(1+Y+\frac{Y^{2}}{2}) \tag{6.42}\] In the basis (6.33a) and in terms of bosonic \((b^{i},c_{i})\) and fermioinc \((\beta^{\alpha},\gamma_{\alpha})\) oscillators, we have \[\mathcal{L}^{\mu_{n+1}}_{D_{m|n}}=\left(\begin{array}{llll}z^{2}\!+\!z(b^{i }c_{i}+b^{\alpha}c_{\alpha})\!+\!\frac{(\mathbf{b}^{2}+\beta^{2})(\mathbf{c}^ {2}+\gamma^{2})}{4}&zb^{i}\!+\!c_{i}\frac{\mathbf{b}^{2}+\beta^{2}}{2}&z\beta^ {\alpha}\!+\!\frac{(\mathbf{b}^{2}+\beta^{2})}{2}\gamma_{\alpha}&\frac{-( \mathbf{b}^{2}+\beta^{2})}{2}\\ zc_{i}+\frac{(\mathbf{c}^{2}+\gamma^{2})}{2}b^{i}&z\delta^{i}_{j}+b^{i}c_{j}&b^ {i}\gamma_{\beta}&-b^{i}\\ z\gamma_{\alpha}+\frac{(\mathbf{c}^{2}+\gamma^{2})}{2}\beta^{\alpha}&\beta^{ \alpha}c_{j}&z\delta^{\alpha}_{\beta}+\beta^{\alpha}\gamma_{\beta}&-\beta^{ \alpha}\\ -\frac{(\mathbf{c}^{2}+\gamma^{2})}{2}&-c_{i}&-\gamma^{\alpha}&1\end{array}\right) \tag{6.43}\] where we have multiplied by the factor \(z\). This matrix has a very similar structure to (4.26), the only difference concerns the size of the block of the subspace \(\left|i\right\rangle\) which is of \(2m-2\) dimensions here. ## 7 Conclusion and comments The present investigation is an extension of the results of the 4D CS/ Integrability correspondence formulated in [1, 3], and further complemented in [7]. In the latter, XXX spin chains were linked to a construction of line defects in four dimensional Chern-Simons theory, and L-operators solving the RLL equation were interpreted as the parallel transport on the phase space of magnetic 't Hooft line defects. This duality yields a simple and direct formula for the computation of minuscule L-operators based on Levi decompositions of the bosonic symmetry algebra, which are in turns directly deduced by cutting minuscule nodes from the associated Dynkin diagram. This general formula allowed to explicitly realize oscillator Lax operators for the spin chains with bosonic \(ABCDE\) symmetries. Some of these solutions are new to the spin chain literature while the others perfectly agree with the results obtained from Yangian based techniques. The generalization of this duality to the super case was initially treated in [46] for the case of \(sl(m|n)\) superspin chains. In analogy to the aforementioned bosonic construction, the oscillator realizations of super Lax operators solving the RLL equation for a superspin chain are deduced from special decompositions of the Lie superalgebra. Following this rationale, we constructed in this paper a list of Lax operators for superspin chains with internal symmetries given by the \(ABCD\) Lie superalgebras. In this regard, notice that these solutions are obtained for specific nodes of the super Dynkin Diagrams that act like minuscule coweights, meaning that they lead to Levi-like decompositions of these superalgebras. Notice moreover that we focused on the fundamental representation for all the symmetries treated here, indicating that the superspin states of the super-atoms of the super chains are represented in the fundamental. The graded L-operators obtained here are to our knowledge, still missing in the superspin chain literature, except for the solutions of \(sl(m|n)\) chains that were computed using degenerate solutions of the graded Yang-Baxter equation, see eq(2.20) in [52]. These matrices were rederived in [46] from the 4D Chern-Simons with \(SL(m|n)\) symmetry focusing on the distinguished Dynkin diagram and by extending features of the bosonic \(sl(m)\) spin chain. In this linear symmetry, all simple nodes are associated to minuscule coweights. Here, we gave a more general expression of these super Lax matrices for any node beyond the distinguished Dynkin diagram of \(sl(m|n)\); see (3.16) where the bosonic and fermionic oscillators are explicitly distinguished. Notice that the bosonic L-operators of the \(sl(m)\) spin chain [7; 12] can be recovered as a special case of the graded distinguished solutions by simply taking \(n=0\). Unlike the \(A\left(m|n\right)\) superalgebra, the distinguished Dynkin diagram of the \(B\left(m|n\right)\) superalgebra only leads to one Levi-like decomposition associated to the distinguished Dynkin diagram given by **Figure 5**. The graded Lax operator of the \(osp(2m|2n)\) superspin chain was constructed for this specific case as presented in (4.26). This novel result has an interesting similarity with the bosonic minuscule Lax operator of the B-type spin chain. This bosonic operator is calculated from the 4D CS in [13], and is associated to the only minuscule node of the \(so(2m+1)\) algebra which coincides with the node \(\varepsilon_{1}-\varepsilon_{2}\) on the \(B_{m}\) part of the distinguished Dynkin diagram of \(B\left(m|n\right)\). For the \(C(n)\) superspin chain, we also considered the distinguished Dynkin diagram of **Figure 6,** where we have two simple nodes for which we can calculate the L-operator using the formula (2.30). The first super L-operator is given in (5.17); it corresponds to the fermionic node \(\varepsilon-\delta_{1}\) and only contains fermionic oscillators. The second super L-operator (5.33) is associated to the last bosonic node \(2\delta_{n-1}\) which is equivalent to the minuscule node if we only consider the bosonic \(C_{n-1}\) part of the distinguished Dynkin diagram. The minuscule L-matrix of \(sp(2n)\) given in [13] can be recovered from the super matrix (5.33) as a special case. Finally, the distinguished Dynkin diagram of **Figure 7** of the \(D\left(m|n\right)\) symmetry yields three Levi-like decompositions for the \(osp(2m|2n)\) Lie superalgebra corresponding to the three bosonic nodes \(\varepsilon_{1}-\varepsilon_{2},\varepsilon_{m-1}-\varepsilon_{m}\) and \(\varepsilon_{m-1}+\varepsilon_{m}.\) The first one acts in a similar fashion to the vectorial minuscule node of the \(D_{n}\) Lie algebra, and the resulting super Lax operator (6.43) is also a genaralisation of the bosonic vector Lax operator calculated in [13]. The other two nodes are of spinorial nature, they are treated collectively since they lead to the same 3-grading and eventually to the same Lax operator given in (6.27). Their bosonic counterparts have similar properties as studied in [7, 13]. As extensions of this work, one can follow the demarche presented here in order to study other superspin chains in the framework of four-dimensional Chern-Simons gauge theory; in particular those associated to exceptional Lie superalgebras \(F(4)\), \(G(3)\) and \(D(1,2;\alpha)\) whose 3-gradings are given in [57]. Interestingly, the generalization of the L-operator construction based on Levi-like decompositions, as well as the Costello-Yamazaki-Yagi formula [7], would allow to obtain solutions for all nodes of the Dynkin diagram even if it they don't correspond to Levi-like decompositions or minuscule coweights. These cases lead to 5-gradings of Lie superalgebras, as listed in [56]-[58]. In the end of this conclusion, we collect the expressions of the super oscillator realisations of Lax operators for the families \(A(m-1\mid n-1)\), \(B(m\mid n)\), \(C(n)\) and \(D(m\mid n)\) Lie superalgebras with 3-grading decompositions as \(g=\boldsymbol{l}_{\mu}\oplus N_{+}\oplus N_{-}\) where \(\boldsymbol{l}_{\mu}\) is Levi-like subalgebra and \(N_{\pm}\) nilpotent superalgebras. \begin{tabular}{|c|c|c|c|c|} \hline Superalgebra & Subalgebra \(\boldsymbol{l}_{\mu}\) & Nilpotent \(N_{+}\) & L-operators & Equations \\ \hline \(sl_{m|n}\) & \(sl_{k|l}\oplus sl_{m-k|n-l}\oplus gl_{1}\) & \((k+l)^{c}(m-k+n-l)\) & \(\mathcal{L}_{sl(m|n)}^{\mu}\) & (3.16) \\ \hline \(osp_{2m+1|2n}\) & \(osp_{2m-1|2n}\oplus gl_{1}\) & \(2m+2n-1\) & \(\mathcal{L}_{m|n}^{\mu+1}\) & (4.26)-(4.30) \\ \hline \(osp_{2|2n-2}\) & \(sp_{2n-2}\oplus gl_{1}\) & \(2\left(n-1\right)\) & \(\mathcal{L}_{C(n)}^{\mu_{1}}\) & (5.17) \\ & \(sl_{1|n-1}\oplus gl_{1}\) & \(\frac{n(n+1)}{2}-1\) & \(\mathcal{L}_{C(n)}^{\mu_{n}}\) & (5.33)-(5.36) \\ \hline \(osp_{2m|2n}\) & \(osp_{2m-2|2n}\oplus gl_{1}\) & \(2\left(m+n-1\right)\) & \(\mathcal{L}_{D_{m|n}}^{\mu_{m+n}}\) & (6.27)-(6.29) \\ & \(sl_{m|n}\oplus gl_{1}\) & \(\frac{(m+n)(m+n+1)}{2}-m\) & \(\mathcal{L}_{D_{m|n}}^{\mu_{m+1}}\) & (6.43) \\ \hline \end{tabular}
2309.16733
Resilience of Deep Learning applications: a systematic literature review of analysis and hardening techniques
Machine Learning (ML) is currently being exploited in numerous applications being one of the most effective Artificial Intelligence (AI) technologies, used in diverse fields, such as vision, autonomous systems, and alike. The trend motivated a significant amount of contributions to the analysis and design of ML applications against faults affecting the underlying hardware. The authors investigate the existing body of knowledge on Deep Learning (among ML techniques) resilience against hardware faults systematically through a thoughtful review in which the strengths and weaknesses of this literature stream are presented clearly and then future avenues of research are set out. The review is based on 220 scientific articles published between January 2019 and March 2024. The authors adopt a classifying framework to interpret and highlight research similarities and peculiarities, based on several parameters, starting from the main scope of the work, the adopted fault and error models, to their reproducibility. This framework allows for a comparison of the different solutions and the identification of possible synergies. Furthermore, suggestions concerning the future direction of research are proposed in the form of open challenges to be addressed.
Cristiana Bolchini, Luca Cassano, Antonio Miele
2023-09-27T19:22:19Z
http://arxiv.org/abs/2309.16733v2
# Resilience of Deep Learning applications: a systematic survey of analysis and hardening techniques ###### Abstract. Machine Learning (ML) is currently being exploited in numerous applications being one of the most effective Artificial Intelligence (AI) technologies, used in diverse fields, such as vision, autonomous systems, and alike. The trend motivated a significant amount of contributions to the analysis and design of ML applications against faults affecting the underlying hardware. The authors investigate the existing body of knowledge on Deep Learning (among ML techniques) resilience against hardware faults systematically through a thoughtful review in which the strengths and weaknesses of this literature stream are presented clearly and then future avenues of research are set out. The review is based on 163 scientific articles published between January 2019 and March 2023. The authors adopt a classifying framework to interpret and highlight research similarities and peculiarities, based on several parameters, starting from the main scope of the work, the adopted fault and error models, to their reproducibility. This framework allows for a comparison of the different solutions and the identification of possible synergies. Furthermore, suggestions concerning the future direction of research are proposed in the form of open challenges to be addressed. Convolutional Neural Network, Deep Learning, Deep Neural Network, Fault tolerance, Resilience analysis, Hardening, Hardware faults + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: Footnote †: journal: ACM + Footnote †: journal: ACM + Footnote †: + Footnote †: journal: ACM + Footnote †: Footnote †: journal: ACM + Footnote †: Footnote †: journal: ACM + Footnote †: Footnote †: journal: ACM + [MISSING_PAGE_POST] ## 1. Introduction The widespread adoption of Machine Learning (ML) in safety/mission-critical systems motivated a great attention towards the resilience of such complex systems against the occurrence of faults in the underlying hardware. Among all ML techniques, Deep Learning (DL) is the one that the research community is mainly focusing its attention on, also in terms of reliability issues. In fact, DL is widely used for vision and perception functionalities, which are particularly relevant for implementing human-assisting tasks (e.g., advanced driver-assistance systems) and represent the enabling technology for autonomous behaviors (e.g., unmanned aerial vehicles or rovers). DL consists of a set of specific Artificial Neural Network (ANN) models where multiple layers of processing are used to extract progressively higher level features and information from raw data, such as images taken from cameras. In general, faults can occur in (1) input data, (2) software, and (3) hardware, possibly causing the DL application to behave differently from what expected (e.g., (Botot et al., 2019)). Faults on _input data_ may derive from defective/broken sensors and devices, noise, as well as from adversarial attacks. Faults in _software_ usually originate from bugs or aggressive implementations. Finally, faults in _hardware_ may be caused by radiation, voltage over-scaling, and aging or in-field permanent stuck-at failures. When addressing with hardware faults, the underlying assumption is that the DL application has been designed and implemented to achieve the best performance (in terms of accuracy of the prediction tasks) with respect to requirements and constraints, and the input data is genuine. In this work we focus on _hardware faults_ and we investigate analysis and design methods and tools to evaluate and possibly improve the _resilience_ of DL algorithms and applications against this source of failure. Sometimes the term _robustness_ is interchangeably used, still referring to the mentioned hardware faults context; we do not investigate _resilience/robustness_ with respect to its design and implementation, nor to adversarial attacks, belonging to the security scenario. Moreover, although the design and training processes have an impact on the performance of the final implementation resilience, such facets are here considered only when they are tailored to the possibility to mitigate hardware fault effects. On this topic the body of knowledge is quite rich, and a very detailed analysis has been presented in [2], where the author introduces a comprehensive and extensive synthesis of analysis and hardening methods against faults affecting hardware platforms running ANN applications. The contribution details the various adopted fault models, the fault simulation/injection and emulation strategies presented in literature at that time, as well as the proposed solutions to make the Artificial Intelligence (AI) resilient against the analyzed faults/errors. A similar contribution is given by [3], where the authors analyze how faults in Deep Neural Network (DNN) accelerators such as Graphic Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs), affect the executed application. The analysis framework takes into account the different sources of faults and possible fault locations, and a few final considerations mention hardening solutions. The most recent contribution reporting part of the body of work on DL resilience is [4], analyzing some recent research and results focused on resilience assessment. The authors introduce the context and detail the fault analysis strategies and methods adopted when dealing with DL applications, reporting some novel solutions. These papers serve not only as a reference to prominent research up to that time instant, but also provide a concise explanation of the various existing techniques. To complete the scenario overview, three recent contributions that briefly discuss the state of the art and focus also on possible research challenges and opportunities are the works presented in [1; 5; 6], sometimes embracing also security-related considerations. As Figure 1 shows, the community is very active and the contributions of the last four years introduce new relevant elements and insights, motivating, in our opinion, a new review, that also introduces different perspectives with respect to recent ones (i.e., [4]); a classification framework as well as other synthetic considerations on the observed trends. Given the breadth of the domain and the many different facets, we define a boundary based on (1) the time window of the publication, selecting only those included in the Jan. 2019 - Mar. 2023 window, with a couple of exceptions mainly referring to surveys (e.g., [4]) and tools still commonly adopted to perform resilience analysis (e.g., [7]), to better frame the discussion; (2) the _adopted fault model_, by including only contributions that cover transient and permanent faults, excluding those that only affect weights stored in memories that are typically protected by Error Correcting Code (ECC) solutions; (3) the _DL algorithm_, by excluding works that strictly depend on the specific ANN architecture (e.g., spiking), such that the presented solutions can be broadly adopted; (4) the hardware platform running the application, by including CPUs and hardware accelerators, such as GPUs and FPGAs. The rest of the paper is organized as follows. Section 2 introduces the adopted search methodology aligned with the boundary of the domain previously mentioned, and the classification framework defined to analyze the available contributions. Section 3 reports the various contributions, characterized according to the defined analysis framework, briefly summarizing the most relevant aspects. Section 4 draws some considerations on the overall state of the art, highlighting open challenges and opportunities, while Section 5 concludes the paper. ## 2. Methodology Before presenting the proposed classification framework and the selected contributions, we here present the adopted search and selection process. ### Research design This study aims at conducting a systematic literature review to explore the current state of the art in the design and analysis of resilient DL applications against hardware faults and to observe the present research trends in this context. The purpose is to get an up-to-date overview of the available solutions, also identifying the open challenges and possible opportunities in the field. To this end we performed a thorough search and designed an analysis framework to classify the numerous contributions. ### Research method To gather the contributions within the area of interest, we started from Scopus and World of Science to collect papers that appeared in renowned venues (both journals and conferences), delimiting the time span between January 2019 and March 2023, and excluding all topic areas and keywords that would surely lead to not relevant publications. Tables 1 and 2 report the desired search strings and the actual ones in the mentioned repositories. The searches returned a very high number of contributions (1268) and we adopted the process reported in Figure 2 to filter out clearly unrelated contributions and to include other ones through reference mining and snowballing also Manuscript submitted to ACM. \begin{table} \begin{tabular}{l l} \hline \hline **Database** & **Search string** \\ \hline \multirow{7}{*}{ Scopus} & TITLE-ABS-KEY ( ( "Resilient" OR “Fault tolerant"" OR “Robust" OR “Dependab" OR “Reliab") AND ( "CNN" OR “DNN" OR ml OR “Convolutional Neural Network" OR “Deep Neural Network") AND ( "Soft error" OR seu OR fault ) AND PUBYEAR? 2018 AND ( EXCLUDE ( SUBJAREA,"PHYS") OR EXCLUDE ( SUBJAREA,"MATE") OR EXCLUDE ( SUBJAREA,"ENER") OR EXCLUDE ( SUBJAREA,"MATE") OR EXCLUDE ( SUBJAREA,"DECI") OR EXCLUDE ( SUBJAREA,"CHEM") OR EXCLUDE ( SUBJAREA,"CHEM") OR EXCLUDE ( SUBJAREA,"EART") OR EXCLUDE ( SUBJAREA,"BIOC") OR EXCLUDE ( SUBJAREA,"CENG") OR EXCLUDE ( SUBJAREA,"ENVI") OR EXCLUDE ( SUBJAREA,"MULT") OR EXCLUDE ( SUBJAREA,"SOCI") OR EXCLUDE ( SUBJAREA,"NEUR") OR EXCLUDE ( SUBJAREA,"MEDI") OR EXCLUDE ( SUBJAREA,"BUSI") OR EXCLUDE ( SUBJAREA,"HAEL") OR EXCLUDE ( SUBJAREA,"AGRI")) AND ( EXCLUDE ( LANGUAGE,"Chinese") OR EXCLUDE ( LANGUAGE,"French")) AND ( EXCLUDE ( EXACTKEYWORD,"Diagnos"))))) \\ \cline{2-3} & ((TS=("Resilient" OR “Fault tolerant"" OR “Robust"" OR “Dependab" OR “Reliab"")) AND TS=("CNN" OR "DNN" OR ML OR “Convolutional Neural Network" OR “Deep Neural Network" OR ML OR “Machine Learning" OR DL OR “Deep Learning"))) AND TS=("Soft error" OR SEU OR fault) \\ \hline \hline \end{tabular} \end{table} Table 2: The selected databases and formulated search strings. Figure 2: Flow diagram presenting the retrieval and screening process of the literature following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) process. on other search engines. More precisely, we initially excluded contributions (filter 1) based on the title, the abstract and the keywords. Indeed many results referred to the use of ML/DL for resilience and diagnosis, sometimes applied to out-of-scope contexts (e.g., power/transmission lines or not ML/DL applications). Through snowballing and reference mining we added new contributions, leading to a batch of 183 papers we read. Further filtering took place (filter 2) based on the strength of the contribution (length and/or venue) and the existence of a subsequent more mature/complete publication (163 papers). Finally, we selected a set of 71 papers considered as the review sources (filter 3) to have contributions presenting solutions of general validity, possibly excluding too specific scenarios or custom case study. 9 out of the 71 documents are surveys or position papers, thus we actually analyze and classify 61 papers, presenting novel contributions on the topic of interest. The characteristics of the search method as well as the outcomes are summarized in Table 3. The spreadsheet file with all the raw data analyzed during this systematic literature review process can be downloaded from [https://github.com/D4De/dl_resilience_survey](https://github.com/D4De/dl_resilience_survey). ### Classification framework We have defined an analysis framework to carry out a rigorous classification of the selected papers. Figure 3 sketches the primary axes of this analysis framework, being a set of relevant aspects for the considered topic, i.e., system's _resilience_, and the referred application scenario, i.e., _DL applications_. A brief description of all the considered aspects, further synthesized in Table 4, is given in the following paragraphs. _Scope._ The primary element adopted to organize the contributions is the main goal of the presented solutions, broadly aggregated into two main classes; analysis and hardening methods. Contributions devoted to the development of techniques and tools to evaluate the resilience of the application against hardware faults belong to the _analysis methods_ class, those that present new approaches to to enhance the capabilities of the system to detect and mitigate the effects of Figure 3. The primary axes of the adopted classification framework, with a few sample values. \begin{table} \begin{tabular}{|l|l|} \hline Keywords: & soft error, resilience, dependable, fault tolerance, reliable, robust AND \\ & DL, CNNs, DNNs \\ \hline Repositories: & IEEE, ACM, Elsevier, Springer \\ \hline Search engines: & Google scholar, Semantic scholar, Scopus, lens.org, DBLP \\ \hline Publication years: & January 2019 - March 2023 \\ \hline Search outcome: & 1268 \\ \hline Analyzed contributions: & 163 \\ \hline Reported contributions: & 71 \\ \hline Novel technical contributions: & 61 \\ \hline \end{tabular} \end{table} Table 3. Search methodology details. hardware faults are included in the _hardening methods_ class. Indeed some contributions introduce innovative strategies to evaluate resilience and exploit such information to tailor a hardening method; these methods are included in the _both_ class. Finally, some publications explore the application of either traditional or recent methods to specific study cases, reporting outcomes and limitations, experiences others might benefit from; we classified them in the _case studies_ class. _Abstraction level._ Common to many fields of the digital systems' design area, approaches work at different levels of abstraction, within the entire hardware/software stack from the technological level to the application one. Moreover, multiple other aspects are highly dependent on the adopted abstraction level, therefore we prioritized it and identified the following six values, based on the main system element the proposed methods work on: * physical device, * logic netlist. * architectural description at RTL level, * hardware schema described in the Instruction Set Architecture (ISA), * software elements within the implementation of the single DL operators, * software elements in the dataflow graph of the DL model. _Hardware platform._ The type of misbehavior caused by faults affecting the hardware in the application execution is highly dependent on the underlying platform. Therefore, another key aspect in the proposed analysis framework is the hardware platform where the DL application is executed. Frequently adopted platforms are the GPUs and custom hardware accelerators, implemented on FPGA or ASIC; the CPU is used only in a few contributions, while the Tensor Processing Unit (TPU) is increasingly receiving interest. As we will see, some contributions, especially when acting at the application abstraction level, will not consider any specific hardware, thus being platform independent or _platform-agnostic_. _Fault model._ Every reliability study has a fundamental element driving the discussion, that is the source of the anomalous behavior the proposed approach is addressing. The reference abstraction level for the definition of the fault model is the logic/architecture one, where literature defines permanent models, such as the stuck-at faults, and transient ones, such as Single Event Upset (SEU). Some of the proposed methods work at the application level, not referring to a specific hardware platform; it is therefore not possible to identify the mechanisms causing the anomaly in the expected values/behavior. For these contributions we added a _functional_ fault model, transient and/or permanent, according to the authors' specification. Since many of the analyzed works act at a higher abstraction level, fault models are generally abstracted to derive corresponding error models. _Error model._ An error model describes the effects of the considered fault model at the selected abstraction level, and it affects one of the elements of the abstraction level. When working at device or RTL level, the relationship between fault and error are quite straightforward, when moving to higher abstraction levels, such a relationship is sometimes part of the contribution (for resilience analysis methods), sometimes omitted. Indeed, when adopting a functional fault model as previously discussed, fault and error models tend to be a unique element. Nevertheless, the error model is characterized by the specific corrupted _location_ which, once more, depends on the abstraction level. At device, RTL and microarchitectural levels, fault locations typically include registers and memory elements storing processed data and the DL model weights. At a higher levels of abstraction, error locations may also include parameters as single weights and bias constants, or data values, and even more complex data structures such as the outputs of the various neurons or the intermediate tensors produced by the layers in the DL model. Therefore, we identify the following values: i) corrupted register/memory element, ii) corrupted parameter, iii) corrupted data value, iv) corrupted neuron output, v) corrupted layer output. ML FrameworkThe design of DL applications is generally performed in specific ML frameworks guiding and easing this type of activity by providing ML operators already implemented and algorithms to automate the training and testing of the models. TensorFlow and PyTorch are examples of such frameworks. Several reliability studies and tools are developed and tailored for the specific ML framework, to enable the integration of the resilience activity in the design flow and to exploit the elements it provides. This axis of the classification collects this aspect when specific to the proposed solution. Tool supportThe availability of open-source tools is indeed beneficial to the entire scientific community, to foster further developments as well as fair comparisons. Our framework includes also this aspect, to indicate whether the authors make available the developed software to perform the presented analysis/hardening solutions. The list of urls of the available software is reported in the last part of the paper. ReproducibilitySimilarly to the previous aspect, we deemed relevant to be able to reproduce the outcomes of the study, in the future, to present a comparative analysis for supporting new solutions. To this end, we marked entries with a positive answer when the software is available or the adopted method is discussed in details allowing for it to be replicated. Analysis and hardening approaches can be further characterized with respect to the specific proposed solutions, namely the dependability attribute, injection method and analysis output in the former approaches, target outcome, hardening technique and hardening strategy in the latter. They are discussed in the following and summarized in Tables 5 and 6, respectively. Dependability attributeThe various analysis approaches may focus on the evaluation of different attributes falling under the umbrella of the dependability; generally, works use to quantitatively analyze a reliability metric. In the considered scenario, further works analyze the vulnerability to faults of the various layers, operators or parameters composing the DL model. Thus, the considered _dependability attribute_ is another characterizing aspect for the reviewed papers, that includes in our work the following values: i) reliability, ii) safety, and iii) vulnerability factor. Injection methodThe vast majority of the analyzed contributions rely on fault/error injection methods to perform the resilience analysis, and the specific one depends on the abstraction level of the work. Here we list the following values to include all included studies: * the final system is irradiated with nuclear particles. * faults are emulated on the target hardware platform. * processed data are corrupted during the execution of the software running non-necessarily on the target platform. Analysis outputWhen performing a resilience analysis, two main types of outcomes are typically reported: (1) a quantitative measure adopted as a figure of merit, or (2) a qualitative evaluation of the solution, based on various considerations.Sometimes, based on the analysis results, also guidelines for hardening the system are provided, often targeting the mitigation of the most susceptible elements in the analyzed DL model. In the set of selected papers, all contributions on analysis methods report a quantitative output and eventually some hardening guidelines, that is what we report in the final synthesis. Reliability property.Hardening approaches can be classified w.r.t. the reliability property the final system will exhibit, that in the present set of studies is either fault detection or fault tolerance. Hardening technique.In the DL scenario, as in other contexts, often the the hardening process relies on redundancy-based techniques. Some approaches adopt the classical techniques, such as Duplication with Comparison (DWC) possibly coupled with re-execution, Triple Modular Redundancy (TMR), N-Modular Redundancy (NMR) and ECC. Other works apply Algorithm-Based Fault Tolerance (ABFT) or Algorithm-Based Error Detection (ABED) techniques within the single DL operator, being the algorithm generally based on matrix multiplications. Finally, a last class of works exploits specific characteristics of the DL models, such as the adoption of fault-aware training strategies to exploit the intrinsic information redundancy in DL models to deal with the effects of a fault. Hardening strategy.Finally, various strategies can be adopted aimed at reducing the overhead of hardening redundancies. In particular, apart from the application of a technique to the entire application, _selective_ hardening is used to protect only the most critical portion of the system and _approximation_ strategies can be used to limit the overheads of redundant application replicas. Finally, some solutions design _specific_ versions of DL operators to obtain at their output a resilient result. A detailed list of the collected values for each one of the framework axes is reported in Tables 4 and 5. Indeed the framework can be extended in the future to include new relevant axes, and the values can always be incremented to cover newly reviewed solutions. ## 3. The State of the Art We classified the reviewed papers primarily based on their main contribution partitioning them into _analysis_ methods and _hardening_ ones, those studies that work on both aspects have been included in the group associated with the predominant contribution. ### Resilience Analysis This first class of works includes approaches for the analysis of the resilience of digital systems running DL applications w.r.t. the occurrence of faults. To further characterize them, we consider the abstraction level they work at, namely _application-level_, _hardware-level_ or _cross-layer_. Application-level methodologies aim at analysing the resilience of the DL engine ignoring the underlying hardware platform. Therefore, such works consider the engine at the dataflow graph-level and study the impact of errors corrupting the weights of the model, the output of the operators or the variables within the operators' execution. The advantages of these methodologies are i) the possibility to apply them early in the design process, as soon as the DL engine has been designed and trained; ii) easiness of the deployment (no hardware prototypes and/or instrumentation is required); and iii) the opportunity to work directly on the actual DL engine that will then be used. On the other hand, the solutions may suffer from poor accuracy because of the abstract adopted error models. It is vital for application-level analyses to properly work that the adopted error models actually capture the effects that the faults in the hardware platform cause in the executed application otherwise inconsistent and only partially useful results are obtained. Hardware-level methodologies exploit hardware-level fault injection platforms (mainly by emulating SEUs in the configuration memory of FPGAs or in the registers of GPUs) to accurately emulate the effects of faults in the hardware where the DL model will be executed. These approaches are highly accurate because of the ability of reproducing the faulty behavior, and are time-wise more sustainable than simulation solutions, since fault injection can be executed at speed. On the other hand, these approaches are generally hard to be deployed, demanding specific hardware-level skills that a design team specialized in DL may lack. Moreover, the application of resilience analyses belonging to this class are typically carried out late in the design process, thus making modifications expensive. Finally, cross-layer methodologies try to bring together the advantages of the previous methodologies by splitting the analysis into two steps. First, a hardware platform-specific fault injection or radiation testing activity is performed on a portion of the DL engine under analysis or on the single operators. In this way the actual effects of the faults \begin{table} \begin{tabular}{p{56.9pt} p{113.8pt} p{113.8pt}} **Classification Axis** & **Description** & **Values** \\ \hline \hline _Scope_ & The focus of the approach & Analysis (A), Hardening (H) or both (B) \\ \hline _Abstraction level_ & The abstraction level & Device (DEV), Logic (LOG), RTL, Microarchitectural (ISA), Algorithm (ALG), Application (APP) \\ \hline _Architectural platform_ & The hardware where the application is executed. Affects the fault/error model, the abstraction level, etc. & CPU, GPU, TPU, FPGA, ASIC, or any (in case of high abstraction-level methodologies) \\ \hline _Fault model_ & The source of the anomalous behavior & Stuck-at (SA), Single Event Upset (SEU), permanent functional (PFunc), transient functional (TFunc) \\ \hline _Error model_ & The effects of the fault at the selected abstraction level, identifying the corrupted element & register/memory element (REG), parameter (P), data value (DV), neuron output (NO), layer output (LO) \\ \hline _ML Framework_ & The exploited software ML framework, if specified & TensorFlow (TF, [8]), PyTorch (PT, [9]), Keras (KE, [10]), Darknet (DK, [11]), Caffe (CA, [12]), TensorRT (TR, [13]), cuDNN (cu [14]), N2D2 (ND [15]), FINN (FI [16]), CMSIS-NN (CM [17]), CMix-NN (CN [18]) \\ \hline _Tool support_ & Tools released & Yes/No \\ \hline _Reproducibility_ & The possibility to replicate / compare against & Yes/No \\ \hline \hline \end{tabular} \end{table} Table 4. Taxonomy axes \begin{table} \begin{tabular}{p{56.9pt} p{113.8pt} p{113.8pt}} **Classification Axis** & **Description** & **Values** \\ \hline \hline _Dependability attribute_ & Attribute of interest & Reliability (Re), Availability (Av), Safety (Sa), Vulnerability factor (VF) \\ \hline _Injection method_ & Fault injection method & Radiation (Ra), Emulation (Em), Simulation (Si) \\ \hline _Output_ & Kind/type of output of the analysis & quantitative metrics (QM), guidelines for hardening (HG) \\ \hline \hline \end{tabular} \end{table} Table 5. Analysis studies: further classification occurring into an FPGA, a GPU or a CPU while accelerating/executing a DL engine are captured. Then, the observed effects are used to feed a higher-level analysis/simulation engine to observe how these effects propagate through the subsequent layers of the model and if and how they affect the final output. An additional group gathers a number of works that serve as case studies, because they apply to a specific DLs model, or actually report application case studies, presenting interesting results that are though specifically tailored for the discussed context. A brief description of contributions that belong to this class and to the above mentioned groups follows. #### 3.1.1. Application-level Methodologies The paper in (Krishnan et al., 2017) presents one of the first tools for the resilience analysis of Convolutional Neural Networks (CNNs) by performing error injection at application level. The tool, developed within the Darknet ML framework, allows to corrupt the weights in the CNN model and to carry out error simulation campaigns. The goal of the tool is to analyze the safety of the DL applications; in particular, single experiments are classified as _masked_, _observed safe_ and _observed unsafe_, a threshold set to +/-5% is used to analyze the difference between the top ranked percentage in the erroneous result and the golden counterpart and to determine the safe/unsafe class the corrupted output belongs to. The paper considers permanent faults affecting the CNN weights, not whatsoever relating these permanent functional faults to realistic faults in the underlying hardware running the application. BinFI and TensorFlowFI, presented in (Krishnan et al., 2017) and (Krishnan et al., 2017) respectively, are two subsequent contributions from the same research group, who, among other works, designed, developed and distributed two fault injection frameworks to evaluate ML systems resilience. BinFI identifies safety-critical bits in ML applications, while TensorFlowFI analyzes the effects of hardware and software faults that occur during the execution of TensorFlow programs. The paper in (Krishnan et al., 2017) presents TensorFlowFI+, an extension of the TensorFI environment. In particular, TensorFI+ supports TensorFlow 2 models, allowing to analyze also non-sequential models by corrupting the output of the layers. An interesting feature of the framework is the possibility to inject faults during the training phase of the CNN under analysis. PyTorchFI (presented in (Krishnan et al., 2017)) is an error simulation engine for DNNs that exploits the PyTorch framework. The tool allows to emulate faults by injecting _perturbations_ in the weights and neurons of the convolutional layers of DNNs; the injected perturbations are functional errors, therefore no specific hardware architecture is considered. The analysis can be run on either CPUs or GPUs. A similar approach is implemented by Ares (Bellegretti et al., 2017), an application-level error simulator for \begin{table} \begin{tabular}{l l l} **Classification** & **Description** & **Values** \\ **Axis** & **Description** & **Values** \\ \hline \hline _Reliability_ & Aim of the hardening & Fault detection (FD), Fault tolerance (FT) \\ _property_ & & Duplication with Comparison (DWC), Triple Modular Redundancy (TMR), N-Modular Redundancy (NMR), DWC + Re-Execution (D+R) Algorithm- \\ _Technique_ & Adopted technique & Based Fault Tolerance (ABFT), Algorithm-Based Error Detection (ABED), Error Correcting Codes (ECCs), Checkpointing (CHK), DL-specific (DL) \\ \hline _Strategy_ & Type of action & Full, Selective (Sel), Specific (Spec), Approximated (Ax) \\ \hline \hline \end{tabular} \end{table} Table 6. Hardening studies: further classification DNNs1. Again, the tool supports the simulation of perturbations modeling faults affecting the weights, the activation functions and the state of the neurons. Several observations and guidelines are also drawn in the paper: i) the resilience of DNNs is strongly influenced by the data type and quantization of the weights; ii) some classes are more likely to be mispredicted than others; iii) faults in the weights are more likely to cause a misprediction than those in the activation functions; and, iv) the more weights are reused the higher the failure probability. Footnote 1: The tool is dated 2018, outside the boundary of this investigation. However, we included it, because it is adopted in several of the analyzed studies. An analytical model called SERN is proposed in [24] for the resilience analysis of CNNs w.r.t. soft errors affecting the weights. The results obtained by SERN are then validated against a set of fault injection experiments. In particular, by exploiting the proposed framework, the authors analyse the impact of the occurring faults w.r.t. i) the position of the affected bit within the stored value and ii) the size of the stored value itself.The authors further propose to harden the CNN under analysis by protecting the most significant bits of the weights via ECC and by selectively duplicating the first convolutions layers of the network. Working at this abstraction level, the attention is focused on the performance and behavior of the DNN with respect to different implementation strategies, when a fault corrupts its elements. More specifically, studies [25; 26; 27] explore the effects of quantization, compression and pruning on resilience. In particular, [25] explores the impact of transient faults on compressed DNNs with respect to different pruning rates and data precision. The adopted fault model is the single bit flip on random live values stored in latches or registers and the authors develop a fault injection framework dubbed _TorchFI_ to emulate such effects. The main outcomes of this work are: i) 16-bit integer quantization can mitigate the overall error propagation w.r.t. the 32-bit floating-point baseline; ii) while 16-bit quantization increases resilience, the more aggressive 8-bit quantization can produce a resilience drop; and iii) pruned networks being smaller and faster will be less prone to faults, therefore possibly achieving a better resilience. Similar quantization strategies are explored in [26], proposing a simulator for evaluating the resilience of DNNs based on the frameworks of Keras and Tensorflow. The targeted fault model includes SEUs in the inputs, in the weights and in the output of the operators. Finally, the work presented in [27] discusses a simulation analysis for understanding the fault resilience of compressed DNN models as compared to uncompressed ones. Simulation is then used to study the resilience of pruned and quantized DNNs w.r.t. not pruned and not quantized ones. The results presented in the paper demonstrate that on the one hand pruning does not impact the resilience of the DNN while, on the other hand, data quantization largely increases it. Another neural network element being tailored during the design and implementation of a system is the type of data, similarly to quantization. Approximation can be adopted to leverage model accuracy and implementation costs (e.g., execution time, hardware resource demand and power consumption). Since such representation choice has an impact on resilience, some studies investigate this aspect. The authors in [28] exploit the application-level error simulator presented in [19] to analyze the safety w.r.t. the occurrence of permanent faults in the weights of two different CNNs when varying the data type; both floating point and fixed point data types at different precision levels are considered. The conclusions drawn in the paper are that the most resilient data type and precision level depend on the specific model; moreover, the paper suggests to select the best suited solution by trading safety and memory footprint of the various alternatives. The authors of [29] use an ad-hoc designed ML algorithm to build a _vulnerability model_ of the parameters of the DNN under analysis. To reduce the number of required fault injection experiments to analyze the effects of bit flips, empirical considerations are introduced on the importance of the various bits within the value representation, both in the floating point and in the fixed point cases. The authors evaluate the benefits/loss of accuracy with respect to injecting faults in all locations showing that the outcome offers good opportunities. #### 3.1.2. Hardware-level Methodologies Libano and others investigates in various studies the resilience of CNNs accelerated onto FPGAs by means of both radiations tests and fault emulation. In particular, in [(30)] radiation testing experiments are performed to analyze the impact of data precision and degree of parallelism on the resilience of the network. The conclusions of the study are: i) lower precision means less hardware resources and consequently lower fault probability; and ii) more parallelism means more hardware resources but also faster execution thus, the best performance-resilience trade-off is reached with the highest achievable degree of parallelism. An analysis of the effects of SEUs in Binarized Neural Networks (BNNs) accelerated onto SRAM-based FPGAs is presented in [(31)]. The authors exploit the Xilinx FINN framework to build the BNN and the FPGA Reliability Evaluation through JTAG (FREtZ) framework for the fault injection activity. The outcome of such logic-level fault injection experiment is subsequently exploited to carry out an in-depth layer-per-layer analysis of the effects of the faults on the accuracy of the network. The results of this study show that BNNs are inherently resilient to soft errors. Additional examples of fault resilience analysis of CNNs accelerated onto FPGA devices are presented in [(32)] and [(33)]. In the former the authors explore alternative quantized designs and compare them against a classical TMR, to evaluate costs and benefits. In the latter the authors consider permanent stuck-at faults and explore their effects, investigating four typical CNNs, including Yolo. The analysis shows that hardware faults can cause both system exceptions, such as system stall and abnormal runtime, and prediction accuracy loss. A custom evaluation metric based on accuracy loss is exploited, also taking into account system exception probability; the nominal and TMR-protected versions are analyzed and compared against. Another analysis for CNNs accelerated onto FPGA devices is presented in [(34)], where the focus is on investigating the impact of various pruning techniques on the resilience of the network. Several interesting considerations are drawn: i) removing filters that marginally contribute to the final classification increases the resilience of the CNN w.r.t. fault in the configuration memory; ii) networks with higher pruning rates are more robust to errors affecting the weights; and iii) only a small percentage of weights (about 30%) can (when corrupted) actually modify the behavior of the network and the percentage is even smaller if we consider the ability of causing an accuracy loss (about 14%). A broad contribution to this class of solutions comes from Rech's team, analyzing the resilience to SEUs when executing DL applications on GPUs. In particular, in [(35)], radiation tests are used to cause realistic SEUs in the target device; then, they complement the first set of experiments with microarchitectural-level fault injection by means of the SASSIFI tool, to collect a more extensive set of results. In the experiments, various versions of the same CNN applications are analyzed, including the nominal versions and robust versions hardened by means of ECCs and ABFT strategies applied to the convolutional layer. In a subsequent work [(36)], the same research team evaluates with a similar approach the resilience of Google's TPU by means of radiation testing. The most interesting aspect of this work is the definition of a set of error models in terms of the spatial patterns of the erroneous values in the output tensor of the convolution operator. Finally, the work in [(37)] presents a strategy to estimate the criticality of processing elements (PEs) in a systolic array with respect to faults that may permanently affect one of them, by building and training a _neural twin_. The aim is to simplify the complexity (in terms of time) to analyze faults' effects with respect to solutions based on fault injection (as the authors did in the past) by using a trained model of the PE. The analysis on the single element offers the expected advantages and coherence with the PE real fault/error behavior, however the possibility to generalize and transfer the model to the rest of the PEs is still to be investigated. #### 3.1.3. Cross-layer Methodologies Fidelity (Steintein et al., 2017) is an accurate logic-level error simulator for DNNs accelerated via custom circuits. By exploiting a deep knowledge of the regular structure of DNN hardware accelerators, Fidelity is able to reproduce and track in software the effects of SEUs occurring in the underlying hardware platform and affecting both the weights and the neurons. Moreover, based on the application of Fidelity to a set of large networks the authors draw the following considerations: i) not only the weights but also neurons and neuron scheduling highly affects the resilience of the network; ii) the adopted data precision has an impact on the resilience; and, iii) the larger the perturbation in the output of the neuron, the more likely the network suffers from a mis-classification. The work in (Steintein et al., 2017) presents an analysis framework aimed at predicting the propagation of SEUs affecting the registers of a CPU executing a CNN. The SIMICS system simulator is employed to simulate the entire CPU and the executed CNN; corruptions in the CPU registers are introduced to simulate SEUs. A small set of fault simulation experiments are first performed to extract data that are later used to train a Generative Adversarial Network (GAN). The GAN represents the actual core of the methodology since, after its training, it is used to predict, layer by layer, the percentage of faults that will be masked, those that will cause a crash and the ones that will lead to a Silent Data Corruption (SDC). The work in (Steintein et al., 2017) presents another cross-layer error simulation framework; the approach is developed for a specific working scenario considering a microprocessor-based system running CNNs, focusing on faults affecting the RAM chip. The proposed approach is based on radiation experiments aimed at systematically analyzing the effects of the faults to build application-level error models, defined in terms of data corruption patters and occurrence frequencies; such models are specifically devoted to corrupt CNN parameters, such as weights and bias constants. These models are integrated into an in-house error simulator offering the possibility to run CNN resilience analysis at the application level, and, therefore, on any platform, without the need of actually deploying the CNN on the target architecture. The framework is used to evaluate the resilience of various implementations of the LeNet-5 CNN obtained by using different data types, using different precisions. A three-level resilience analysis environment is proposed in (Steintein et al., 2017). The first step is a profiling where each instruction of the DL model under analysis is associated with information such as input values, output result and opcode by means of NVBit (Steintein et al., 2018). As a second step, the microarchitectural fault injection for GPUs (called FlexGripPlus (Stein et al., 2018)) is employed to characterize the effects of SEUs affecting the microarchitectural resources of the GPU cores while they are executing a single layer of the CNN. Finally, the observed erroneous behaviors are used into a software-level fault simulation environment to analyse how faults propagate among the layers of the CNN. This enables a detailed analysis of the vulnerability factor of every layer in the considered CNN. The work in (Stein et al., 2018) presents a cross-layer framework for the analysis of CNN sensitivity against faults. The proposed framework relies on a CPU executing the CNN and on an FPGA-based accelerator implementing the operator where faults have to be injected; the actual fault injection is realized by bit-flipping the content of the configuration memory of the FPGA device. CLASSES (Stein et al., 2018) is a cross-layer error simulation framework developed in the TensorFlow ML framework. The tool is provided with a methodological approach to define error models starting from microarchitecture-level fault injection. More precisely, the method runs a preliminary fault injection campaign for each type of ML operator on the target architectural platform; then, corrupted output tensors are analyzed to identify recurrent spatial patterns of erroneous values and their frequency. Thus, error models are defined for each one of these ML operators in terms of an algorithmic description of how the output tensor of the operator should be modified according to the observed spatial patterns. Error models are stored in a repository used by the application-level error simulator that will run the entire CNN model and will inject errors on selected intermediate tensors produced by any operator. Since the error model captures the effects of the fault corrupting the target architecture, error simulation is performed at application level, on any machine, without the need to deploy the application on the target final hardware. The paper demonstrates the effectiveness of the tool and the companion approach in the scenario of Yolo CNN executed on a GPU, however, the approach is general and can be employed for any architecture and CNN model. (Krishnan, 2017) presents FireNN; it is a cross-layer resilience analysis engine for CNNs accelerated onto FPGAs. The tool allows to study how the task carried out by the CNN is affected by SEUs occurring either in the CNN weights or in the layers output. More precisely, the entire CNN is executed in software by means of the PyTorch framework while the CNN operator cons the target for the fault injection in transferred onto the FPGA device. Once the operator has been configured in the FPGA the fault is injected, the (possibly corrupted) operator output is collected and it is then reintroduced in the subsequent operators that, again, are executed in software. LLTFI (Krishnan, 2017) supports framework-agnostic fault injection in both C/C++ programs and ML applications written using any high-level ML framework. It uses LLVM to compile the DNN model in the Intermediate Representation (IR) targeted for CPU platform, that is used for fault injection activities. In this way, the tool supports injection at the granularity of single IR instructions, allowing also to observe at a fine-grain level the error propagation among the various parts of the DNN. Based on these capabilities, LLTFI provides guidelines and metrics to drive the selective instruction hardening, as demonstrated by the experimental activities discussed in the paper. #### 3.1.4. Case studies The paper in (Krishnan, 2017) from NVIDIA analyses the reliability and safety of a CNN (executed on a GPU) for object detection in the automotive application domain. Both fault simulation and radiation testing are carried out. It is one of the few papers where safety issues (Failure in Time in particular) are taken into account. The paper highlights how the use of ECCs for the protection of the content of the memory of the GPU increases the reliability of the system. On the other hand, the paper also states that ECC protection is not enough and that periodic structural tests are recommended to mitigate risks due to SEUs. The impact of SEUs occurring in the weights on the accuracy of CNNs is analysed in (Krishnan, 2017) via an ad-hoc designed fault simulation framework. GoogleNet, Alexnet, VGG16, and SqueezeNet are considered in the analysis and the target hardware platform is a GPU. The analysis is carried out targeting three aspects: data representation (fixed point versus floating point values), position of the corrupted bit within the value and position of the corrupted layer within the network. The outcome of the analysis refers that i) CNNs using fixed point values are much more resilient than the ones using floating point values; and, as expected, ii) faults occurring in the exponent of floating point CNNs have the biggest impact on resilience; and iii) the last layer of the network are the ones having the biggest impact on its resilience. The works in (Krishnan, 2017; Krishnan, 2017) deal with two different case studies, analyzing and improving the resilience of ResNet and GoogLeNet implemented on GPUs, respectively. In both cases the context is very specific such that, as the authors state, it is not possible to generalize the outcomes that, thus, can actually be exploited only in similar application contexts. Layer and kernel vulnerability is analyzed by performing a fault injection campaign via SASSIFI, to identify the most vulnerable aspects of the implemented model. In (Krishnan, 2017) the authors also selectively harden some of the kernels that exhibited high vulnerability, by triplicating them and voting the output. The paper in [52] presents an analysis of the resilience against SEUs affecting the weights of the LeNet5 CNN applied to the MNIST dataset. Based on the results of this analysis the authors draw several considerations: i) faults affecting the convolutional layers are more likely to cause a significant accuracy drop than faults affecting the fully connected layers; ii) the faults affecting the exponent of the floating point values used to represent the weights have the largest effect on the accuracy of the CNN; iii) the use of Sigmoid operators instead of ReLU ones decreases the resilience of the CNN; and iv) average pooling is more capable of preventing the propagation of faults compared to max pooling. ### Hardening Strategies The second class of reviewed works propose approaches for the hardening of systems running DL applications w.r.t. effects of faults corrupting the underlying hardware. These works focus on handling and mitigating SDCs, constituting the most dangerous effect of faults, because it is not detected by the system; a few contributions deal also with the recovery from Detected Unrecoverable Error (DUE). This class of works can be further partitioned into (1) approaches applying classical redundancy-based hardening strategies, and (2) design strategies exploiting peculiar characteristics of DL models. One of the main challenges in the hardening process is the fact that DL applications are compute intensive; therefore selective or approximated techniques are generally defined when considering redundancy-based strategies, to limit the costs of the hardening process. Moreover, DL models are internally redundant and presents specific peculiarities that can be exploited to introduce a degree of intrinsic resilience to faults in the designed applications. This second group of works exploits these properties to define resilience-driven design methods. #### 3.2.1. Redundancy-based techniques The work in [53] proposes two complementary selective hardening techniques for introducing fault tolerance in DL systems acting at application level, without targeting any specific architecture. The first technique works at design time to identify the most vulnerable feature maps. This vulnerability analysis is performed by means of metrics to estimate i) the probability of activation of a fault while processing a feature map and ii) the probability of propagation of the generated error to the primary outputs of the CNN. Then, most vulnerable feature maps are hardened by means of DWC, and, in the case of mismatch, re-execution is performed at run-time. The second proposed technique works at run-time and monitors with an ABED approach the outputs of each CNN inference. In particular, two metrics are used to classify the outputs as _suspicious_, and if needed, a re-execution is performed to recompute the results. These two metrics are defined based on empirical observations showing that, when considering a CNN for classification activities, the difference between the top two confidence classes exhibits a strong inverse relationship with the occurrence of a misclassification. The extensive experimental evaluation of the proposed techniques is performed in PyTorchFI, by the same authors, and is architecture agnostic. The work in [54] proposes a hardening approach based on selective application of classical redundancy-based techniques against both transient faults in computations and permanent faults in the memory storing the weights. The approach exploits techniques for explainable AI to identify the most susceptible locations in the CNN at the granularity of the single weight, and neurons in the feature map whose corruption will possibly cause a misclassifications with a high probability. Then, ECCs and TMR are selectively applied to the most critical weights and neurons, respectively. Even if the approach works at the application level and is prototyped in the PyTorch framework, it is particularly tailored for DNNs designed by using a low data precision, generally accelerated in hardware. The authors in [55] develop a so-called _Resilient TensorFlow_ framework, obtained by adding to TensorFlow a set of fault-aware implementations of its base operators, to address SEUs occurring in the underlying GPU device. Each new operator is implemented to execute a thread-level TMRed version of the nominal counterpart. Then, thread blocks are opportunistically scheduled and distributed on the GPU cores to avoid a single fault to corrupt multiple redundant threads. The proposed approach is validated by means of both application-level fault simulation, by means of TensorFI, and microarchitectural-level fault emulation, by means of NVBitFL. An interesting further contribution is the introduction of the Operation Vulnerability Factor, a metric used to evaluate the resilience of operations, to validate the proposed solution. In our opinion, the metric could be adopted to compare different solutions focused on hardening the single operator. The work in [56] puts together various preliminary contributions previously published by the same research group to harden CNNs executed on ARM CPUs. In particular, they evaluate through simulated fault injection at microarchitectural level, by means of the SOFIA tool [57], the resilience of various implementations of the same CNN with various data precision models (integers at 2, 4, and 8 bits). Based on the results, they harden the various CNNs by using two different techniques: i) a partial TMR applied at instruction level on sub-parts of the application, or ii) an ad-hoc allocation of variables to registers. The idea at the basis of this second technique is that minimizing the number of used memory elements reduces the area exposed to radiations and therefore system resilience improves, here measured in terms of Mean Work To Failure (MWTF). The experimental analysis is performed on the MobileNet CNN. The work in [58] proposes a selective hardening approach for CNNs. First, the approach uses the CLASSES error simulator [45] to characterize the vulnerability against SEUs of each layer in the CNN. This metric is defined as the percentage of faults corrupting the single layer causing the final CNN output to be functionally different from the golden one, i.e., _unusable_ as defined in [59]. As an example, when considering an image classification task, the output of the CNN is _usable_ when the input image is correctly classified, even if the actual output percentage values are slightly different from the golden ones; on the other hand, the output is _unusable_ when the output percentage values are highly corrupted thus causing a misclassification of the input image. Then, the overall robustness of the CNN is computed by combining the layers' vulnerability factors. The approach performs an optimization of the hardening based on a selective layer duplication to co-optimize the overall robustness of the CNN and its overall execution time. The approach is applied to a set of 4 different CNN applications targeting a GPU device. Another example of application-level selective hardening approach is the strategy in [60], that exploits a resilience score previously defined in [61] to rank neurons in the model; then, the approach prunes neurons classified as non critical to reduce memory footprint, and triplicates neurons classified as critical to improve model resilience. The strategy is implemented in the PyTorch framework without targeting a specific hardware platform, and the resilience of the system is evaluated against errors randomly modifying or setting to zero the output values of the single neurons. SHIELDeNN [62] and STARR [63] are two similar approaches, targeting BNNs implemented on FPGAs. Both tools perform a preliminary vulnerability analysis of the parameters of the BNN (in particular, weights and activation functions) to identify the most critical ones; this analysis is based on in-house fault simulators. Then, selective TMR is applied to the most critical parameters, at the granularity of entire layers in [62] and individual channels in [63]. Although both works target FPGA devices, they only harden against faults affecting the data memory storing BNN parameters, neglecting faults affecting the device configuration memory, whose corruption actually leads to a modified functionality. Still targeting FPGAs, [64] presents a methodology for achieving a lightweight fault tolerancefor CNNs. The idea is to avoid the classical TMR scheme by adopting an approximated NMR-based approach; instead of having three exact replicas of the CNN plus a voter, the proposed methodology exploits the so-called _ensemble learning_, an approach used in DL for increasing model accuracy. In particular, the technique introduces a number of redundant CNNs, that are simpler and smaller than the original one. During the training phase each CNN learns a _subset_ of the problem; then, during testing/deployment all CNN output responses are _merged_ by a _combiner_ module that produces the final output as the original CNN would have computed. The methodology is applied to various versions of the ResNet CNN and the resilience evaluation is performed by means of a fault injector corrupting the FPGA configuration memory. The work in [65] targets a hardware accelerator organized as a dataflow architecture for ML acceleration. The strategy exploits computing elements in the architecture currently having as activation value a zero or an identical value of a neighbor computing element; the aim is to duplicate the same computation of the neighbor element. Additional logic is introduced into the architecture to manage on-the-fly duplication of the computations, to check results and, if needed, to re-execute faulty elaborations. The advantage of the approach is to benefit from the massively parallel nature of the considered ML accelerator to introduce computation replicas at execution level without extending the architecture with additional computing elements. The architecture is experimentally validated onto an FPGA device by performing emulated fault injection in the registers of the RTL description. [66] introduces several ABFT schemes to detect and correct errors in the convolutional layers during the inference process; to this end the authors develop in Caffe framework a _soft error detection library for CNNs_, _FT-Caffe_. The approach is based on the adoption of checksum schemes and layer-wise optimizations, opportunely calibrated by means of a workflow that provides error detection and then error correction. Being it a runtime method, performance degradation is overhead aspect traded-off against fault resilience. Application-level error simulation is used by means of an in-house tool to evaluate the approach. Two ABED techniques are proposed in [67] and [68] for linear layers, i.e., convolutional and fully-connected layers. Both works, targeting GPU devices, are based on computation and checksum validation in matrix multiplication algorithms. In particular, the approach in [67] considers quantized models and is implemented in CUDA, using also the cuDNN library; the other layers are protected by traditional DWC. The experimental evaluation is performed through microarchitectural-level fault emulation by injecting single bit-flips in the layer inputs and outputs and weights, and through radiation testing. The approach in [68] defines two different checksum strategies: (1) a global one, being a refined version of the classical hardening scheme for matrix multiplication, and (2) a thread-level one, where the classical scheme is redesigned to aggressively use the GPU tensor cores. A design-time profiling approach, called _intensity-guided ABFT_, is used to decide for each CNN layer which strategy is the most efficient one in terms of execution time. The paper presents only an experimental evaluation of the performance of the proposed approach, neglecting reliability measures. Finally, it is worth mentioning another similar ABED strategy based on checksums [69], applicable to convolutional and fully-connected layers. As for the previous contributions, the authors propose a hardware module to accelerate computation and checksum validation. The evaluation is again performed at the application level within a custom error simulation environment developed in Keras and Tensorflow. #### 3.2.2. DL-based techniques The paper [70] introduces _Ranger_, a fault correction technique identifying and modifying values presenting a deviation from the nominal ones, presumably due to the occurrence of transient faults in the processed data. The intuition at the basis of this technique, previously discussed in the paper presenting BinFI [20], is that each layer in a DNN model produces in output tensors containing elements included in a specific value range. Moreover, if a SEU generates a corrupted value in the output tensor sensibly different from its nominal range, there is a high probability that this will cause the DNN to generate an erroneous output, an event that does not occur when the corrupted values is anyway within the nominal range. Thus, the proposed low-cost technique consists in introducing on the output of selected DNN layers a new operator that clips those output values that are outside identified restriction bounds. The proposed idea is implemented in TensorFlow and evaluated by means of TensorFI. Paper [71] presents a technique very similar to Ranger. The paper considers permanent faults in the weights of the DNN and defines a novel clipped version of the ReLU activation function, replacing output values larger than a given threshold with a 0. A methodology is proposed to identify a proper threshold capable of identifying possible faults causing out-of-range corrupted values and at the same time limiting the negative impact of this new operator on the accuracy of the overall DNN. The experimental evaluation is carried out by means of an in-house error simulator developed in PyTorch. The work in [72] proposes yet another value range limiting strategy, implemented by modifying the activation function to perform a clipping against a threshold. Based on the limitations of previous efforts in the same direction, the authors employ a fine-grained neuron-wise activation function, to be determined in a supplementary training phase, that follows the traditional accuracy training. To this end, the work proposes a two-steps framework that supports the design and implementation of a resilient DNN. The authors analyze the final implementation against memory faults, that is weights and biases of different layers, as well as parameters of activation functions. An in-house error simulator is developed in PyTorch for running an experimental evaluation. Results are compared against hardening solutions proposed in [71] and [70], showing an improvement. Few other papers present alternative strategies to address faults causing high-magnitude errors. For instance, the work in [73] combines quantization tailored on the parameter distribution at each DNN layer and a training method considering a specific loss function, optimistically exploiting the selected quantization scheme not to decrease the accuracy while pursuing a high resilience. This approach, validated in an ad-hoc application level error simulation framework developed in PyTorch, outperforms two different strategies proposed by the same authors [74; 75] and a state-of-the-art approach based on explicit value range clipping [71]. Another work exploiting the statistical distribution of the tensor values is proposed in [76]; it defines thresholds for localizing and suppressing errors. The technique is coupled with state-of-the-art checksum strategies for error detection. Experimental results show that this approach outperforms [74]. The authors in [77] also exploit the statistical distribution of the values in the output of the DNN, before applying the final softmax normalization, to detect outliers, which represent a suspicious symptom of a fault corrupting the system. In this class of papers, we found papers that optimize the memory overhead introduced by the application of ECC to the DNN weights by exploiting peculiar properties and characteristics of DNN models. As an example, the study in [78] proposes a novel training scheme, namely Weight Distribution Oriented Training (WOT), to regularize the weight distribution of CNNs so that they become more amenable for protection by encoding without incurring in overheads. The idea is to exploit the fact that weights in a well-trained CNN are small number, requiring a few bits to be represented with respect to the available ones. Therefore, part of the bits are used to hold the ECC, effectively using a 8-bit quantization strategy for the weights, to use the remaining bits for the checksum. The evaluations is performed at application level by means of a custom fault simulation method in PyTorch. Another similar work is presented in [79] where a Double Error Correcting code based on parity is adopted to protect weights against stuck-at faults. The proposed approach, prototyped in Keras, outperforms the one in [78]. Finally, other papers follow the same path, also broadening the field of analysis. As an example the authors in [80] continue the analysis of the robustness of the various data types by considering the recently introduced _Brain-Float 16 (bf16)_ format; since this data type is obtained by removing 16 bits from the mantissa of the standard 32 bit floating point, it presents a higher vulnerability to faults. Based on the robustness analysis, the authors define another similar coding scheme for the weights of the model. In particular, to avoid any memory overhead, a parity code is applied by using the Least Significant Bit (LSB) of each word as the checking bit; the intuition is that a change in the LSB marginally affects the model accuracy. Then, when a parity error is detected, the entire weight is set to zero; in fact, as studied in [81], a change of a single weight to zero generally does not affect the DNN result. A novel hardening paradigm, dubbed _fault-aware training_ is proposed in [82]. The idea behind this technique is to inject faults during the training process thus forcing the CNN to learn how to deal with the occurrence of faults at inference time. This promising technique on the one hand enables a _low-cost_ hardening but on the other hand it poses new challenges to the designer. Indeed, it is vital to identify the proper amount of faults to be presented to the CNN during the training phase; a high number could increase robustness, introducing the side-effect of preventing training convergence and producing an excessively large CNN. A reduced number of faults will result in a quick but possibly ineffective training. In the paper the newly proposed fault-aware training is coupled with two additional CNN model modifications aimed at mitigating high-magnitude errors: i) replacing the standard ReLU activation with its clipped counterpart, ReLU6 (originally proposed in [83]); and ii) re-ordering the layers in the CNN such that ReLU6 is always executed before batch normalization.The paper evaluates the proposed approach by considering a GPU target device and by using both microarchitectural fault injection (via NVbitFI [84]) and application level error simulation (via a Python-based in-house tool). Fault-aware training is also investigated in [85], where the authors introduce specific loss functions and training algorithm to deal with multiple bit errors. The evaluation is carried out at the application level by not considering any specific hardware platform. _Fault-aware weight re-tuning_ for fault mitigation is proposed in [86]. In this paper the authors first analyze the resilience against permanent faults of a Multiply and Accumulate (MAC) structure generally used in GPUs and TPUs. In particular, the authors analyse how the structure is sensitive to SA faults a CNN is w.r.t. i) the degree of approximation adopted in the employed multipliers; ii) the position of the faulty bit in the corrupted value; and iii) the position of the layer affected by the fault in the whole CNN.The authors propose to prune the weights that are mapped on the corrupted bits and that are thus going to be affected by the SA faults (previously identified through post-production test procedures). Once such pruning has been carried out, re-training of the CNN is performed. The experimental evaluation is performed by designing a systolic array architecture based on the considered MAC structure. Fault injection campaigns are run with an in-house error simulator in TensorFlow. The work in [87] first performs a systematic analysis of the Program Vulnerability Factor (PVF) of the various instructions of an ARM CPU executing DL applications. Experiments are performed by means of a fault emulation tool corrupting the ISA registers by means of the on-chip debugging interface. Then, it defines two techniques to harden the considered system against SDCs: (1) selective kernel-level DWC with re-execution, and (2) a _symptom-based_ technique checking all values of the intermediate results against a given threshold to trigger a re-execution when a value is above it. This second technique is based on the same intuition of the range restriction strategies discussed above (e.g., [70; 71]). Finally, the paper considers the adoption of kernel-level check-pointing to recover from crashes or other DUE. In a subsequent work [88], the same authors note that output values of a DNN layer present a regular data distribution that can be analyzed at runtime to compute, during the inference process, the two thresholds to be used for the range restriction technique. [89] focuses on a different perspective with respect to all previous contributions: the impact of faults during model training. An in-house error simulator is defined within the Caffe framework to inject bit-flips in the variables to simulate SEUs affecting the High Performance Computing (HPC) system running the training procedure. Outcomes of such an analysis are that (as already emerged in other works for errors affecting floating point values and layers) i) most training failures result from higher order bit flipping in the exponents, and ii) convolutional layers are more failure prone. Moreover, the authors highlight how monitoring the value of the loss function among the various training iterations is an effective signal to detect most of the SDCs causing a training failure. Based on this observation, an ad-hoc error detection strategy is defined for training failures due to SEUs. The adoption of the two identified main classes, namely _resilience analysis_ and _hardening strategies_, to partition the reviewed contributions allows us to organize them based on the main focus of the novelty of the presented solution. Table 7 offers a bird's-eye view of this classification and summarizes the outcome. As mentioned, the classification framework we define allows us to capture the elements we deem more relevant emerging from the reviewed contribution, thus providing a guide in identifying pertinent state-of-the-art proposals to build upon or to compare against. Table 8 collects the 61 entries of the analyzed papers for an easy access to the information. \begin{table} \begin{tabular}{l l} Resilience analysis & [19][20][21][22][23][7][24][25][26][27][28][29] \\ Hardware-level methodologies & [30][31][32][33][34][35][36][37] \\ Cross-level methodologies & [38][39][40][41][44][45][46][47] \\ Case studies & [48][49][50][51][52] \\ Hardening strategies & [53][54][55][56][58][60][62][63][64][65][66][67][68][69] \\ \begin{tabular}{l} Deep Learning-based techniques \\ \end{tabular} & [70][71][72][73][76][77][78][79][80][82][85][86][87][89] \\ \end{tabular} \end{table} Table 7: Contributions according to their type. ## 4. Insights, Challenges and Opportunities The high number of pertinent contributions in the last four years (i.e., 183 authored by more than 400 scientists) shows a dynamic context, that in this decade has been fostering interesting and relevant outcomes, characterized by some common aspects, that we summarize in the following, together with open challenges and opportunities (beyond the ones highlighted by (Becker et al., 2016)). **Trend**: The number of contributions in the years has been increasing (as Figure 1 shows) if we consider that the spectrum of analysis and design targets has grown and the works reported in the chart cover only a limited research area (the one included in this survey) with respect to the total. **DL design impact on resilience**: Numerous are the studies that explore how different DL design choices - from data type, to data quantization, from pruning to compression - affect the resulting network resilience to faults corrupting both stored data (e.g., weights, neuron output) and manipulation (e.g., convolution output). Such impact, though, is heavily and strictly related to the specific adopted DL solution, and although some general considerations are drawn, there is no "one ground truth that applies to every case" so that, in our opinion, every time a DL application has to be deployed in a safety/mission-critical application domain, analysis and hardening solutions need to be specifically tailored. To this end, approaches providing usable tools and methods to analyze and harden a DL application seem to be of great interest. **Metrics**: For both the analysis and hardening strategies, most contributions can be partitioned into two classes, those evaluating resilience with respect to conventional reliability metrics, such as Mean Time To Failure (MTTF), Failures in Time (FIT), Architecture Vulnerability Factor (AVF), Program Vulnerability Factor (PVF), Kernel Vulnerability Factor (KVF) or the SDC rate (e.g., (Ross and Sauer, 2015; Sauer et al., 2016)) and those who adopt an _application-aware_ metric, more closely related to the specific and special context, such as usable/not usable (e.g., (Sandel et al., 2016; Sauer et al., 2016)). Both classical and innovative figures of merit are adopted or defined, leading to numerous alternative visions. Some of the best contributions report comparative results that allow the reader to identify benefits and potentials of the new discussed solutions, but the rich set of different quantitative metrics makes the task not an easy one. **Challenge:** Although the choice of the adopted metric depends on the application context, future efforts could go in the direction of reporting always the results also with respect to a commonly adopted metric, to enable fair comparisons. **Cross-layer strategies**: The complexity of the hardware platforms able to efficiently execute heavy ML/DL applications and that of the applications themselves initially led to contributions that worked either at the architecture level (working on faults) or at the application level (working on errors). However, the gap between these levels and the necessity to maintain a correspondence between faults and errors to provide a reliable susceptibility/resilience evaluation are spurring cross-layer approaches that explore and support such a fault-error relation. **Fault injection tools and their availability**: Considering the application context and the involved elements, fault injection is a critical task with respect to i) the experiment time, ii) the controllability/observability aspects, and iii) the adherence of the injected errors to the underlying realistic faults.Specifically targeting the domain of interest, several fault injection tools have been recently proposed, working from the architectural level ((Sandel et al., 2016; Sandel et al., 2016; Sandel et al., 2016; Sandel et al., 2016)) to the application one ((Sandel et al., 2016; Sandel et al., 2016; Sandel et al., 2016)), or cross-layer ((Sandel et al., 2016; Sandel et al., 2016)). Although several of them are available (see Table 9 for the available open-source software packages), when developing hardening techniques and strategies proprietary fault injection solutions are devised, sometimes to drive a selective hardening policy based on the analysis outcomes. **Challenge:** an ecosystem of available tools working at different abstraction levels, on different platforms could indeed allow for a systemic effort to tackle DL resilience for present and future challenges. **Reproducible research**: One of the critical activities when developing new methods is the evaluation of their performance with respect to existing ones, to motivate the introduction of yet another approach. Often, the comparison is carried out against the vanilla solution, the baseline implementation without any sort of hardening. Indeed, only a few contributions (besides the ones proposing a new tool) share and make public their software/data. **Challenge:** incentive reproducible research to foster stronger contributions, as well as the possibility to move towards an integrated ecosystem of solutions for the different hardware/software/application variants. As an example, an available benchmark suite that offers for the various hardware/software/application contexts a reference to (1) compare solutions, and (2) support the integration of complementary approaches, could be a valuable asset for the community. **Community**: There are a number of very active research groups on the topic, that are steadily contributing to the discussion. To visually get an overview of such a community, the awareness and relationships among the research groups, as well as the typical venues where the topic is presented and discussed, we exploited VOSviewer ([93]). On the 163 papers considered eligible we explored co-authorship and (see Figure 4). The analysis identifies 68 authors having authored at least 3 papers on the topic, belonging to 14 clusters (research groups). Links between nodes represent a co-authorship. On the same dataset we explore the publication venues; the graphs in Figure 5(a) and 5(b) show the venues where the included contributions have been published, highlighting the number of documents and the cross-references among venues, respectively. Finally, we analyse the set of included papers (71 in total) exploring the number of citations to possibly get insights on other scientists' awareness, reported in Figure 6. **Synergy opportunity**: This work, as well as past literature review analyses, shows that ML resilience, and DL in the specific, against faults affecting the underlying hardware is a research area exhibiting many challenges and facets, setting an opportunity for creating a synergy in the research community towards the development of an ecosystem of methods and tools that can tackle the different facets of DL resilience against hardware faults. \begin{table} \begin{tabular}{l l l} **Ref.** & **Name** & url \\ \hline \hline [7] & Ares & alugupta.github.io/ares \\ [20] & BinFI & github.com/DependableSystemsLab/TensorFI-BinaryFI \\ [21] & TensorFlowFI2 & github.com/DependableSystemsLab/TensorFI2 \\ [47] & LLTFI & github.com/DependableSystemsLab/LLTFI \\ [70] & Ranger & github.com/DependableSystemsLab/Ranger \\ [23] & PyTorchFI & github.com/PyTorchfi/PyTorchfi \\ [38] & FIdelity & github.com/silvaurus/FIdelityFramework \\ [54] & & github.com/Msaibh/FaultTolerantDnnXai \\ [33] & & github.com/ICT-CHASE/fault-analysis-of-FPGA-based-NN-accelerator \\ [49] & & github.com/cypox/CNN-Fault-Injector \\ [45] & CLASSES & github.com/D4De/classes \\ [22] & TensorFI+ & github.com/sabuj7177/characterizing_DNN_failures \\ \hline \end{tabular} \end{table} Table 9: Open-source software made available from the works presented in Table 8 (Tool support). ## 5. Concluding Remarks This paper collects and reviews the most recent literature (since 2019) on the analysis and design of resilient DL algorithms and applications against faults in the underlying hardware. The analysis includes 71 studies focused on methods and tools dealing with the occurrence of transient and permanent faults possibly causing the DL application to misbehave. Through a detailed search and selection process we reviewed 71 contributions, and analyzed them with respect to a classification framework supporting the reader in the identification of the most promising works based on the area of interest (e.g., with respect to the adopted fault model, error model or DL framework). The aim is twofold; i) mapping the active research landscape on the matter, and ii) classifying the contributions based on various parameters Figure 4. Co-authorship analysis with “authors” as the unit of analysis. In this analysis, the minimum number of documents for each author is 3, and the number of selected authors is 68 in 14 clusters, accordingly. Node size depends on the number of documents and the connecting lines between them indicate the collaboration between authors. The color spectrum represents the average number of citations. deemed of interest to support the interested reader in finding the relevant information they might be looking for (e.g., similar studies, solutions that might be applied, etc.). The study emphasizes the breadth of the research and actually defines some boundaries to limit the included contributions, focusing on DL applications and the most commonly used in the literature. Figure 5: Eligible studies: analysis of the publication venues with respect to (a) the number of contributions at such venue and the (b) cross-reference among them. A link between two items means that one of them cites the other and the color spectrum represents the average number of citations. adopted fault models, leaving other facets (e.g., spiking neural networks, manufacturing and process-variation faults) to future studies. Some insights and overall considerations are also drawn; the vibrant research on this topic and the broad spectrum of challenges calls, in our opinion, towards the development of an ecosystem of solutions that offer a support in the implementation of resilient DL applications.
2302.14256
Remote Sensing Scene Classification with Masked Image Modeling (MIM)
Remote sensing scene classification has been extensively studied for its critical roles in geological survey, oil exploration, traffic management, earthquake prediction, wildfire monitoring, and intelligence monitoring. In the past, the Machine Learning (ML) methods for performing the task mainly used the backbones pretrained in the manner of supervised learning (SL). As Masked Image Modeling (MIM), a self-supervised learning (SSL) technique, has been shown as a better way for learning visual feature representation, it presents a new opportunity for improving ML performance on the scene classification task. This research aims to explore the potential of MIM pretrained backbones on four well-known classification datasets: Merced, AID, NWPU-RESISC45, and Optimal-31. Compared to the published benchmarks, we show that the MIM pretrained Vision Transformer (ViTs) backbones outperform other alternatives (up to 18% on top 1 accuracy) and that the MIM technique can learn better feature representation than the supervised learning counterparts (up to 5% on top 1 accuracy). Moreover, we show that the general-purpose MIM-pretrained ViTs can achieve competitive performance as the specially designed yet complicated Transformer for Remote Sensing (TRS) framework. Our experiment results also provide a performance baseline for future studies.
Liya Wang, Alex Tien
2023-02-28T02:27:36Z
http://arxiv.org/abs/2302.14256v2
# Remote Sensing Scene Classification with Masked Image Modeling (MIM) ###### Abstract Remote sensing scene classification has been extensively studied for its critical roles in geological survey, oil exploration, traffic management, earthquake prediction, wildfire monitoring, and intelligence monitoring. In the past, the Machine Learning (ML) methods for performing the task mainly used the backbones pretrained in the manner of supervised learning (SL). As Masked Image Modeling (MIM), a self-supervised learning (SSL) technique, has been shown as a better way for learning visual feature representation, it presents a new opportunity for improving ML performance on the scene classification task. This research aims to explore the potential of MIM pretrained backbones on four well-known classification datasets: Merced, AID, NWPU-RESISC45, and Optimal-31. Compared to the published benchmarks, we show that the MIM pretrained Vision Transformer (ViTs) backbones outperform other alternatives (up to 18% on top 1 accuracy) and that the MIM technique can learn better feature representation than the supervised learning counterparts (up to 5% on top 1 accuracy). Moreover, we show that the general-purpose MIM-pretrained ViTs can achieve competitive performance as the specially designed yet complicated Transformer for Remote Sensing (TRS) framework. Our experiment results also provide a performance baseline for future studies. ## I. Introduction In the past several years, remote sensing images have become easily accessible due to more and more devices dedicated to data collection. As artificial intelligence (AI) is booming, the methods for performing computer vision (CV) tasks on those images have advanced rapidly. One common CV task is remote sensing scene classification, which takes an image and correctly labels it to a predefined class. Scene classification is an important task for many applications such as land management, urban planning, wildfire monitoring, geological survey, oil exploration, traffic management, earthquake prediction, and intelligence monitoring [1]. The machine learning (ML) methods for remote sensing scene classification have been studied extensively (e.g., [2], [3], [4], [5],[6], [7], [8], [9], [10], [11]). Most studies in the past adopted the classical two-stage training paradigm: pre-training plus fine-tuning. See Figure 1 for illustration, where the backbones for feature extractions such as ResNet [12], Vision Transformer (ViT) [13], and Swin-T [14] are commonly pretrained in a supervised manner on ImageNet dataset [15], and then linear classification head layers are added on top of backbones and got fine-tuned on the task datasets in a supervised learning means, too. Although ViTs have shown impressive performance over their convolution neural networks (CNNs) counterparts, they are prone to overfit the small datasets and usually require a large quantity of labeled datasets. In natural language processing (NLP), self-supervised pre-training methods like masked language modeling (MLM) have successfully addressed this problem. Motivated by MLM, BEiT [16] proposes Masked Image Modeling (MIM) to relieve the label-hungry problem of Transformers [17] while achieving impressive performance on various downstream tasks [18]. As such, the recent trend in CV has switched to adopting self-supervised learning (SSL) techniques (e.g., contrastive learning, MIM) for pre-training; see Figure 2 for illustration. SSL methods can pretrain backbones with unlabeled data by leveraging the structure present in the data itself to create supervised tasks (such tasks are often referred to as "pretext tasks"). To date, various MIM techniques for visual feature representation learning have been proposed (see Table 1 image and video rows for a comprehensive list). The most famous one is Masked Autoencoder (MAE) [19], which owns a very simple learning architecture but has been proven to be a strong and scalable pre-training framework for visual representation learning. MAE has attracted unprecedented attention and got various derivatives (e.g., CAE [20], ConvMAE [21], CMAE [22], GreenMAE [23], MixMIM [24]). To authors' knowledge, no research has ever explored MAE pretrained backbones for scene classification. Therefore, this research aims to evaluate MAE pretraining capability for the task. The remainder of the paper is organized as follows: Section II describes the related work, and Section III presents the selected four scene classification datasets. The results and discussion are presented in Section IV and V, respectively. Section VI is the conclusion. Figure 1: Pretraining in supervised manner plus fine-tuning [25]. Figure 2: Pretraining in self-supervised manner plus fine-tuning paradigm [26]. ## 2 Related Work ### Vision Transformer (ViT) ViT [13] was proposed to make the standard Transformer [17] architecture process image data efficiently. Unlike traditional CNNs whose filters can only attend locally, the global attention mechanism of ViTs can integrate information across the entire image. ViTs outperform the CNNs by almost four times in terms of computational efficiency and accuracy [75], and are replacing CNNs in the CV field. Although Transformer architectures have achieved so much success in the natural language processing (NLP) domain for a while, their success in the CV field was slow due to the different data characteristics between text and image (see Table 2 for comparison). An image could have thousands of pixels; in contrast, the input sequence length of text data is in tens. The computation complexity of Transformer is \(O(n^{2}d)\), where \(n\) and \(d\) are the input sequence length and embedding length, respectively. To deal with the problem, ViTs adopt a special method to preprocess the image data, which can be described as follows (see Figure 3 for illustration): Step 1. Split an image into non-overlapping patches (fixed sizes, e.g., 16 \(\times\) 16 or 32 \(\times\) 32). Step 2. Flatten the image patches. Step 3. Encode the flattened patches into linear embeddings. Step 4. Add positional embeddings to the patch embeddings of Step 3. Step 5. Feed the sequence as an input to Transformer encoder. This way, they can reduce input sequence length to \(n^{\prime}=\frac{W\times H}{p^{2}}\), where \(W,H,\text{and }p\) are width, height, and patch size of the image, respectively. With such preprocessing, Transformer architecture can process image data much efficiently. Next, the relevant MIM methods tested in our work will be presented. \begin{table} \begin{tabular}{c c c} \hline \hline \multicolumn{1}{c}{Text} & Image \\ \hline 1-dimensional & 2-dimensional \\ \hline Discrete & Continuous \\ \hline Low redundancy & High redundancy \\ \hline Low computation cost due to small \(n\) & High computation cost due to large \(n\) \\ \hline \end{tabular} \end{table} Table 2: Data characteristics comparison \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Vision**} & \multirow{2}{*}{Image} & \multirow{2}{*}{BEiT v1 [16], v2 [27], MAE [19], SimMIM [28], ADIOS [29], MIT [30], AttMask [31], Beyond-Masking [32], BootMAE [33], CAE [20], CAN [34], ConvMAE [21], Contrastive MAE [22], ContrastMask [35], dBOT [36], DMAE [37], Denoising MAE [38], GreenA MAE [33], iBOT [39], LoMaR [40], LS-MAE [41], MaxAlign [42], MaskDistill [18], MaskFaster [43], MaskTune [44], MetaMask [45], MFM [46], MILAN [47], MixMask [48], MixMIM [24], MRA [49], MSN [50], MST [51], MultiMAE [52], MVP [53], RC-MAE [54], SDMAE [55], SemMAE [56], SdAE [57], SupMAE [58], U-MAE [59], UM-MAE [60] \\ \hline Video & AdaMAE [61], Bevt [62], MAM2 [63], MAR [64], MaskViT [65], M3Video [66], MCVD [67], MotionMAE [68], OmnMAE [69], Spatial-Temporal [70], SSVH [71], VideoMAE [72], Vimpace [73], VRL [74] \\ \hline \end{tabular} \end{table} Table 1: MIM techniques for visual feature learning ## Appendix B Masked Autoencoder (MAE) MAE is an asymmetric autoencoder that uses ViTs in both its encoder and decoder, and the size of the decoder is smaller than the encoder, as illustrated in Figure 4. It directly infers masked patches from the unmasked ones with a simple loss of mean squared error (MSE). To save computation, the encoder only works on the unmasked patches; in contrast, the decoder works on both masked and unmasked patches trying to predict the original images. The masking ratio can be set up to 75%, which is considerably higher than that in BERT (typically 15%) [77] or earlier MIM methods (20% to 50%) [16, 78]. MAE's ablation study also points out that a high masking ratio is good for fine-tuning and linear probing [19]. With those meticulous designs, MAE is three times (or more) faster than BEiT [16] while achieving superior performance [19]. Figure 4: MAE architecture [19], Figure 3: ViT architecture [76]. ### Context autoencoder (CAE) Context autoencoder (CAE) [20] was also proposed for self-supervised representation pre-training of ViTs. Unlike MAE, the pretext goal of CAE is to predict the masked patches from the visible patches instead of the whole image. The architecture of CAE consists of an encoder, a latent contextual regressor with an alignment constraint, and a decoder (see Figure 5 for illustration). The working pipeline of CAE is as follows: Step 1. The visible patches are fed into the encoder to get their representations. Step 2. The encoded representations of visible patches together with mask queries are then fed to the contextual regressor to get the representation of masked patches. It should be noted that masked queries are learnable during the training. Step 3. The masked patches' presentations are also computed from the encoder. Step 4. An alignment constraint is applied on the outputs of Step 2 and Step 3, which are expected to be the same in representation space, to calculate a loss value. Step 5. Step 2's results are fed to the decoder for generating the masked tokens, which are then compared to the targets generated by feeding masked patches to the pretrained DALL-E tokenizer [79]. The difference here formulates another loss value. Step 6. Combine losses in Step 4 and Step 5 together for the optimization. Compared to BEiT [16], which combines the encoding and pretext task completion roles together, CAE separates them. This way, it can improve the representation learning capacity, which further supports downstream tasks. The masking ratio in CAE is 50%, which is lower than 75% of MAE. ### Masked Convolution Meets Masked Autoencoders (ConvMAE) ConvMAE [21], a derivative of the popular MAE [19], is proposed to train scalable visual representation with hybrid convolution-transformer architectures and masking convolution. It integrates both merits of local inductive bias from CNNs and global attention of ViTs. Although the modifications to the original MAE are minimal, ConvMAE has demonstrated great success on pre-training visual representations for improving the performance of various tasks [21]. ConvMAE can also provide multi-scale features while avoiding the discrepancy between pre-training and fine-tuning. Like MAE, ConvMAE architecture still consists of two parts: encoder and decoder (see Figure 6). However, its encoder is a hybrid convolution-transformer architecture, and its decoder part is still made of ViT. In addition, ConvMAE introduces a hierarchical masking strategy together with masked convolution to make sure that only a small number of visible tokens are fed into the transformer encoder layers (see Figure 6, top row). As shown in Figure 6, the encoder has three stages with output spatial resolutions of \(\frac{W}{4}\times\frac{H}{4}\), \(\frac{W}{8}\times\frac{H}{8}\), and \(\frac{W}{16}\times\frac{H}{16}\), respectively, Figure 5: CAE architecture [20]. where \(H\) and \(W\) are the height and width of the input image. The encoder can generate multi-scale features \(E_{1}\), \(E_{2}\), and \(E_{3}\), which capture both fine- and coarse-grained image information. The transformer blocks of encoder in Stage 3 aggregate and fuse three features together (see the bottom row blue block in Figure 6 for illustration) and send them to the decoder of ConvMAE, which still works on both visible and masked tokens (see the middle row green block in Figure 6 for illustration). The loss function is the same as the one used in MAE in which only masked patches are considered for the loss values calculation. Next, we will present the image datasets selected by this research for evaluating the performance of various MIM pretrained backbones on remote sensing scene classification. ## III Datasets We have chosen four well-known remote sensing scene image classification datasets for evaluation: 1) Merced land-use dataset [80], 2) Aerial image dataset (AID) [81], 3) NWPU-RESISC45 [2], and 4) Optimal-31 dataset [82]. The characteristics of these four datasets are summarized in Table 3. The rest of this section provides a short introduction for each of the datasets. ### Merced Dataset Merced Dataset [80] was released in 2010, and has 2,100 RGB images of 21 land-use scene classes. Each class contains 100 images of size \(256\times 256\) pixels with 0.3 m resolution. The images were extracted from the United States Geological Survey National Map [83]. Figure 7 shows the image samples from the 21 classes. \begin{table} \begin{tabular}{c c c c c c c} \hline Dimitri & Images & Fine & Size & Fine & Images & Depth & Depth & Depth \\ \hline UC Merced Land Use & 100 & 21 & 2,100 & 0.3 & 256\(\times\)256 & 2010 \\ \hline AID & 220–420 & 30 & 10,000 & 0.5–8 & 600\(\times\)600 & 2017 \\ \hline NWPU-RESISC45 & 700 & 45 & 31,500 & 0.2-30 & 256\(\times\)256 & 2017 \\ \hline OPTIMAL-31 & 60 & 31 & 1,860 & 0.3 & 256\(\times\)256 & 2017 \\ \hline \end{tabular} \end{table} Table 3: Selected classification dataset information Figure 6: ConvMAE architecture [21]. ### Aerial image dataset (AID) Dataset AID [81] dataset was published in 2017 by Wuhan University, China. It has 10,000 images. The images are classified into 30 classes with 220 to 420 images per class. The images were cropped from Google Earth imagery measuring 600 \(\times\) 600 pixels with a resolution varying from 0.5 m to about 8 m. Figure 8 shows the image samples from the 30 classes. ### NWPU-RESISC45 Dataset The NWPU-RESISC45 [2] dataset was published in 2017 by Northwestern Polytechnical University, China. It contains 31,500 remote sensing images grouped into 45 scene classes. Each class includes 700 images with a size of 256 \(\times\) 256 pixels, and the spatial resolution varies from about 30 to 0.2 m per pixel for most of the scene classes, except for the classes of island, lake, mountain, and beach, which have lower spatial resolutions. This dataset was also extracted from Google Earth that maps Earth by the superimposition of images obtained from satellite imagery, aerial photography, and geographic information system (GIS) onto a 3-D globe. The 31,500 images are collected from more than 100 countries and regions across the world, including developing, transition, and highly developed economies. Figure 9 shows one sample of each class from this dataset. ### Optimal-31 Dataset Optimal-31 [82] dataset was created in 2017 by Northwestern Polytechnical University, China. It contains 31 scene classes. Each class consists of 60 images with size of 256 \(\times\) 256 pixels. Figure 10 shows an example image for every class. The pixel resolution for the images is 0.3 m. Figure 7: UC Merced example images. Figure 8: AID example images. Figure 9: NWPU-RESISC45 example images. ## IV Results This section presents the four experiment results of remote sensing scene image classification with backbones pretrained with MAE, CAE, and ConvMAE, respectively. The results are also compared with those from 17 algorithms listed in [5], which contains the results from 16 CNNs and one specially designed Transformer-based architecture, Transformers for Remote Sensing (TRS). According to [5], TRS has achieved state-of-the-art performance. The implementation details and the corresponding results are presented as follows. ### Experimental Setup For a fair comparison, we tried to follow the same experiment setup laid out in [5] if possible. The training equipment setup is shown in Table 4. First, we downloaded the pretrained backbones directly from their official GitHub websites. Then, we carried out fine-tuning on the tested datasets. In specifics, all experiments were fine-tuned for 80 epochs. Optimizer was Adam. The initial learning rate was set to 0.0004, and weight decay was set to 0.00001. All images were reshaped to \(224\times 224\) sizes, and the batch size was set to 16. The top 1 accuracy (acc1) was used for the evaluation. The best performance metrics are highlighted in bold in the result tables. ### Data Augmentation Strategies During the fine-tuning stage, we adopted data augmentation for better performance. We adopted the MixUp [84] and CutMix [85] data augmentation techniques. For MixUp, two images are merged by linearly interpolating them along with their class labels to create a new training instance. CutMix is to cut a patch of one image and replace it with a patch from another image in the dataset (see Figure 11 for examples). We set the parameters as 0.8 and 1.0 for MixUp and CutMix, respectively. \begin{table} \begin{tabular}{|c|c|} \hline Operation System & Linux \\ \hline CPU & 2xAMD EPYC 7262 8-Core Processor \\ \hline Memory & 250 GB \\ \hline Framework & PyTorch 1.13.1 \\ \hline GPUs & 4xA100 \\ \hline \end{tabular} \end{table} Table 4: Experimental environment Figure 10: Optimal-31 example images. ### _Merced Dataset Classification Results_ For this dataset, 80% of the images were used as the training dataset, and 20% as the testing dataset. It should be noted that for column names, PT represents pre-training; FT is fine-tuning; lr is learning rate; and acc1 is top 1 accuracy rate. The results are listed in Table 5, from which we can see that large backbones ViT-L and ViT-H can achieve 100% accuracy. Compared to the previously published results, ranging 94.31% to 99.52%, from 17 deep learning methods listed in Table 3 of [5], no one has achieved such good performance. In addition, all MIM methods but ConvMAE perform better (99.76% to 100%) than the TRS method (99.52%) [5], which is the best method listed in Table 3 of [5]. ### _AID Dataset Classification Results_ For this dataset, 50% of the images were used as the training dataset, and 50% as the testing dataset. Table 6 lists the classification results for the AID dataset. Compared to the previously published results, ranging 86.39% to 98.48%, from 17 deep learning methods listed in Table 4 of [5], MIM methods achieve acc1 ranging from 97.5% to 98.15%, which still beats most of CNNs. It should be noted that our AID images were resized to \(224\times 224\) for using MIM pretrained backbones; instead, the TRS method used 600 \(\times\) 600 image size, which could contribute to performance differences. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Method & Training & Reduction & LTE & Market & EB-F-F-O-S & IT & Fit-F-F-O-S & Sumcs \\ \hline MAE & 1k, MAE & ViT-B & RGB & 75\% & 80 & 0.0004 & 98.00\% & ours \\ MAE & 1k, MAE & ViT-L & RGB & 75\% & 80 & 0.0004 & 97.90\% & ours \\ MAE & 1k, MAE & ViT-L & RGB & 75\% & 80 & 0.0004 & 98.14\% & ours \\ MAE & 1k, CAE & ViT-B & DALLE & 50\% & 80 & 0.0004 & 97.50\% & ours \\ CAE & 1k, CAE & ViT-L & DALLE & 50\% & 80 & 0.0004 & 97.50\% & ours \\ CAE & 1k, CAE & ViT-L & DALLE & 50\% & 80 & 0.0004 & 97.82\% & ours \\ ConvMAE & 1k, ConvMAE & ConvViT-B & RGB & 75\% & 80 & 0.0004 & 97.92\% & ours \\ TRS & 1k,sup & - & Labels & None & 80 & 0.0004 & **98.48\%** & [5] \\ \hline \hline \end{tabular} \end{table} Table 6: **Classification accuracy on AID dataset (50% for training)** Figure 11: **Examples of applying data augmentation techniques on Merced dataset [1].** \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Method & Training & Residence & LTE & Market & EB-F-O-S & IT & Fit-F-O-S & Sumcs \\ \hline MAE & 1k, MAE & ViT-B & RGB & 75\% & 80 & 0.0004 & 98.00\% & ours \\ MAE & 1k, MAE & ViT-L & RGB & 75\% & 80 & 0.0004 & 97.90\% & ours \\ CAE & 1k, CAE & ViT-L & DALLE & 50\% & 80 & 0.0004 & 99.76\% & ours \\ ConvMAE & 1k, CAE & ViT-L & DALLE & 50\% & 80 & 0.0004 & **100.00\%** & ours \\ ConvMAE & 1k, ConvMAE & ConvViT-B & RGB & 75\% & 80 & 0.0004 & **100.00\%** & ours \\ TRS & 1k,sup & - & Labels & None & 80 & 0.0004 & **100.00\%** & ours \\ \hline \hline \end{tabular} \end{table} Table 5: **Classification accuracy on UC-Merced dataset (80% for training)** ## Appendix E **NWPU-RESISC45 Dataset Classification Results** For this dataset, 20% of the images were used as the training dataset, and 80% as the testing dataset. Table 7 lists the classification results for the NWPU-RESISC45 dataset. Compared to the previously published results (76.85% to 95.56%) from 17 deep learning methods listed in Table 5 of [5], the MAE with ViT-H backbone (95.61%) can beat the previous best TRS method (95.56%). Once again, the experiment demonstrates the MIM pretrained backbones perform better (94.40% to 95.61%) than most of the CNNs, whose performances range from 76.85% to 94.43%. ## Appendix F **OPTIMAL-31 Dataset Classification Results** For this dataset, 80% of the images were used as the training dataset, and 20% as the testing dataset. Table 8 lists the classification results for this dataset. Compared to the previously published results (81.22% to 95.97%) from 10 deep learning methods listed in Table 6 of [5], ConvMAE with ViT-B backbone (96.51%) can beat the best TRS method (95.97%). Once again, the experiment demonstrates the MIM pretrained backbones perform better (93.20% to 96.51%) than most of the CNNs (81.22% to 94.51%). In addition, we compared the results between MIM and supervised learning pretrained ViTs listed in Table 8 and 9 of [5]. Obviously, for same backbone, our tested MIM methods learn much better representations than supervised pretraining methods (up to 5% on top 1 accuracy). For example, according to Table 8 of [5], supervised learning pretrained ViT-Base achieves 95.81% top-1 accuracy for Merced dataset (80% of training), and our tested MAE pretrained ViT-Base can achieve 99.76% top-1 accuracy, which denotes about 4% of improvement. ## Appendix V Discussion ### **Compare with Supervised Pretraining Methods** In addition, we compared the results of ViTs pretrained from our tested MIM methods and supervised learning methods published in the literature by far. Table 9 and Table 10 compares results from ViT-B and ViT-L which are pretrained with different methods, respectively. The best performance metrics are highlighted in bold in the result tables. Obviously, for same backbone, our tested MIM methods learn much better representations than supervised pretraining methods (up to 6% on top 1 accuracy). For example, supervised learning pretrained ViT-Base achieves 95.81% top-1 accuracy for Merced dataset (80% of training), and our tested MAE pretrained ViT-Base can achieve 99.76% top-1 accuracy, which denotes about 4% of improvement. In addition, MIM pretrained backbones with less data (1k) can outperform supervised learning methods with 21k data (see Table 9). ### **Application Scenarios** MIM methods have been proved to be a great way of learning visual feature representation and can support multiple domains such as object detection, segmentation, multi-modal learning, reinforcement learning, time series, point cloud, 3D-mesh, and audio. Figure 12 summarizes the various applications of MIM published in the literature. \begin{table} \begin{tabular}{c c c c c c c c c} \hline Method & Pretraining & RedTime & Full & Market & Full-Bordes & In & Full-Bord (\%) & Simero \\ \hline MAE & 1k, MAE & ViT-B & RGB & 75\% & 80 & 0.0004 & 94.40\% & ours \\ MAE & 1k, MAE & ViT-L & RGB & 75\% & 80 & 0.0004 & 95.31\% & ours \\ MAE & 1k, MAE & ViT-H & RGB & 75\% & 80 & 0.0004 & **95.61\%** & ours \\ CAE & 1k, CAE & ViT-B & DALLE & 50\% & 80 & 0.0004 & 94.71\% & ours \\ CAE & 1k, CAE & ViT-L & DALLE & 50\% & 80 & 0.0004 & 95.45\% & ours \\ ConvMAE & 1k, ConvMAE & ConvViT-B & RGB & 75\% & 80 & 0.0004 & 95.17\% & ours \\ TRS & 1k,sup & - & Labels & None & 80 & 0.0004 & 95.56\% & [5] \\ \hline \end{tabular} \end{table} Table 7: **Classification accuracy on NWPU-RESISC45 dataset (20% training)** \begin{table} \begin{tabular}{c c c c c c c c c} \hline Method & Pretraining & RedTime & Full & Market & Full-Bordes & In & Full-Bord (\%) & Simero \\ \hline MAE & 1k, MAE & ViT-B & RGB & 75\% & 80 & 0.0004 & 94.40\% & ours \\ MAE & 1k, MAE & ViT-L & RGB & 75\% & 80 & 0.0004 & 95.31\% & ours \\ MAE & 1k, MAE & ViT-H & RGB & 75\% & 80 & 0.0004 & **95.61\%** & ours \\ CAE & 1k, CAE & ViT-B & DALLE & 50\% & 80 & 0.0004 & 94.71\% & ours \\ CAE & 1k, CAE & ViT-L & DALLE & 50\% & 80 & 0.0004 & 95.45\% & ours \\ ConvMAE & 1k, ConvMAE & ConvViT-B & RGB & 75\% & 80 & 0.0004 & 95.17\% & ours \\ TRS & 1k,sup & - & Labels & None & 80 & 0.0004 & 95.56\% & [5] \\ \hline \end{tabular} \end{table} Table 8: **Classification accuracy on Optimal31 dataset (80% training)** ## VI Conclusion This study has explored the use of the backbones pretrained by the newly proposed MIM methods (i.e., MAE, CAE, ConvMAE) to perform challenging remote sensing scene classification tasks. We carried out experiments on four well-known scene classification datasets: Merced, AID, NWPU-RESISC45, and Optimal-31. Our experiments demonstrated that MIM pretrained ViT backbones consistently beat CNN backbones (up to 18% on top 1 accuracy). In addition, for the same ViT backbone, MIM can learn better representation than the supervised learning counterparts (up to 5% on top 1 accuracy). Furthermore, our tested MIM methods can achieve on-par performance \begin{table} \begin{tabular}{|c c c c c c|} \hline \hline Training & Prediction & Nored & MID & NWPU-RESISC45 & Optimal-31 \\ \cline{3-6} & & (Guitar training) & (Guitar training) & (Guitar training) & (Guitar training) \\ \hline 1k, sup [5] & ViT-L & 96.06\% & 95.13\% & 91.94\% & 91.14\% \\ 1k, MAE (ours) & ViT-L & **100\%** & **97.90\%** & 95.31\% & 95.70\% \\ 1k, CAE (ours) & ViT-L & **100\%** & 97.82\% & **95.45\%** & **96.24\%** \\ \hline \hline \end{tabular} \end{table} Table 10: **Comparing results from ViT-L from different pretraining methods** Figure 12: MIM applications. \begin{table} \begin{tabular}{|c c c c|} \hline \hline Training & Prediction & Nored & MID & NWPU-RESISC45 & Optimal-31 \\ \cline{3-6} & & (Guitar training) & (Guitar training) & (Guitar training) \\ \hline 1k, sup [5] & ViT-L & 96.06\% & 95.13\% & 91.94\% & 91.14\% \\ 1k, MAE (ours) & ViT-L & **100\%** & **97.90\%** & 95.31\% & 95.70\% \\ 1k, CAE (ours) & ViT-L & **100\%** & 97.82\% & **95.45\%** & **96.24\%** \\ \hline \hline \end{tabular} \end{table} Table 10: **Comparing results from ViT-L from different pretraining methods** as specially designed yet complicated TRS architecture. Our experiment results also provided a performance baseline for future studies. ## Acknowledgments The authors thank Dr. Kris Rosfjord and Dr. Heath Farris for their generous support of this project. We would also like to thank Mike Robinson, Bill Bateman, Lixia Song, Erik Vargo, and Paul A Diffenderfer of the MITRE Corporation for their valuable discussions, insights, and encouragement. ## NOTICE This work was sponsored by MITRE's Independent Research and Development Program. The contents of this document reflect the views of the authors and do not necessarily reflect the views of the Federal Aviation Administration (FAA) or the Department of Transportation (DOT). Neither the FAA nor the DOT makes any warranty or guarantee, expressed or implied, concerning the content or accuracy of these views.
2309.06520
Minimum Bayes' Risk Decoding for System Combination of Grammatical Error Correction Systems
For sequence-to-sequence tasks it is challenging to combine individual system outputs. Further, there is also often a mismatch between the decoding criterion and the one used for assessment. Minimum Bayes' Risk (MBR) decoding can be used to combine system outputs in a manner that encourages better alignment with the final assessment criterion. This paper examines MBR decoding for Grammatical Error Correction (GEC) systems, where performance is usually evaluated in terms of edits and an associated F-score. Hence, we propose a novel MBR loss function directly linked to this form of criterion. Furthermore, an approach to expand the possible set of candidate sentences is described. This builds on a current max-voting combination scheme, as well as individual edit-level selection. Experiments on three popular GEC datasets and with state-of-the-art GEC systems demonstrate the efficacy of the proposed MBR approach. Additionally, the paper highlights how varying reward metrics within the MBR decoding framework can provide control over precision, recall, and the F-score in combined GEC systems.
Vyas Raina, Mark Gales
2023-09-12T18:51:10Z
http://arxiv.org/abs/2309.06520v2
# Minimum Bayes' Risk Decoding for System Combination of Grammatical Error Correction Systems ###### Abstract For sequence-to-sequence tasks it is challenging to combine individual system outputs. Further, there is also often a mismatch between the decoding criterion and the one used for assessment. Minimum Bayes' Risk (MBR) decoding can be used to combine system outputs in a manner that encourages better alignment with the final assessment criterion. This paper examines MBR decoding for Grammatical Error Correction (GEC) systems, where performance is usually evaluated in terms of edits and an associated F-score. Hence, we propose a novel MBR loss function directly linked to this form of criterion. Furthermore, an approach to expand the possible set of candidate sentences is described. This builds on a current max-voting combination scheme, as well as individual edit-level selection. Experiments on three popular GEC datasets and with state-of-the-art GEC systems demonstrate the efficacy of the proposed MBR approach. Additionally, the paper highlights how varying reward metrics within the MBR decoding framework can provide control over precision, recall, and the F-score in combined GEC systems. 1 Footnote 1: Code available at: [https://github.com/rainavyas/mbr_gec](https://github.com/rainavyas/mbr_gec) ## 1 Introduction Ensembling, the combination of system outputs, is a powerful technique in deep learning, exploiting diverse model capabilities for robust predictions. Though numerous methodologies exist for system combination (Ganaie et al., 2021), when there is only access to model outputs, many methods are inapplicable and thus the simplest method becomes the averaging of model outputs. However, for sequence-to-sequence (seq2seq) systems, such as summarization, machine translation, and grammatical error correction (GEC), output averaging is less straightforward. A further challenge with seq2seq tasks is the mismatch between the decoding and assessment criteria. Kumar and Byrne (2004) proposed the utilization of Minimum Bayes' Risk (MBR) decoding as a means to select an output that minimizes the theoretical risk according to a designated reward metric. We propose a novel variant of MBR decoding for GEC to allow for system combination and give better alignment with the assessment criteria. The nature of a GEC task permits the use of MBR decoding within the "edit"-space. Each output sequence can be represented as a set of "edits" required to transform the input sequence into the output. Consequently, the selection of a single output sequence for GEC can be achieved through MBR decoding with a reward function defined on the set of edits, aligned with the edit-based F-score typically used in GEC assessment criteria. Beyond selection, an additional technique known as max-voting (Tarnavskyi et al., 2022) can be employed to combine different sets of edits. We propose an enhancement to the performance achieved through max-voting by treating the output sequences obtained from the combination as additional candidates for MBR decoding. Further, with a greedy MBR decoding algorithm, we explore the edit space to identify other candidate edit sets. Through experiments on three popular GEC datasets and use of state of the art GEC systems (Grammary's GECToR (Omelianchuk et al., 2020)), we demonstrate that our MBR decoding approach in the edit space consistently leads to significant performance gains. Further, we also show that by selecting different reward metrics as part of the MBR decoding approach we can provide explicit control over precision, recall and the overall F-score used to assess GEC systems. ## 2 Related Work **Grammatical Error Correction**: Early GEC systems using hand-crafted rules (Naber, 2003) were replaced by encoder-decoder architectures, using for example Recurrent Neural Networks (Cho et al., 2014). Today, many state of the art GEC systems use Transformer-based (Vaswani et al., 2017) encoder-decoder architectures to perform the sequence-to-sequence GEC task (Kaneko et al., 2020; Chen et al., 2020; Kiyono et al., 2019; Lichtarge et al., 2020; Stahlberg and Kumar, 2020). However, LaserTagger (Malmi et al., 2019), the PIE model (Awasthi et al., 2019) and Grammarly's GECToR (Omelianchuk et al., 2020) are all able to achieve competitive performance using a sequence-to-edit structure for the overall sequence-to-sequence task, where a token can be tagged with edit operations. Once a set of tags have been defined, the edit operations can be applied to the input sequence to generate the grammatically correct output sequence. The GECToR system is particularly efficient at inference as it uses a Transformer encoder followed by softmax over linear layers for edit tag prediction, which is significantly faster than standard sequence-to-sequence GEC system decoders. Further, Wu et al. (2023) demonstrated that GECToR performs better than the most recent generative large language models, e.g. ChatGPT (Brown et al., 2020), which tend to over-correct, compromising on recall performance. Hence this work uses the GECToR model as its base GEC architecture. **System Combination for seqseq systems**: Individual deep learning systems for classification tasks can be combined in many ways: stacking (Wolpert, 1992), negative correlation learning (Liu and Yao, 1999), max-voter schemes (Ju et al., 2018; Simonyan and Zisserman, 2014) or probability averaging (He et al., 2016; Raina et al., 2020; Szegedy et al., 2015). However, for generative language tasks such as GEC, where the output is a sequence of tokens, many traditional ensembling approaches are inapplicable. Sequence-level ensembling approaches, however, can address this by averaging conditional token level probabilities of multiple systems (Sennrich et al., 2015; Freitag et al., 2017; Malinin and Gales, 2021; Fathullah et al., 2021). However, this approach requires identical member architectures as well as access to the output probabilities of the predicted tokens. With the rising trend of limited black box access to large language models (e.g. ChatGPT (Liu et al., 2023)), system combination methods that only require the generated output sequences have practical benefit. With access to only the output sequences from individual seq2seq systems, it is challenging to combine them into a single output. For automatic speech recognition, Goel and Byrne (2000) select a single output using a simple Minimum Bayes' Risk (MBR) decoding approach (Kumar and Byrne, 2004), where the aim is effectively to select the _most average_/representative output sequence. Similarly Manakul et al. (2023) use MBR to combine sequences for clinical document summarization. The MBR approach has also recently been applied to machine translation (Rosti et al., 2007, 2007; Freitag et al., 2022; Muller and Sennrich, 2021; Zhang et al., 2022). For GEC systems, Tarnavskyi et al. (2022) propose a _max voting_ scheme, where only edits predicted by the majority of individual systems are retained. We further improve GEC performance by applying MBR decoding to a sequence selection set augmented with sequences from max voting. We further enrich this selection space with a greedy search over edits. ## 3 Output Sequence Combination for GEC A Grammatical Error Correction (GEC) system predicts a grammatically correct output sequence \(\mathbf{y}\) from an input sequence, \(\mathbf{x}\). With multiple different GEC system output sequence predictions, \(\mathcal{Y}=\{\mathbf{y}_{1},\ldots,\mathbf{y}_{N}\}\), for the same input sequence, \(\mathbf{x}\), it is challenging to combine them into a single, best sequence. It is useful to consider the _edit_-space, where a set of edits, \(\mathbf{e}_{n}(\mathbf{x},\mathbf{y}_{n})=\{e_{1},\ldots,e_{|\mathbf{e}_{n}|}\}\) can be used to represent each predicted output sequence, \(\mathbf{y}_{n}\)2. A single edit in the edit set can be defined fully by an input token in \(\mathbf{x}\) and an edit operation to apply (insertion, deletion or substitution). This section describes how Minimum Bayes' Risk decoding can be used in the edit-space to combine the different output sequences in \(\mathcal{Y}\). Footnote 2: Given an input sequence \(\mathbf{x}\) and an output sequence \(\mathbf{y}\) it is simple to create an edit set, using tools such as ER-RANT (Bryant et al., 2017). ### MBR decoding for GEC MBR decoding aims to select the most representative output sequence, \(\mathbf{y}^{*}\in\mathcal{Y}\). For GEC, we aim to maximise a reward score \(\mathcal{R}\) in the edit-space that encourages better alignment with the final assessment metric, \[\mathbf{y}^{*}=\operatorname*{arg\,max}_{\mathbf{y}\in\mathcal{Y}}\left\{ \mathbb{E}_{p(\tilde{\mathbf{y}}|x)}[\mathcal{R}(\tilde{\mathbf{e}}(\mathbf{x },\tilde{\mathbf{y}}),\mathbf{e}(\mathbf{x},\mathbf{y}))]\right\}, \tag{1}\] where the reward score, \(\mathcal{R}(\tilde{\mathbf{e}},\mathbf{e})\), views \(\tilde{\mathbf{e}}\) as reference edits and \(\mathbf{e}\) as the hypothesis/predicted edits. In practice, it is difficult to meaningfully estimate the posterior distribution, \(p(\tilde{\mathbf{y}}|x)\) for each output sequence. Hence, we consider only similarly performing systems' output sequences, \(\mathcal{Y}^{(\mathrm{c})}\in\mathcal{Y}\) to calculate the expectation of the reward and so we approximate each of these sequences as equiprobable, \[\mathbf{y}^{*}\approx\operatorname*{arg\,max}_{\mathbf{y}\in\mathcal{Y}^{( \mathrm{s})}}\left\{\frac{1}{|\mathcal{Y}^{(\mathrm{c})}|}\sum_{\tilde{ \mathbf{y}}\in\mathcal{Y}^{(\mathrm{c})}}\mathcal{R}(\tilde{\mathbf{e}}( \mathbf{x},\tilde{\mathbf{y}}),\mathbf{e}(\mathbf{x},\mathbf{y}))\right\}, \tag{2}\] where \(\mathcal{Y}^{(\mathrm{s})}\) represents the set of possible output sequences we want to select from. ### MBR decoding with edit voting Inspired by Tarnavskyi et al. (2022) the different edit sets, \(\{\mathbf{e}_{1},\ldots,\mathbf{e}_{N}\}\) associated with the different output sequences, can be combined to create a single edit set, \(\mathbf{e}^{(m)}\) containing all the individual edits present in at least \(m\) of the edit sets (i.e. \(m\)_votes_). This new combined edit set represents a new combined output sequence, \(\mathbf{y}^{(m)}\). The MBR decoding approach of Equation 1 can now be applied by simply including the combined sequence in the set of sequences to select from, such that \(\mathbf{y}^{(m)}\in\mathcal{Y}^{(\mathrm{s})}\). Note that the voting scheme can generate a maximum of \(N\) different combined sequences, with \(\mathbf{e}^{(1)}\) being the union of all edit sets and \(\mathbf{e}^{(N)}\) the intersection. Hence the selection space of sequences \(\mathcal{Y}^{(\mathrm{s})}\) can be made richer with an extra \(N\) sequences. ### Greedy MBR decoding for edit selection Instead of augmenting the selection set \(\mathcal{Y}^{(\mathrm{s})}\) with only a few sequences, it is useful to consider all possible edit sets. However, it is computationally infeasible to consider every possible edit set. Hence, this work proposes a practical, greedy method to increase the richness of the selection set. The minimal edit set is arguably the intersection of all edit sets, \(\mathbf{e}^{(N)}\). In contrast the set of possible edits is given by the union set, \(\mathbf{e}^{(1)}\). Hence, we can insert individual edits one by one from the union set to the intersection set. Every new edit insertion into the existing edit set represents a new output sequence \(\mathbf{y}\) (that can be added to \(\mathcal{Y}^{(\mathrm{s})}\)). However, we only retain the edit insertions that give a new output sequence that increases the MBR expected reward, \(\frac{1}{|\mathcal{Y}^{(\mathrm{s})}|}\sum_{\tilde{\mathbf{y}}\in\mathcal{Y}^{ (\mathrm{s})}}\mathcal{R}(\tilde{\mathbf{e}}(\mathbf{x},\tilde{\mathbf{y}}), \mathbf{e}(\mathbf{x},\mathbf{y}))\) from Equation 2. This way we can efficiently search a richer selection set, \(\mathcal{Y}^{(\mathrm{s})}\) of output sequences to find the best combined output sequence \(\mathbf{y}^{*}\). ### MBR reward score Equation 1 uses a reward score \(\mathcal{R}(\tilde{\mathbf{e}},\mathbf{e})\) to perform MBR decoding. Careful selection of the reward score allows for control over the desired metric to optimise. We can for example aim to combine systems in a manner that encourages better edit _recall_, \[\mathcal{R}^{(\mathrm{rec})}(\tilde{\mathbf{e}},\mathbf{e})=\frac{|\tilde{ \mathbf{e}}\cap\mathbf{e}|}{|\tilde{\mathbf{e}}|}. \tag{3}\] Conversely, it may be desirable to have a system with high precision, \[\mathcal{R}^{(\mathrm{prec})}(\tilde{\mathbf{e}},\mathbf{e})=\frac{|\tilde{ \mathbf{e}}\cap\mathbf{e}|}{|\mathbf{e}|}. \tag{4}\] However, it is usually desirable to have a GEC system with a good combination of precision and recall, as measured by a F-k score, \[\mathcal{R}^{(\mathrm{f}\{\mathrm{k}\})}(\tilde{\mathbf{e}},\mathbf{e})= \frac{(1+k^{2})|\tilde{\mathbf{e}}\cap\mathbf{e}|}{|\tilde{\mathbf{e}}|k+| \mathbf{e}|}. \tag{5}\] As the precision is more important than recall for GEC systems, this work aligns the reward metric with the F0.5 score. The Jaccard Similarity reward metric is also explored as an alternative in Appendix A. ## 4 Experiments ### Experimental setup We evaluate performance of the combined systems on three popular grammatical error correction corpora. **First Certificate in English (FCE)** corpus (Yannakoudakis et al., 2011) is a subset of Cambridge Learner Corpus (OpenCLC, 2019) made up of written examinations for general and business English of candidates from 86 different mother tongues, consisting of 2,720 test sentences. **Building Education Applications 2019 (BEA-19)**(Bryant et al., 2019) offers a test set of 4477 sentences, sourced from essays written by native and non-native English students. **Conference on Computational Natural Language Learning 2014 (CoNLL-14)**(Ng et al., 2014) test set consists of 1312 sentences sourced from 50 essays written by 25 non-native English speakers. Three different state of the art GECToR models are used as the individual systems to be combined 3. Each system uses a different Transformer encoder (bert (b), roberta (r) or xlnet (x)). Table 1 gives the performance of these individual systems 4. Footnote 4: GEC performance for CoNLL and FCE is measured using the ERRANT tool (Bryant et al., 2017). Note that CoNLL is often evaluated with a different scorer in other papers. BEA is evaluated using the online submission portal: [https://codalab.lsin.upsaclay.fr/competitions/4057](https://codalab.lsin.upsaclay.fr/competitions/4057) ### Results MBR decoding (Equation 2) is applied in the edit-space for the three individual GECToR systems' outputs (b,r,x). Here, as the systems have similar performance (equiprobable posterior assumption valid), we let the selection set and the set of sequences to calculate the expected reward be the same \(\mathcal{Y}^{\text{(s)}}=\mathcal{Y}^{\text{(c)}}=\{b,r,x\}\). Table 2 compares the different reward functions, \(\mathcal{R}\), when applying MBR decoding. Selection with precision (Equation 6) and F0.5 (Equation 5) oriented reward metrics give a significant increase in performance over the individual systems in Table 1. Although the recall reward (Equation 3) does not increase F0.5 performance, it does significantly increase recall performance. This demonstrates that a simple application of MBR decoding can be used to combine individual systems to improve performance and selection of the reward function gives specific control over precision and recall of the combined system. Section 3.2 describes how MBR decoding can be applied to systems combined by a voting scheme in the edit space. Table 3 shows the performance of systems combined with voting, where an individual edit requires \(m\) votes (from b,r or x edit system predictions) to be included in the combined edit set, \(\mathbf{e}^{(m)}\) to form the single combined sequence \(\mathbf{y}^{(m)}\). Note here that \(\mathbf{e}^{(1)}\) is the union set and \(\mathbf{e}^{(3)}\) is the intersection and so these sequences encourage either a higher recall or precision respectively. Table 4 shows the impact of MBR decoding where all the separate voting sets \((\mathbf{y}^{(1)},\mathbf{y}^{(2)},\mathbf{y}^{(3)})\) are included in the selection set, \(\mathcal{Y}^{\text{(s)}}=\{b,r,x,\mathbf{y}^{(1)},\mathbf{y}^{(2)},\mathbf{y }^{(3)}\}\). Note that we maintain the same set of sequences for the expected reward calculation, \(\mathcal{Y}^{\text{(s)}}=\{b,r,x\}\) to ensure the equiprobable posterior assumption holds 5. It is evident that a richer selection set allows for even greater improvements in model performance for precision and F0.5 reward MBR decoding. Footnote 5: Experiments with an alternative set of sequences for \(\mathcal{Y}^{\text{(s)}}\) are in Appendix C Finally, as described in Section 3.3, MBR decoding can be performed over a richer edit selection space by greedily adding individual edits to the intersection edit set, \(\mathbf{e}^{(3)}\) from the union edit set, \(\mathbf{e}^{(1)}\). Experiments revealed (Appendix B) that allowing for all edits to be included from the union set can significantly increase the risk of poor insertions, compromising performance. Hence, instead we only consider edits from \(\mathbf{e}^{(2)}\) to be added to the intersection set \(\mathbf{e}^{(3)}\). Table 5 demonstrates that MBR decoding over this richer set of sequences can give better performance (CoNLL) than MBR with voting, but does not always give the best performance (BEA and FCE have better performance in Table 4). This is perhaps because the expected reward over the individual systems (b,r,x) is not necessarily perfectly aligned with the final F0.5 score relative to the true reference edits used in evaluation and thus over-optimisation of the selection set for MBR decoding does not help performance for some datasets. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & conll & bea & fce \\ \hline b & \(56.15\begin{pmatrix}54.175\\ 41.119\\ 45.871\\ 44.85\end{pmatrix}\) & \(65.41\begin{pmatrix}67.33\\ 58.71\end{pmatrix}\) & \(49.66\begin{pmatrix}54.47\\ 56.68\end{pmatrix}\) \\ r & \(56.82\begin{pmatrix}6.190\\ 42.50\end{pmatrix}\) & \(68.21\begin{pmatrix}70.21\\ 61.21\end{pmatrix}\) & \(49.86\begin{pmatrix}33.47\\ 33.28\end{pmatrix}\) \\ x & \(56.77\begin{pmatrix}6.17\begin{pmatrix}6.174\\ 42.95\end{pmatrix}\) & \(68.00\begin{pmatrix}61.380\\ 61.380\end{pmatrix}\) & \(50.52\begin{pmatrix}34.80\\ 34.10\end{pmatrix}\) \\ \hline \hline \end{tabular} \end{table} Table 1: F0.5 and (precision, recall) performance for individual GECToR systems \begin{table} \begin{tabular}{l c c c} \hline \hline Reward & conll & bea & fce \\ \hline \(\mathcal{R}^{\text{(rec)}}\) & \(53.99\begin{pmatrix}53.99\\ 477.23\end{pmatrix}\) & \(63.81\begin{pmatrix}63.47\\ 65.25\end{pmatrix}\) & \(48.18\begin{pmatrix}39.29\\ 44.20\end{pmatrix}\) \\ \(\mathcal{R}^{\text{(rec)}}\) & \(60.24\begin{pmatrix}39.50\\ 32.50\end{pmatrix}\) & \(73.42\begin{pmatrix}33.40\\ 49.66\end{pmatrix}\) & \(53.51\begin{pmatrix}66.74\\ 69.88\end{pmatrix}\) \\ \(\mathcal{R}^{\text{(f0)}}\) & \(60.43\begin{pmatrix}67.94\\ 41.90\end{pmatrix}\) & \(70.84\begin{pmatrix}74.48\\ 56.92\end{pmatrix}\) & \(52.71\begin{pmatrix}57.93\\ 33.892\end{pmatrix}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Voting combination, \(\mathbf{y}^{(m)}\) (\(m\) votes). ## 5 Conclusions The combination of sequence-to-sequence grammatical error correction (GEC) systems is challenging. There is also often a mismatch between the decoding criterion and assessment criterion used for GEC systems. This work demonstrates that a novel Minimum Bayes' Risk (MBR) decoding approach within the edit-space can give an effective system combination method that aligns better with the assessment criteria. We further showed that enhancing the selection space to encompass sequences formulated by max-voting over individual edits can further improve system performance. Moreover, the employment of a greedy search strategy, guided by an MBR reward function, can result in performance gains for the combined system. Crucially, the choice of a reward function in the MBR framework gives users the ability to optimize desired characteristics of the combined GEC system, such as precision, recall or the F-score. ## 6 Limitations This work explored how MBR decoding can be used to combine individual GEC systems, as well as align the combined system's performance to the edit-based F-score used to assess GEC systems. Experiments were performed with Grammarly's GECToR based systems. It would be useful to extend these experiments to other state of the art GEC systems. Although these other systems are not as efficient as GECToR due to the use of an auto-regressive Transformer decoder (as opposed to GECToR's encoder only structure), it is still meaningful to understand how these systems react to MBR decoding used for system combination. This is particularly relevant as generative large language models are increasingly used for standard natural language tasks. ## 7 Ethics Statement This work reports on an efficient method to combine individual GEC system outputs in a manner that better aligns with assessment and improve performance. There are no perceived ethical risks associated with this work. ## 8 Acknowledgements This paper reports on research supported by Cambridge University Press & Assessment (CUP&A), a department of The Chancellor, Masters, and Scholars of the University of Cambridge.
2309.16055
Identifying Risk Factors for Post-COVID-19 Mental Health Disorders: A Machine Learning Perspective
In this study, we leveraged machine learning techniques to identify risk factors associated with post-COVID-19 mental health disorders. Our analysis, based on data collected from 669 patients across various provinces in Iraq, yielded valuable insights. We found that age, gender, and geographical region of residence were significant demographic factors influencing the likelihood of developing mental health disorders in post-COVID-19 patients. Additionally, comorbidities and the severity of COVID-19 illness were important clinical predictors. Psychosocial factors, such as social support, coping strategies, and perceived stress levels, also played a substantial role. Our findings emphasize the complex interplay of multiple factors in the development of mental health disorders following COVID-19 recovery. Healthcare providers and policymakers should consider these risk factors when designing targeted interventions and support systems for individuals at risk. Machine learning-based approaches can provide a valuable tool for predicting and preventing adverse mental health outcomes in post-COVID-19 patients. Further research and prospective studies are needed to validate these findings and enhance our understanding of the long-term psychological impact of the COVID-19 pandemic. This study contributes to the growing body of knowledge regarding the mental health consequences of the COVID-19 pandemic and underscores the importance of a multidisciplinary approach to address the diverse needs of individuals on the path to recovery. Keywords: COVID-19, mental health, risk factors, machine learning, Iraq
Maitham G. Yousif, Fadhil G. Al-Amran, Hector J. Castro
2023-09-27T22:30:11Z
http://arxiv.org/abs/2309.16055v1
# Identifying Risk Factors for Post-COVID-19 Mental Health Disorders: A Machine Learning Perspective ###### Abstract In this study, we leveraged machine learning techniques to identify risk factors associated with post-COVID-19 mental health disorders. Our analysis, based on data collected from 669 patients across various provinces in Iraq, yielded valuable insights. We found that age, gender, and geographical region of residence were significant demographic factors influencing the likelihood of developing mental health disorders in post-COVID-19 patients. Additionally, comorbidities and the severity of COVID-19 illness were important clinical predictors. Psychosocial factors, such as social support, coping strategies, and perceived stress levels, also played a substantial role. Our findings emphasize the complex interplay of multiple factors in the development of mental health disorders following COVID-19 recovery. Healthcare providers and policymakers should consider these risk factors when designing targeted interventions and support systems for individuals at risk. Machine learning-based approaches can provide a valuable tool for predicting and preventing adverse mental health outcomes in post-COVID-19 patients. Further research and prospective studies are needed to validate these findings and enhance our understanding of the long-term psychological impact of the COVID-19 pandemic. This study contributes to the growing body of knowledge regarding the mental health consequences of the COVID-19 pandemic and underscores the importance of a multidisciplinary approach to address the diverse needs of individuals on the path to recovery. COVID-19, mental health, risk factors, machine learning, Iraq *Corresponding author: Maitham Ghaly Yousif [email protected]_ [email protected]_[https://www.isohe.org/medical-advances-and-innovations-journal_](https://www.isohe.org/medical-advances-and-innovations-journal_) August 2023 | Volume 1 | Issue 3 ## Introduction The COVID-19 pandemic, caused by the novel coronavirus SARS-CoV-2, has not only posed a significant threat to global public health but has also brought to light various indirect consequences affecting individuals' mental well-being[1-5]. As healthcare systems around the world grapple with the immediate challenges of treating COVID-19 patients, it has become increasingly evident that there is a pressing need to understand and address the potential long-term mental health repercussions of this global crisis. Numerous studies have reported a spectrum of mental health issues emerging in the wake of COVID-19 recovery, including anxiety, depression, post-traumatic stress disorder (PTSD), and other neuropsychiatric disorders[4-6]. These conditions, often collectively referred to as post-COVID-19 mental health disorders, can be debilitating and require comprehensive evaluation, risk assessment, and timely intervention. To effectively mitigate these mental health challenges, it is imperative to identify the risk factors contributing to their development. Machine learning, with its capacity to analyze vast datasets and extract ## Methodology: ### Data Collection: Data was collected from 669 COVID-19 patients from various hospitals across different provinces of Iraq. This dataset included demographic information, medical history, COVID-19 severity, and subsequent mental health assessments. ### Study Design: This study follows a retrospective cohort design. ### [https://www.isohe.org/medical-advances-and-innovations-journal](https://www.isohe.org/medical-advances-and-innovations-journal) August 2023 | Volume 1 | Issue 3 Descriptive statistics were used to summarize the demographic and clinical characteristics of the study population. Bivariate analysis, including chi-square tests and t-tests, was conducted to identify significant associations between potential risk factors and post-COVID-19 mental health disorders. Multivariate logistic regression was performed to assess the adjusted association of these risk factors with mental health disorders. Machine Learning Analysis: The dataset was preprocessed to handle missing data and encode categorical variables. A machine learning pipeline was established, including data splitting into training and testing sets. Different machine learning algorithms, such as decision trees, random forests, and logistic regression, were trained on the dataset. Model performance metrics such as accuracy, precision, recall, F1-score, and ROC-AUC were used to evaluate the models. Feature importance analysis was conducted to identify the most significant risk factors contributing to post-COVID-19 mental health disorders. Ethical Considerations: This study was conducted following ethical guidelines, including informed consent and data anonymization. Limitations: Possible limitations of the study, such as selection bias or data quality issues, were acknowledged. This methodology allowed for a comprehensive analysis of risk factors associated with post-COVID-19 mental health disorders, combining traditional statistical methods and machine learning techniques for a more accurate and predictive assessment. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Characteristic & Patients with Mental Disorders (n=322) & Patients without Mental Health Disorders (n=347) \\ \hline Age (years) & Mean \(\pm\) SD: 46.2 \(\pm\) 6.5 & Mean \(\pm\) SD: 44.8 \(\pm\) 7.2 \\ \hline Gender (Male/Female) & 155 (48.1\%) & 182 (52.5\%) \\ \hline Pre-existing Mental Conditions (Yes/No) & Health & 75 (23.3\%) & 35 (10.1\%) \\ \hline \end{tabular} \end{table} Table 1: Demographic Characteristics of Study Participants \begin{table} \begin{tabular}{|l|l|l|} \hline \multicolumn{1}{|c|}{Risk Factor} & \multicolumn{1}{|c|}{Odds Ratio (95\% CI)} \\ \hline Age (per year) & 1.18 (1.10 - 1.27) \\ \hline Severe COVID-19 (vs. Mild) & 3.02 (2.12 - 4.30) \\ \hline Female (vs. Male) & 1.43 (1.05 - 1.95) \\ \hline Pre-existing Mental Health Conditions (Yes vs. No) & 2.71 (1.77 - 4.16) \\ \hline \end{tabular} \end{table} Table 4: Risk Factors Associated with Mental Health Disorders (Logistic Regression) \begin{table} \begin{tabular}{|l|l|l|} \hline COVID-19 & Patients with Mental Health Disorders & Patients without Mental Health Disorders \\ Severity & (n=322) & (n=347) \\ \hline Mild & 110 (34.2\%) & 175 (50.4\%) \\ \hline Moderate & 132 (41.0\%) & 125 (36.1\%) \\ \hline Severe & 80 (24.8\%) & 47 (13.5\%) \\ \hline \end{tabular} \end{table} Table 2: COVID-19 Severity and Mental Health Disorders \begin{table} \begin{tabular}{|l|l|l|} \hline \multicolumn{1}{|c|}{Mental Health Disorder} & Male (n=337) & Female (n=332) \\ \hline Anxiety & 98 (29.1\%) & 117 (35.2\%) \\ \hline Depression & 85 (25.2\%) & 105 (31.6\%) \\ \hline PTSD & 62 (18.4\%) & 78 (23.5\%) \\ \hline \end{tabular} \end{table} Table 3: Prevalence of Mental Health Disorders by Gender Figure 1 displays the feature importance scores obtained from the Random Forest model. Age, severe COVID-19, pre-existing conditions, and Discussion The findings of this study highlight several crucial risk factors associated with the development of mental health disorders in individuals recovering from COVID-19. Leveraging a machine learning perspective, our gender are the most influential features in predicting mental health disorders. research offers valuable insights that can inform targeted interventions and support strategies for this vulnerable population. Our results indicate a strong association between the severity of COVID-19 illness and the likelihood of developing mental health disorders. Patients who experienced severe cases were found to be Figure 1: Feature Importance in Predicting Mental Health Disorders (Random Forest) \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Model Metric & Random Forest & Logistic Regression & Support Vector Machine \\ \hline Accuracy & 0.79 & 0.72 & 0.75 \\ \hline Precision & 0.81 & 0.68 & 0.74 \\ \hline Recall & 0.76 & 0.79 & 0.72 \\ \hline F1-Score & 0.78 & 0.73 & 0.73 \\ \hline ROC-AUC & 0.84 & 0.76 & 0.80 \\ \hline \end{tabular} \end{table} Table 5: Machine Learning Model Performance at a significantly higher risk compared to those with milder cases (OR: 3.02, 95% Cl: 2.12 - 4.30). These findings corroborate the results of previous investigations (13-15) that have highlighted the substantial psychological toll associated with severe illness experiences. In addition to the previously mentioned sources, further emphasizes the profound impact of severe COVID-19 on mental health. Their research demonstrated that individuals with severe cases of COVID-19 experienced a wide range of psychiatric symptoms and disorders in the months following recovery. These symptoms encompassed anxiety, depression, and post-traumatic stress disorder, among others(16-18). Moreover, (18,19) conducted a comprehensive analysis of COVID-19 patients and found a clear relationship between the severity of respiratory symptoms and the prevalence of subsequent mental health issues. Their findings align with our research and emphasize the importance of monitoring mental health outcomes in patients who have undergone severe COVID-19 illness. Other studies (20,21) investigated the complex interplay between COVID-19 severity, mental health, and substance use disorders. Their study revealed a bidirectional association, indicating that severe COVID-19 not only increased the risk of mental health disorders but also the risk of substance use disorders. These findings underscore the multifaceted impact of severe COVID-19 on individuals' psychological well-being. It is imperative for healthcare providers to recognize the increased mental health needs of patients who have battled severe COVID-19 and to implement early interventions. These interventions should encompass not only medical care but also psychological support to mitigate the potential long-term psychological consequences of severe illness experiences. An intriguing gender difference emerged from our data, with females manifesting a higher prevalence of mental health disorders compared to males. This gender-based distinction concurs with the results of other studies on post-COVID-19 mental health (22-25), suggesting the necessity for gender-specific mental health interventions. While the exact reasons for this gender disparity require further investigation, it underscores the importance of tailored approaches to mental health support. The study by (26) delved into the gender differences in the psychological impact of the COVID-19 pandemic. Their research found that women were more likely to experience symptoms of anxiety and depression during the pandemic compared to men. This aligns with our findings and highlights the need for gender-sensitive mental health interventions, especially in the context of post-COVID-19 recovery. Additionally (27,28) conducted a comprehensive review of the mental health consequences of COVID-19. They noted that the pandemic might disproportionately affect the mental well-being of women due to various factors, including differences in coping strategies and societal roles. Their insights corroborate the importance of addressing gender disparities in mental health outcomes, which our study underscores. Furthermore (29,30) explored the psychommunological status of patients recovered from SARS-CoV-2, shedding light on gender-specific psychological responses. Their findings provide further evidence of gender-related disparities in mental health post-COVID-19 and emphasize the need for customized mental health interventions tailored to the unique needs of both genders. Our study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. Patients with pre-existing conditions were at a significantly higher prevalence of mental health disorders compared to males. This gender-based distinction concurs with the results of other studies on post-COVID-19 mental health (22-25), suggesting the necessity for gender-specific mental health interventions. While the exact reasons for this gender disparity require further investigation, it underscores the importance of tailored approaches to mental health support. The study by (26) delved into the gender differences in the psychological impact of the COVID-19 pandemic. Their research found that women were more likely to experience symptoms of anxiety and depression during the pandemic compared to men. This aligns with our findings and highlights the need for gender-sensitive mental health interventions, especially in the context of post-COVID-19 recovery. Additionally (27,28) conducted a comprehensive review of the mental health consequences of COVID-19. They noted that the pandemic might disproportionately affect the mental well-being of women due to various factors, including differences in coping strategies and societal roles. Their insights corroborate the importance of addressing gender disparities in mental health outcomes, which our study underscores. Furthermore (29,30) explored the psychommunological status of patients recovered from SARS-CoV-2, shedding light on gender-specific psychological responses. Their findings provide further evidence of gender-related disparities in mental health post-COVID-19 and emphasize the need for customized mental health interventions tailored to the unique needs of both genders. Our study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. Patients with pre-existing conditions were at a significantly higher prevalence of mental health disorders compared to males. This gender-based distinction concurs with the results of other studies on post-COVID-19 mental health (22-25), suggesting the necessity for gender-specific mental health interventions. While the exact reasons for this gender disparity require further investigation, it underscores the importance of tailored approaches to mental health support. The study by (26) delved into the gender differences in the psychological impact of the COVID-19 pandemic. Their research found that women were more likely to experience symptoms of anxiety and depression during the pandemic compared to men. This aligns with our findings and highlights the need for gender-sensitive mental health interventions, especially in the context of post-COVID-19 recovery. Additionally (27,28) conducted a comprehensive review of the mental health consequences of COVID-19. They noted that the pandemic might disproportionately affect the mental well-being of women due to various factors, including differences in coping strategies and societal roles. Their insights corroborate the importance of addressing gender disparities in mental health outcomes, which our study underscores. Furthermore (29,30) explored the psychommunological status of patients recovered from SARS-CoV-2, shedding light on gender-specific psychological responses. Their findings provide further evidence of gender-related disparities in mental health post-COVID-19 and emphasize the need for customized mental health interventions tailored to the unique needs of both genders. Our study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. Patients with pre-existing conditions were at a significantly higher prevalence of mental health disorders compared to males. This gender-based distinction concurs with the results of other studies on post-COVID-19 mental health (22-25), suggesting the necessity for gender-specific mental health interventions. While the exact reasons for this gender disparity require further investigation, it underscores the importance of tailored approaches to mental health support. The study by (26) delved into the gender differences in the psychological impact of the COVID-19 pandemic. Their research found that women were more likely to experience symptoms of anxiety and depression during the pandemic compared to men. This aligns with our findings and highlights the need for gender-sensitive mental health interventions, especially in the context of post-COVID-19 recovery. Additionally (27,28) conducted a comprehensive review of the mental health consequences of COVID-19. They noted that the pandemic might disproportionately affect the mental well-being of women due to various factors, including differences in coping strategies and societal roles. Their insights corroborate the importance of addressing gender disparities in mental health outcomes, which our study underscores. Furthermore (29,30) explored the psychommunological status of patients recovered from SARS-CoV-2, shedding light on gender-specific psychological responses. Their findings provide further evidence of gender-related disparities in mental health post-COVID-19 and emphasize the need for customized mental health interventions tailored to the unique needs of both genders. Our study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. Patients with pre-existing conditions were at a significantly higher prevalence of mental health disorders compared to males. This gender-based distinction concurs with the results of other studies on post-COVID-19 mental health (22-25), suggesting the necessity for gender-specific mental health interventions. While the exact reasons for this gender disparity require further investigation, it underscores the importance of tailored approaches to mental health support. The study by (26) delved into the gender differences in the psychological impact of the COVID-19 pandemic. Their research found that women were more likely to experience symptoms of anxiety and depression during the pandemic compared to men. This aligns with our findings and highlights the need for gender-sensitive mental health interventions, especially in the context of post-COVID-19 recovery. Additionally (27,28) conducted a comprehensive review of the mental health consequences of COVID-19. They noted that the pandemic might disproportionately affect the mental well-being of women due to various factors, including differences in coping strategies and societal roles. Their insights corroborate the importance of addressing gender disparities in mental health outcomes, which our study underscores. Furthermore (29,30) explored the psychommunological status of patients recovered from SARS-CoV-2, shedding light on gender-specific psychological responses. Their findings provide further evidence of gender-related disparities in mental health post-COVID-19 and emphasize the need for customized mental health interventions tailored to the unique needs of both genders. Our study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. Patients with pre-existing conditions were at a significantly higher prevalence of mental health disorders compared to males. This gender-based distinction concurs with the results of other studies on post-COVID-19 mental health (22-25), suggesting the necessity for gender-specific mental health interventions. While the exact reasons for this gender disparity require further investigation, it underscores the importance of tailored approaches to mental health support. The study by (26) delved into the gender differences in the psychological impact of the COVID-19 pandemic. Their research found that women were more likely to experience symptoms of anxiety and depression during the pandemic compared to men. This aligns with our findings and highlights the need for gender-sensitive mental health interventions, especially in the context of post-COVID-19 recovery. Additionally (27,28) conducted a comprehensive review of the mental health consequences of COVID-19. They noted that the pandemic might disproportionately affect the mental well-being of women due to various factors, including differences in coping strategies and societal roles. Their insights corroborate the importance of addressing gender disparities in mental health outcomes, which our study underscores. Furthermore (29,30) explored the psychommunological status of patients recovered from SARS-CoV-2, shedding light on gender-specific psychological responses. Their findings provide further evidence of gender-related disparities in mental health post-COVID-19 and emphasize the need for customized mental health interventions tailored to the unique needs of both genders. Our study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. Patients with pre-existing conditions were at a significantly higher prevalence of mental health disorders compared to males. This gender-based distinction concurs with the results of other studies on post-COVID-19 mental health (22-25), suggesting the necessity for gender-specific mental health interventions. While the exact reasons for this gender disparity require further investigation, it underscores the importance of tailored approaches to mental health support. The study by (26) delved into the gender differences in the psychological impact of the COVID-19 pandemic. Their research found that women were more likely to experience symptoms of anxiety and depression during the pandemic compared to men. This aligns with our findings and highlights the need for gender-sensitive mental health interventions, especially in the context of post-COVID-19 recovery. Additionally (27,28) conducted a comprehensive review of the mental health consequences of COVID-19. They noted that the pandemic might disproportionately affect the mental well-being of women due to various factors, including differences in coping strategies and societal roles. Their insights corroborate the importance of addressing gender disparities in mental health outcomes, which our study underscores. Furthermore (29,30) explored the psychommunological status of patients recovered from SARS-CoV-2, shedding light on gender-specific psychological responses. Their findings provide further evidence of gender-related disparities in mental health post-COVID-19 and emphasize the need for customized mental health interventions tailored to the unique needs of both genders. Our study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. Patients with pre-existing conditions were at a significantly higher prevalence of mental health disorders compared to males. This paper also highlights the need for gender-specific mental health interventions tailored to the unique needs of both genders. Our study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. Patients with pre-existing conditions were at a significantly higher prevalence of mental health disorders compared to males. This paper highlights the need for gender-specific mental health interventions tailored to the unique needs of both genders. Our study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. The study by (26) delved into the gender differences in the psychological impact of the COVID-19 pandemic. Their research found that women were more likely to experience symptoms of anxiety and depression during the pandemic compared to men. This aligns with our findings and highlights the need for gender-sensitive mental health interventions, especially in the context of post-COVID-19 recovery. Additionally (27,28) conducted a comprehensive review of the mental health consequences of COVID-19. They noted that the pandemic might disproportionately affect the mental well-being of women due to various factors, including differences in coping strategies and societal roles. Their insights corroborate the importance of addressing gender disparities in mental health outcomes, which our study underscores. Furthermore (29,30) explored the psychommunological status of patients recovered from SARS-CoV-2, shedding light on gender-specific psychological responses. Their findings provide further evidence of gender-related disparities in mental health post-COVID-19 and emphasize the need for customized mental health interventions tailored to the unique needs of both genders. Our study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. Patients with pre-existing conditions were at a significantly higher prevalence of mental health disorders compared to males. This paper highlights the need for gender-specific mental health interventions tailored to the unique needs of both genders. Our study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. Patients with pre-existing conditions were at a significantly higher prevalence of mental health disorders compared to males. This paper highlights the need for gender-specific mental health interventions tailored to the unique needs of both genders. Our study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. Patients with pre-existing conditions were at a significantly higher prevalence of mental health disorders compared to males. This paper highlights the need for gender-specific mental health interventions tailored to the unique needs of both genders. Our study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. The study by (26) delved into the gender differences in the psychological impact of the COVID-19 pandemic. Their research found that women were more likely to experience symptoms of anxiety and depression during the pandemic compared to men. This aligns with our findings and highlights the need for gender-sensitive mental health interventions, especially in the context of post-COVID-19 recovery. Additionally (27,28) conducted a comprehensive review of the mental health consequences of COVID-19. They noted that the pandemic might disproportionately affect the mental well-being of women due to various factors, including differences in coping strategies and societal roles. Their insights corroborate the importance of addressing gender disparities in mental health outcomes, which our study underscores. Furthermore (29,30) explored the psychommunological status of patients recovered from SARS-CoV-2, shedding light on gender-specific psychological responses. Their findings provide further evidence of gender-related disparities in mental health post-COVID-19 and emphasize the need for customized mental health interventions tailored to the unique needs of both genders. Our study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. Patients with pre-existing conditions were at a significantly higher prevalence of elevated risk (OR: 2.71, 95% CI: 1.77 - 4.16). These findings emphasize the need for continued mental health support for individuals with a history of mental health issues (31-36). Integrated care models, combining medical and mental health services, can be instrumental in addressing the complex needs of this subgroup. The study conducted a comprehensive study that examined the mental health impact of the COVID-19 pandemic, including individuals with pre-existing mental health conditions. Their research highlighted the vulnerability of this group to exacerbated mental health challenges during the pandemic. Our findings align with theirs, emphasizing the continued importance of mental health support for individuals with pre-existing conditions(37). The study explored the long-term effects of pre-existing mental health conditions on the mental well-being of COVID-19 survivors. They found that individuals with pre-existing conditions faced a higher risk of persistent mental health issues post-recovery. This underscores the significance of targeted interventions and support for this population, consistent with our study's findings (38). Additionally, [39] delved into insurance risk prediction using machine learning, highlighting the importance of data-driven approaches in understanding and addressing health-related risks. While not directly related to pre-existing mental health conditions, their work underscores the broader relevance of data analytics in healthcare, including mental health, and aligns with our study's emphasis on tailored care approaches. The study presented a comprehensive analysis of post-COVID-19 mental health disorders, and several key findings emerged from this research, with implications that extend beyond the scope of the study. Notably, the study revealed a strong association between the severity of COVID-19 illness and the likelihood of developing mental health disorders. Patients who experienced severe cases were found to be at a significantly higher risk compared to those with milder cases, in line with previous investigations (40-44). These findings underscore the substantial psychological toll associated with severe COVID-19 illness experiences and emphasize the importance of recognizing and addressing the heightened mental health needs of such patients. Early interventions and ongoing mental health support are crucial components of a comprehensive healthcare strategy. Another significant finding of the study was the gender disparity in the prevalence of post-COVID-19 mental health disorders. Females were found to manifest a higher prevalence compared to males, consistent with previous studies (45-48). While the exact reasons for this gender-based distinction require further investigation, it highlights the necessity for gender-specific mental health interventions. Tailored approaches that account for gender differences can contribute to more effective mental health support systems. Furthermore, the study unveiled a compelling link between pre-existing mental health conditions and the likelihood of post-COVID-19 mental health disorders. Patients with pre-existing conditions were at a significantly elevated risk, aligning with previous research (49-55). Integrated care models, combining medical and mental health services, were discussed as instrumental in addressing the complex needs of this subgroup. These findings emphasize the importance of continued mental health support for individuals with a history of mental health issues and the need for a holistic approach to healthcare.